-
-How to Download MiChat Lite and Why You Should Try It
-If you are looking for a messaging app that is not only fast and reliable, but also fun and social, you might want to check out MiChat Lite. MiChat Lite is a lightweight version of MiChat, a popular
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Garena Free Fire APK for Indian Server Explore the New Character Pet and Game Mode in the OB38 Update.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Garena Free Fire APK for Indian Server Explore the New Character Pet and Game Mode in the OB38 Update.md
deleted file mode 100644
index 6fcd9715833a47173843717c5821590ddda3899c..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Garena Free Fire APK for Indian Server Explore the New Character Pet and Game Mode in the OB38 Update.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-Garena Free Fire APK Download Indian Server: How to Play the Latest Version of the Popular Battle Royale Game
- Garena Free Fire is one of the most popular battle royale games in the world, especially in India. It has over 500 million downloads on Google Play Store and has won several awards, such as the Best Popular Vote Game by Google Play in 2019. The game offers fast-paced and thrilling gameplay, where you have to survive against 49 other players on a remote island. You can choose from a variety of characters, weapons, vehicles, and modes to customize your experience. You can also team up with your friends and communicate with them using voice chat.
- If you are a fan of Garena Free Fire, you might be wondering how to download and play the latest version of the game on the Indian server. In this article, we will tell you everything you need to know about the OB38 update, the Advance Server, and the benefits of playing on the Indian server. We will also give you some tips and tricks to improve your gameplay and win more matches. So, let's get started!
- What are the features of the OB38 update and how to download it?
- The OB38 update is the first major update of Garena Free Fire in 2023. It was released on January 6th and brought many new and exciting features to the game. Some of the highlights of the update are:
-
-A new character named Skyler, who is a CEO and superstar. He has a skill called Riptide Rhythm, which damages gloo walls and grants him HP recovery whenever he deploys one.
-A new pet named Dreki, who is a dragon-like creature. He has a skill called Dragon Glare, which can detect enemies using medkits within 10 meters.
-A new mode called Bomb Squad, where two teams have to plant or defuse bombs in different locations.
-A new weapon called MAG-7, which is a shotgun that can fire multiple pellets at once.
-A new training ground called Batou City, where you can practice your skills and interact with other players.
-Many other improvements and bug fixes.
-
- To download the OB38 update, you need to follow these steps:
-
-Open Google Play Store on your Android device and search for Garena Free Fire.
-Tap on the Update button and wait for the download to complete.
-Launch the game and enjoy the new features.
-
- Note: If you are an iOS user, you need to download the update from the App Store instead.
- What is the Advance Server and how to access it?
- The Advance Server is a separate client that allows you to test out the upcoming features of Garena Free Fire before they are officially released. The Advance Server is usually available for a week before each update. For example, the Advance Server for the OB33 update was open from March 10th to March 17th in 2022.
- By playing on the Advance Server, you can get a sneak peek of what's coming next in the game. You can also report any bugs or glitches that you encounter and help improve the game quality. Moreover, you can earn diamonds as rewards for reporting bugs or giving feedback.
- However, not everyone can access the Advance Server. You need to have an Activation Code that is issued by Garena to a limited number of players who register for it. The registration process is as follows:
-
-Visit the Advance Server website using a web browser.
-Sign up using your Facebook account or email address that is linked to your Garena Free Fire account.
-Fill in your personal details and submit the form.
-Wait for the confirmation email from Garena. If you are selected, you will receive an Activation Code in the email.
-Download the Advance Server APK file from the website and install it on your device (a command-line way to do this from a PC is sketched after these steps).
-Open the Advance Server app and enter your Activation Code to log in.
-Enjoy playing on the Advance Server and testing out the new features.
-
- Note: The Advance Server is only available for Android devices. You also need to have enough storage space on your device to install the APK file.
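- If you download the Advance Server APK on a PC first, you can also push it to your phone over USB instead of copying it by hand. Below is a minimal Python sketch of step 5 done that way; it assumes adb (the Android Debug Bridge) is installed and on your PATH, that USB debugging is enabled on the phone, and that the file name ff-advance-server.apk is a placeholder, not the real download name.
-```python
-import subprocess
-
-# Hypothetical file name -- use the APK you actually downloaded.
-apk_path = "ff-advance-server.apk"
-
-# "adb install -r" installs the package, replacing any older build.
-result = subprocess.run(
-    ["adb", "install", "-r", apk_path],
-    capture_output=True, text=True, check=False,
-)
-print(result.stdout or result.stderr)
-```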
- What are the benefits of playing Garena Free Fire on the Indian server?
- Playing Garena Free Fire on the Indian server has many benefits, such as:
-
-Better ping and latency: You can enjoy smoother and faster gameplay without any lag or delay. You can also avoid getting disconnected or kicked out of matches due to network issues.
-More events and rewards: You can participate in exclusive events and challenges that are tailored for the Indian audience. You can also earn more rewards, such as diamonds, coins, vouchers, skins, and characters.
-More friends and community: You can connect with more players who share your language and culture, join or create guilds, clans, or squads with them, and interact with them through chat, voice, or social media.
-More support and feedback: You can get more assistance and guidance from the Garena team and the moderators. You can also report any problems or suggestions that you have and get a quick response.
-
- To play Garena Free Fire on the Indian server, you need to download the game from the official website or from Google Play Store or App Store. You also need to have an Indian phone number to verify your account.
- What are some tips and tricks to improve your gameplay and win more matches?
- Garena Free Fire is a competitive and challenging game that requires skill, strategy, and luck. Here are some tips and tricks that can help you improve your gameplay and win more matches:
-
-Choose your landing spot wisely: You should land in a place that has good loot, cover, and escape routes. You should also avoid landing in hot zones where many players drop, unless you are confident in your fighting skills.
-Loot fast and smart: You should loot as quickly as possible and prioritize the items that you need, such as weapons, ammo, armor, and healing items. You should also avoid carrying too much unnecessary stuff that can slow you down or take up space in your backpack.
-Use your map and minimap: You should always check your map and minimap to see where the safe zone, the danger zone, the airdrops, and the enemies are. You should also use the ping system to communicate with your teammates and mark important locations or items.
-Move and hide: You should always keep moving and changing your position to avoid being sniped or ambushed by enemies. You should also use the terrain, buildings, vehicles, and gloo walls to hide yourself or create cover.
-Aim and shoot: You should aim for the head or chest of your enemies to deal more damage and kill them faster. You should also use the right weapon for the right situation, such as snipers for long-range, assault rifles for mid-range, and shotguns for close-range. You should also adjust your sensitivity settings to suit your preference.
-
- Conclusion
- Garena Free Fire is a fun and exciting game that you can play on your Android or iOS device. It offers a variety of features, modes, characters, weapons, vehicles, and pets that you can enjoy. It also has regular updates that bring new content and improvements to the game. If you want to play Garena Free Fire on the Indian server, you need to download it from the official website or from Google Play Store or App Store. You can also access the Advance Server to test out the upcoming features before they are released. By playing on the Indian server, you can get better ping, more events, more friends, and more support. You can also improve your gameplay and win more matches by following some tips and tricks that we shared with you in this article. We hope that you found this article helpful and informative. If you have any questions or feedback, please let us know in the comments section below. Thank you for reading!
- Frequently Asked Questions
- Here are some of the most common questions that people ask about Garena Free Fire:
-
-How do I redeem codes in Garena Free Fire?
-You can redeem codes in Garena Free Fire by following these steps:
-
-Visit the official redemption website using a web browser.
-Log in using your Facebook, Google, VK, or Huawei account that is linked to your Garena Free Fire account.
-Enter the 12-digit code that you received from Garena or other sources and click on Confirm.
-Check your in-game mail to claim your rewards.
-
-How do I get diamonds in Garena Free Fire?
-You can get diamonds in Garena Free Fire by following these methods:
-
-Purchase them using real money from the in-game store or from third-party websites.
-Earn them by completing surveys, tasks, or offers from various apps or websites.
-Win them by participating in events, tournaments, or giveaways from Garena or other sources.
-Report bugs or give feedback on the Advance Server and receive diamonds as rewards.
-
-How do I change my name in Garena Free Fire?
-You can change your name in Garena Free Fire by following these steps:
-
-Open the game and tap on your profile icon on the top left corner of the screen.
-Tap on the edit icon next to your name and enter your new name.
-Tap on the confirm icon and pay 390 diamonds to change your name.
-
-How do I get free characters in Garena Free Fire?
-You can get free characters in Garena Free Fire by following these methods:
-
-Collect character fragments from various sources, such as events, missions, crates, or lucky draws. You can use these fragments to unlock or upgrade your characters.
-Exchange gold for characters from the in-game store. You can earn gold by playing matches, completing daily tasks, or watching ads.
-Claim characters as rewards from special events, such as anniversary, festival, or collaboration events.
-
-How do I play Garena Free Fire on PC?
-You can play Garena Free Fire on PC by using an emulator, which is software that allows you to run Android apps on your computer. Some of the popular emulators are BlueStacks, LDPlayer, NoxPlayer, and Gameloop. You can download and install any of these emulators from their official websites. Then, you can download and install Garena Free Fire from Google Play Store or from the APK file. You can also customize your keyboard and mouse settings to suit your preference.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Music from Instagram Videos in 3 Easy Steps.md b/spaces/1phancelerku/anime-remove-background/Download Music from Instagram Videos in 3 Easy Steps.md
deleted file mode 100644
index 21913d99e0bf39253303965f81c0e746e193d365..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Music from Instagram Videos in 3 Easy Steps.md
+++ /dev/null
@@ -1,128 +0,0 @@
-
-How to Download Music from Instagram for Free
-Instagram is one of the most popular social media platforms in the world, with over 1 billion monthly active users. It is not only a place to share photos and videos, but also a source of amazing music. Whether you want to discover new artists, listen to your favorite songs, or create your own content, Instagram has something for everyone.
-But what if you want to download music from Instagram for free? Maybe you want to use it in your own videos, podcasts, or online advertising. Maybe you want to save it offline and listen to it anytime, anywhere. Or maybe you just love the music and want to keep it forever.
-Downloading music from Instagram can be tricky, though. Unlike other platforms like YouTube or Spotify, Instagram does not have a built-in download feature. You also need to be careful about the copyright and license of the music, as not all songs are free to use or share.
-Fortunately, there are some methods that can help you download music from Instagram for free. In this article, we will show you how to use online tools and mobile apps to get the music you want from Instagram. We will also give you some tips and tricks to make sure you download music legally and with high quality.
- Methods to Download Music from Instagram for Free
-Using Online Tools
-One of the easiest ways to download music from Instagram is to use online tools. We will focus on two of them: AceThinker, which can extract audio from an Instagram video, and Mixkit, which is a legal source for the kind of royalty-free tracks you hear in Instagram videos.
-Mixkit is a website that provides royalty-free stock music for videos. You can browse through different genres, moods, and themes, and download any track you like for free. Note that Mixkit does not extract audio from Instagram posts; it is a place to get a clean, licensed copy of a track instead. Here is how to use it:
-
-Go to Mixkit and click on Free Stock Music.
-Browse or search for the track you want.
-Click on Download Free Music next to the track.
-Save the MP3 file on your device.
-
-AceThinker is another website that can help you download music from Instagram. It is an online video downloader that supports various platforms, including YouTube, Facebook, Twitter, and Instagram. You can use AceThinker to download Instagram video to MP3 in a few steps:
-
-Go to AceThinker and click on Online Downloader.
-Copy the URL of the Instagram video you want to download.
-Paste it into the input box on AceThinker and click on Download.
-Select MP3 as the output format and click on Download again.
-Save the MP3 file on your device.
-
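-If you would rather work offline, the same audio extraction can be done locally with a few lines of Python. This is a minimal sketch, not the workflow of either tool above: it assumes you have already saved the Instagram video to disk (saved_clip.mp4 is a placeholder name), that ffmpeg is installed, and that the pydub package is available.
-```python
-from pydub import AudioSegment  # pip install pydub; needs ffmpeg on PATH
-
-# Hypothetical input file -- a video you have already saved locally.
-clip = AudioSegment.from_file("saved_clip.mp4", format="mp4")
-
-# Export just the audio track as a 192 kbps MP3.
-clip.export("track.mp3", format="mp3", bitrate="192k")
-```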
- Using Mobile Apps
-If you prefer to use your smartphone or tablet to download music from Instagram, you can also use some mobile apps that can do the job. We will introduce two of them: InShot and SnapTube.
-InShot is a video editor app that allows you to trim, crop, rotate, and add filters, stickers, music, and more to your videos. You can also use it to pull the music out of an Instagram video you have already saved. Here is how:
-
-Download and install InShot from the App Store or Google Play.
-Save the Instagram video to your device first, for example with one of the downloaders above.
-Open the app, tap on Video, and import the saved video from your gallery.
-Tap on Music and then tap on Extracted from Video.
-Select the music you want and edit it as you like.
-Tap on Save and export the audio.
-Save the audio file on your device.
-
-SnapTube is another app that can help you download music from Instagram. It is a video downloader app that supports various platforms, including YouTube, Facebook, Twitter, and Instagram. You can use SnapTube to download Instagram video and audio in a few steps:
-
-Download and install SnapTube from its official website (it is not distributed on Google Play).
-Open the app and tap on Instagram from the list of sources.
-Login to your Instagram account and find the video you want to download.
-Tap on the video and then tap on the Download icon at the bottom right corner.
-Select MP3 or M4A as the output format and tap on Download again.
-Save the audio file on your device.
-
- Tips and Tricks to Download Music from Instagram for Free
-Check the License and Attribution of the Music
-Before you download music from Instagram, you should always check the license and attribution of the music. Not all music is free to use or share, and some may require permission or credit from the original creators. You should respect the rights of the artists and avoid any legal issues.
-To find royalty free music for Instagram, you can use some websites that offer free stock music for videos, such as Mixkit, Bensound, or the YouTube Audio Library. These sites provide music under their own free licenses, Creative Commons licenses, or public-domain terms. Public-domain and CC0 tracks can be used without attribution or permission, but most Creative Commons licenses still require you to credit the author. Always read the terms and conditions of each website before downloading any music.
-To credit the original creators of the music, you can use some tools that can help you generate proper attribution, such as Creative Commons License Generator or Attribution Builder. These tools can help you create a text or HTML code that contains the name of the artist, the title of the song, the license type, and a link to the source. You can then paste this attribution in your video description, credits, or website.
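-If you credit tracks often, it can also help to build the attribution string programmatically. The helper below is a hypothetical sketch (the function and its fields are illustrative, not any tool's API) following the common TASL pattern: Title, Author, Source, License.
-```python
-def cc_attribution(title: str, artist: str, license_name: str, source_url: str) -> str:
-    """Build a plain-text credit line in the common TASL style."""
-    return f'"{title}" by {artist} is licensed under {license_name}. Source: {source_url}'
-
-# Example with made-up values:
-print(cc_attribution("Sunrise Drive", "A. Composer", "CC BY 4.0",
-                     "https://example.com/track/sunrise-drive"))
-```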
- Optimize the Quality and Format of the Music
-Another thing you should consider when downloading music from Instagram is the quality and format of the music. You want to make sure that the music sounds good and fits your needs. You also want to avoid any compatibility or storage issues.
-To choose the best bitrate and file type for the music, you should consider some factors, such as:
-
-The purpose of your project: If you are using the music for personal use, such as listening offline or making a slideshow, you can choose a lower bitrate (such as 128 kbps) and a compressed format (such as MP3) to save space. If you are using the music for professional use, such as a video, podcast, or online advertising, you can choose a higher bitrate (such as 320 kbps) or a lossless format (such as WAV) to ensure quality. (A quick size estimate is sketched after this list.)
-The device and platform you are using: If you are using a mobile device, such as a smartphone or tablet, you can choose a more compatible and common file type (such as MP3 or M4A) to avoid any playback issues. If you are using a desktop or laptop computer, you can choose a more versatile and lossless file type (such as WAV or FLAC) to preserve the original sound.
-
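-Bitrate translates directly into file size, which makes the trade-off easy to estimate: size in bytes is roughly bitrate (bits per second) times duration (seconds) divided by 8. A quick Python check of that arithmetic:
-```python
-def mp3_size_mb(bitrate_kbps: int, duration_s: float) -> float:
-    """Approximate MP3 size: kilobits/second * seconds -> megabytes."""
-    return bitrate_kbps * duration_s / 8 / 1000
-
-# A 3-minute track at two common bitrates:
-print(round(mp3_size_mb(128, 180), 1))  # ~2.9 MB
-print(round(mp3_size_mb(320, 180), 1))  # ~7.2 MB
-```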
-To convert and compress the music if needed, you can use some tools that can help you change the format and size of the music, such as Online Audio Converter or MP3 Compressor. These tools let you upload your music file and choose the output format and bitrate you want. You can then download the converted and compressed music file on your device.
- Conclusion
-Downloading music from Instagram for free can be easy and fun if you know how to do it. In this article, we have shown you how to use online tools and mobile apps to get the music you want from Instagram. We have also given you some tips and tricks to make sure you download music legally and with high quality.
-Now that you have learned how to download music from Instagram for free, you can enjoy listening to your favorite songs offline, use them in your own projects, or share them with your friends. Just remember to respect the rights of the artists and follow the terms and conditions of each tool and website you use.
-We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
- FAQs
-Can I download any music from Instagram?
-No, not all music from Instagram is downloadable. Some music may be protected by copyright or license, which means you need permission or credit from the original creators to use or share it. You should always check the license and attribution of the music before downloading it.
-Is it legal to download music from Instagram?
-It depends on the source and license of the music. Some music is royalty free or public domain, which means you can download it for free without attribution or permission. Some music is licensed under Creative Commons or other licenses, which means you can download it for free with attribution or permission. Some music is not free at all, which means you cannot download it without violating the law. You should always read the terms and conditions of each tool and website you use to download music from Instagram.
-How can I edit the music I downloaded from Instagram?
-You can use some tools to edit the music you downloaded from Instagram, such as Audacity or GarageBand. These tools can help you cut, trim, merge, fade, adjust, and add effects to your music. You can then save the edited music file on your device.
-How can I share the music I downloaded from Instagram?
-You can share the music you downloaded from Instagram with your friends or followers by using some platforms that allow you to upload and stream audio, such as SoundCloud or Spotify. These platforms can help you create playlists, discover new music, and connect with other listeners. You can also share the music on other social media platforms, such as Facebook, Twitter, or TikTok. Just make sure you credit the original creators of the music if required.
-Where can I find more free music for Instagram?
-You can find more free music for Instagram by using some websites that offer free stock music for videos, such as Mixkit, Bensound, or the YouTube Audio Library. These websites provide thousands of tracks that are royalty free or licensed under Creative Commons or other licenses. You can browse through different genres, moods, and themes, and download any track you like for free.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Naruto Ultimate Ninja Storm 4 for Android PPSSPP in Easy Steps and Start Your Ninja Adventure.md b/spaces/1phancelerku/anime-remove-background/Download Naruto Ultimate Ninja Storm 4 for Android PPSSPP in Easy Steps and Start Your Ninja Adventure.md
deleted file mode 100644
index 0f2e38e4ca6e546e838ad37b6459fc4318490c51..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Naruto Ultimate Ninja Storm 4 for Android PPSSPP in Easy Steps and Start Your Ninja Adventure.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-How to Download Naruto Ultimate Ninja Storm 4 for Android PPSSPP
-If you are a fan of Naruto, you might have heard of Naruto Ultimate Ninja Storm 4, the latest and final installment of the popular fighting game series based on the manga and anime. This game was released in 2016 for PlayStation 4, Xbox One, and PC, but did you know that you can also play it on your Android device using a PSP emulator? In this article, we will show you how to download and install Naruto Ultimate Ninja Storm 4 for Android PPSSPP, as well as how to optimize the game settings for the best performance.
- What is Naruto Ultimate Ninja Storm 4?
-Naruto Ultimate Ninja Storm 4 is a fighting game that follows the story of Naruto Shippuden, the second part of the Naruto series. The game features a large roster of characters from the anime, including Naruto, Sasuke, Sakura, Kakashi, Madara, Obito, and many more. You can choose your favorite character and fight against other players or the computer in various modes, such as story mode, adventure mode, survival mode, and online mode. The game also boasts impressive graphics, animations, and sound effects that bring the Naruto world to life.
- Features of the game
-
-Over 100 playable characters with different abilities and fighting styles
-Dynamic and destructible environments that change during battles
-New gameplay mechanics such as wall-running, elemental damage, and team combinations
-Multiple game modes that offer hours of fun and replay value
-Original voice acting from the anime cast
-
- Requirements for playing on Android
-To play Naruto Ultimate Ninja Storm 4 on your Android device, you will need a few things:
-
-An Android device with at least 2 GB of RAM and 4 GB of free storage space
-A PPSSPP emulator app that can run PSP games on your device
-An ISO file of Naruto Ultimate Ninja Storm 4 that contains the game data
-A file extractor app that can unzip compressed files
-
- How to download and install the game
-Now that you have everything you need, let's get started with downloading and installing Naruto Ultimate Ninja Storm 4 on your Android device. Follow these steps carefully:
- Step 1: Download the PPSSPP emulator
-The PPSSPP emulator is an app that allows you to play PSP games on your Android device. You can download it from the Google Play Store or from its official website. Once you have downloaded it, install it on your device and grant it the necessary permissions.
- Step 2: Download the ISO file of the game
-The ISO file of Naruto Ultimate Ninja Storm 4 is a compressed file that contains the game data. You can download it from various websites that offer PSP games for free. One such website is KODAIKA.com, where you can find a link to download the ISO file of Naruto Ultimate Ninja Storm 4. Make sure you have enough space on your device before downloading it.
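-If you grab the file on a PC first, a large download like this is best streamed to disk in chunks rather than loaded into memory at once. A minimal Python sketch using the requests library; the URL and file name are placeholders, not a real mirror:
-```python
-import requests  # pip install requests
-
-# Placeholder URL -- substitute the actual download link.
-url = "https://example.com/files/nuns4.7z"
-
-with requests.get(url, stream=True, timeout=30) as resp:
-    resp.raise_for_status()
-    with open("nuns4.7z", "wb") as f:
-        # Write in 1 MiB chunks so the whole file never sits in memory.
-        for chunk in resp.iter_content(chunk_size=1 << 20):
-            f.write(chunk)
-```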
- Step 3: Extract the ISO file
-After downloading the ISO file of Naruto Ultimate Ninja Storm 4, you will need to extract it using a file extractor app. You can use any app that can unzip compressed files, such as ZArchiver or RAR. Once you have installed a file extractor app, open it and locate the ISO file of Naruto Ultimate Ninja Storm 4. Tap on the file and select "Extract here" or "Extract to" depending on your preference. Wait for the extraction process to finish. You should see a new folder with the same name as the ISO file, containing another file with the .iso extension. This is the file you will need to load the game on the PPSSPP emulator.
- Step 4: Launch the PPSSPP emulator and load the game
-Now that you have extracted the ISO file of Naruto Ultimate Ninja Storm 4, you are ready to play the game on your Android device. To do so, open the PPSSPP emulator app and tap on "Games". Navigate to the folder where you extracted the ISO file and tap on it. The game should start loading and you should see the title screen of Naruto Ultimate Ninja Storm 4. Enjoy!
- How to optimize the game settings
-Naruto Ultimate Ninja Storm 4 is a high-end game that requires a lot of resources to run smoothly. Depending on your device's specifications, you may experience some lag or glitches while playing the game. To improve the game performance, you can tweak some settings on the PPSSPP emulator. Here are some tips on how to optimize the game settings:
- Graphics settings
-To access the graphics settings, tap on the menu icon on the top right corner of the PPSSPP emulator and select "Settings". Then, tap on "Graphics". Here are some options you can adjust:
-
-Rendering mode: Choose "Buffered rendering" for better graphics quality, or "Skip buffer effects" for faster speed.
-Frame skipping: Choose "Off" for smooth gameplay, or "1" or "2" for better performance.
-Resolution: Choose "1x PSP" for faster speed, or "2x PSP" or higher for better graphics quality.
-Texture filtering: Choose "Nearest" for faster speed, or "Linear" or higher for better graphics quality.
-Anisotropic filtering: Choose "Off" for faster speed, or "2x" or higher for better graphics quality.
-
- Audio settings
-To access the audio settings, tap on the menu icon on the top right corner of the PPSSPP emulator and select "Settings". Then, tap on "Audio". Here are some options you can adjust:
-
-Enable sound: Choose "On" to hear the game sound effects and music, or "Off" to mute them.
-Audio latency: Choose "Low" for better sound quality, or "Medium" or "High" for better performance.
-
- Control settings
-To access the control settings, tap on the menu icon on the top right corner of the PPSSPP emulator and select "Settings". Then, tap on "Controls". Here are some options you can adjust:
-
-On-screen touch controls: Choose "On" to use the virtual buttons on your screen, or "Off" to use an external controller.
-Control mapping: Choose "Edit touch control layout" to customize the position and size of the virtual buttons, or "Control mapping" to assign different functions to different buttons.
-Haptic feedback: Choose "On" to feel vibrations when you press a button, or "Off" to disable them.
-
- Conclusion
-Naruto Ultimate Ninja Storm 4 is an amazing game that lets you experience the epic battles and adventures of Naruto and his friends. You can play it on your Android device using a PPSSPP emulator and an ISO file of the game. All you need to do is follow these steps:
-
-Download and install the PPSSPP emulator app from the Google Play Store or its official website.
-Download and extract the ISO file of Naruto Ultimate Ninja Storm 4 from KODAIKA.com or any other website that offers PSP games for free.
-Launch the PPSSPP emulator app and load the ISO file of Naruto Ultimate Ninja Storm 4 from your device's storage.
-Optimize the game settings according to your device's specifications and preferences.
-
-We hope this article helped you learn how to download and install Naruto Ultimate Ninja Storm 4 for Android PPSSPP. If you have any questions or feedback, feel free to leave a comment below. Have fun playing!
- FAQs
-
-Q: Is Naruto Ultimate Ninja Storm 4 free?
-A: The original game is not free, but you can download it for free from various websites that offer PSP games for free. However, we do not endorse or support piracy and we recommend that you buy the game legally if you can.
-Q: How can I play Naruto Ultimate Ninja Storm 4 online with other players?
-A: To play Naruto Ultimate Ninja Storm 4 online with other players, you will need to use a VPN app that can connect you to a server where other players are playing. You will also need to enable the "WLAN" option in the PPSSPP emulator settings and create or join a room with other players. However, this method is not very reliable and may cause lag or connection issues.
-Q: How can I save my progress in Naruto Ultimate Ninja Storm 4?
-A: To save your progress in Naruto Ultimate Ninja Storm 4, you will need to use the in-game save feature that allows you to create a save file on your device's storage. You can also use the "Save state" and "Load state" options in the PPSSPP emulator menu to save and load your game at any point.
-Q: How can I update Naruto Ultimate Ninja Storm 4 to the latest version?
-A: To update Naruto Ultimate Ninja Storm 4 to the latest version, you will need to download and install the latest ISO file of the game from the same website where you downloaded the original ISO file. You will also need to delete or overwrite the old ISO file on your device's storage.
-Q: How can I fix Naruto Ultimate Ninja Storm 4 crashing or freezing on my device?
-A: To fix Naruto Ultimate Ninja Storm 4 crashing or freezing on your device, you can try these solutions:
-
-Clear the cache and data of the PPSSPP emulator app and restart it.
-Lower the graphics settings of the game and the PPSSPP emulator.
-Close any background apps that may be consuming your device's resources.
-Restart your device and try again.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download World Soccer Champs 4.5.3.3 Mod APK with Unlimited Money and Enjoy the Game.md b/spaces/1phancelerku/anime-remove-background/Download World Soccer Champs 4.5.3.3 Mod APK with Unlimited Money and Enjoy the Game.md
deleted file mode 100644
index 5901fd5e9dfbcfa4b51b14b9f93e178015a5caac..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download World Soccer Champs 4.5.3.3 Mod APK with Unlimited Money and Enjoy the Game.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
-World Soccer Champs Mod APK 4.5.3.3: A Fun and Exciting Soccer Game for Android
- Introduction
- If you are a fan of soccer games, you might have heard of World Soccer Champs, a popular and addictive game for Android devices. World Soccer Champs is a game that lets you manage your own soccer team and compete in various tournaments and leagues around the world. You can also customize your players, tactics, and formations to suit your style and strategy.
- But what if you want to enjoy the game without any limitations or restrictions? What if you want to have unlimited money to buy the best players and upgrade your team? What if you want to play the game without any annoying ads interrupting your gameplay? Well, there is a way to do that, and that is by downloading World Soccer Champs mod APK 4.5.3.3.
- What is World Soccer Champs?
- World Soccer Champs is a soccer game developed by Monkey I-Brow Studios, a studio that specializes in creating fun and engaging sports games for mobile devices. World Soccer Champs was released in 2020 and has since gained millions of downloads and positive reviews from players and critics alike.
- World Soccer Champs is a game that combines the elements of management, simulation, and arcade in one package. You can choose from over 100 national teams and clubs to represent, and play in various competitions such as the World Cup, the Champions League, the Copa America, and more. You can also scout and sign new players, train your squad, and adjust your tactics before each match.
- World Soccer Champs also features simple and intuitive controls that make it easy to play for anyone. You can swipe, tap, and drag on the screen to pass, shoot, dribble, tackle, and perform other actions on the pitch. You can also switch between different camera angles and zoom levels to get the best view of the action.
- What is a mod APK?
- A mod APK is a modified version of an original APK file, which is the format used to install applications on Android devices. A mod APK usually has some changes or additions that are not present in the original version, such as unlocked features, unlimited resources, removed ads, or enhanced performance.
- A mod APK can be created by anyone who has the skills and tools to modify an original APK file, or by using a modding software that automates the process. However, not all mod APKs are safe or reliable, as some may contain viruses, malware, or other harmful components that can damage your device or compromise your privacy.
- Therefore, it is important to download mod APKs only from trusted sources that have been verified by other users or experts. You should also scan any mod APK file with an antivirus or anti-malware program before installing it on your device.
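- One practical check before installing: when a site publishes a checksum for its APK, compare it against the file you actually received. A minimal Python sketch; the file name and expected hash below are placeholders, not real values for any particular mod.
-```python
-import hashlib
-
-def sha256_of(path: str) -> str:
-    """Hash a file in 64 KiB chunks so large APKs need not fit in RAM."""
-    h = hashlib.sha256()
-    with open(path, "rb") as f:
-        for block in iter(lambda: f.read(65536), b""):
-            h.update(block)
-    return h.hexdigest()
-
-# Placeholder values for illustration only.
-expected = "0" * 64
-print(sha256_of("world-soccer-champs-mod.apk") == expected)
-```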
- Why download World Soccer Champs mod APK 4.5.3.3?
- World Soccer Champs mod APK 4.5.3.3 is one of the best mod APKs for World Soccer Champs that you can find online. It has several advantages over the original version of the game, such as:
- Features of World Soccer Champs mod APK 4.5.3.3
- Unlimited money
- One of the most appealing features of World Soccer Champs mod APK 4.5.3.3 is that it gives you unlimited money to spend on the game. Money is the main currency in World Soccer Champs, and you can use it to buy new players, upgrade your stadium, hire coaches, and more. However, money is not easy to earn in the game, as you have to win matches, complete achievements, and watch ads to get some.
- With World Soccer Champs mod APK 4.5.3.3, you don't have to worry about running out of money or spending real money to buy more. You can have as much money as you want, and buy anything you need to improve your team and dominate the soccer world.
- No ads
- Another great feature of World Soccer Champs mod APK 4.5.3.3 is that it removes all the ads from the game. Ads are a common source of annoyance and frustration for many players, as they can interrupt your gameplay, slow down your device, and consume your data. Ads can also ruin your immersion and enjoyment of the game, especially when they pop up at the worst possible moments.
- With World Soccer Champs mod APK 4.5.3.3, you can play the game without any ads bothering you or wasting your time. You can focus on the game and have a smooth and satisfying experience.
- Simple and intuitive controls
- World Soccer Champs mod APK 4.5.3.3 also retains the simple and intuitive controls that make the game easy and fun to play for anyone. You can swipe, tap, and drag on the screen to perform various actions on the pitch, such as passing, shooting, dribbling, tackling, and more. You can also switch between different camera angles and zoom levels to get the best view of the action.
- The controls are responsive and accurate, and you can adjust them to your preference in the settings menu. You can also enable or disable the auto-play feature, which lets the game control your players for you while you watch.
- Realistic graphics and animations
- World Soccer Champs mod APK 4.5.3.3 also boasts realistic graphics and animations that make the game look amazing on any device. The game has high-quality graphics that show the details of the players, stadiums, crowds, and weather effects. The game also has smooth and fluid animations that capture the movements and expressions of the players, as well as the physics and dynamics of the ball.
- The game also has realistic sound effects and commentary that add to the atmosphere and excitement of the game. You can hear the cheers and chants of the fans, the whistles of the referees, and the voices of the commentators who narrate the action.
- Multiple game modes and challenges
- World Soccer Champs mod APK 4.5.3.3 also offers multiple game modes and challenges that keep you entertained and challenged for hours. You can choose from over 100 national teams and clubs to represent, and play in various competitions such as the World Cup, the Champions League, the Copa America, and more. You can also play friendly matches against other teams or against your friends online.
- The game also has various challenges that test your skills and knowledge of soccer. You can try to score goals from different angles and distances, complete trivia questions about soccer history and facts, or beat other players' records and achievements.
- How to download and install World Soccer Champs mod APK 4.5.3.3
- If you are interested in downloading and installing World Soccer Champs mod APK 4.5.3.3 on your Android device, you can follow these simple steps:
- Step 1: Download the mod APK file from a trusted source
- The first step is to download the mod APK file for World Soccer Champs 4.5.3.3 from a trusted source that has been verified by other users or experts, and save it to your device's storage.
- Step 2: Enable unknown sources on your device settings
- The second step is to enable unknown sources on your device settings, which allows you to install applications from sources other than Google Play Store. To do this, go to your device settings > security > unknown sources > enable.
- Step 3: Install the mod APK file and launch the game
- The third step is to install the mod APK file on your device by tapping on it and following the instructions on the screen. Once installed, launch the game from your app drawer or home screen, and enjoy.
- Conclusion
- World Soccer Champs mod APK 4.5.3.3 is a fun and exciting soccer game for Android devices that lets you manage your own soccer team and compete in various tournaments and leagues around the world. It also gives you unlimited money, no ads, simple and intuitive controls, realistic graphics and animations, and multiple game modes and challenges to enjoy.
- If you want to download and install World Soccer Champs mod APK 4.5.3.3 on your device, you can follow the steps mentioned above and get the game in minutes. You can also share the game with your friends and challenge them online.
- World Soccer Champs mod APK 4.5.3.3 is a game that will keep you entertained and challenged for hours, whether you are a casual or hardcore soccer fan. So what are you waiting for? Download World Soccer Champs mod APK 4.5.3.3 now and start playing!
- FAQs
- Here are some of the frequently asked questions about World Soccer Champs mod APK 4.5.3.3:
-
-Q: Is World Soccer Champs mod APK 4.5.3.3 safe to download and install?
-A: Yes, as long as you get it from a trusted source that has been verified by other users or experts. You should also scan the mod APK file with an antivirus or anti-malware program before installing it on your device.
-Q: Do I need to root my device to use World Soccer Champs mod APK 4.5.3.3?
-A: No, you do not need to root your device, as the mod does not require any special permissions or access to your device's system files.
-Q: Will World Soccer Champs mod APK 4.5.3.3 work on any Android device?
-A: Yes, it will work on any Android device that meets the minimum requirements of the game: Android version 4.1 or higher, 1 GB of RAM, and 100 MB of free storage space.
-Q: Can I play World Soccer Champs mod APK 4.5.3.3 offline?
-A: Yes, you can play it offline, as it does not require an internet connection to run the game or access its features.
-Q: Can I update World Soccer Champs mod APK 4.5.3.3 to the latest version?
-A: No, you cannot update it through official channels, as it is a modified version of the original game that may not be compatible with official updates from the developer.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Wii and GameCube Classics on PC Windows 7 with Dolphin Emulator.md b/spaces/1phancelerku/anime-remove-background/Enjoy Wii and GameCube Classics on PC Windows 7 with Dolphin Emulator.md
deleted file mode 100644
index 3c52ef8faf3b1fc21db5980d67bfc9645cdbe04b..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy Wii and GameCube Classics on PC Windows 7 with Dolphin Emulator.md
+++ /dev/null
@@ -1,153 +0,0 @@
-
-How to Download and Install Dolphin Emulator on PC Windows 7
- Do you want to play your favorite GameCube and Wii games on your PC Windows 7? If so, you might be interested in trying out the dolphin emulator, free and open-source software that allows you to do just that. Dolphin emulator is an amazing program that can run games for these two consoles in full HD (1080p) with several enhancements, such as compatibility with all PC controllers, turbo speed, networked multiplayer, and even more. In this article, we will show you how to download and install dolphin emulator on your PC Windows 7 in a few easy steps.
- Downloading Dolphin Emulator
- The first thing you need to do is to download the latest beta version of the dolphin emulator from the official website. The beta versions are updated every month and have more features and bug fixes than the stable versions. You can find them here: https://dolphin-emu.org/download/
- On this page, you will see a list of different versions for different platforms. You need to choose the one that matches your system architecture (64-bit or 32-bit). To check which one you have, right-click on the Computer icon on your desktop (or in the Start menu) and select Properties. You will see a window that shows your system information. Look for System type and see whether it says 64-bit or 32-bit.
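- If you have Python installed, you can also query the machine architecture programmatically with a one-line check using the standard library (on 64-bit Windows this prints AMD64, on 32-bit systems x86):
-```python
-import platform
-
-# 'AMD64' means a 64-bit Windows; 'x86' means 32-bit.
-print(platform.machine())
-```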
- Once you have chosen the right version for your system, click on it and save it to your hard disk drive. The file will be in a compressed format (.7z), so you will need a program like WinRAR or 7-Zip to extract it.
After you have downloaded the file, you need to extract it to a new folder. You can do this by right-clicking on the file and selecting Extract Here or Extract to dolphin-x64 (or dolphin-x86). You will see a new folder with the same name as the file. This folder contains all the files and folders that you need to run the dolphin emulator.
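- The extraction step can also be scripted. A minimal Python sketch using the third-party py7zr package; the archive name is a placeholder for whichever build you downloaded:
-```python
-import py7zr  # pip install py7zr
-
-# Placeholder archive name -- use the build you actually downloaded.
-with py7zr.SevenZipFile("dolphin-x64.7z", mode="r") as archive:
-    archive.extractall(path="dolphin-x64")
-```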
- Installing Dolphin Emulator
- Now that you have extracted the dolphin emulator files, you are ready to run it on your PC Windows 7. There is no setup wizard: you simply launch the dolphin emulator executable file (if Windows asks for confirmation, select Open or Run). You can find this file in the folder that you extracted earlier. It will have a dolphin icon and a name like Dolphin.exe or Dolphin-x64.exe (or Dolphin-x86.exe).
- When you run the file, you will see a window that shows the dolphin emulator interface, where you can access all the settings and features of the emulator. Before you start playing games, you need to add them to the dolphin library. To do this, click on the Open button on the toolbar or press Ctrl+O on your keyboard; a file browser window will open that lets you navigate your hard disk drive and find your game files.
- The game files that are compatible with the dolphin emulator are usually in ISO or WBFS format. These are disc image files that contain all the data of the original game discs. You can also use other formats, such as CISO, GCZ, or NKit, but they might not work as well as ISO or WBFS. Once you have found your game files, you can select them and click on Open. They will be added to the dolphin library and displayed on the main window.
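- If your games are spread across several folders, a short script can list everything in a Dolphin-compatible format before you add them. A minimal sketch using only the standard library; the root folder is a placeholder for your own game directory:
-```python
-from pathlib import Path
-
-# Placeholder root folder -- point this at your own game directory.
-root = Path("C:/Games")
-
-extensions = {".iso", ".wbfs", ".ciso", ".gcz"}
-for path in sorted(root.rglob("*")):
-    if path.suffix.lower() in extensions:
-        print(path)
-```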
- After you have added your games, you can configure some general settings of the emulator, such as language, theme, interface, etc. You can do this by clicking on the Config button on the toolbar or by pressing Ctrl+C on your keyboard. You will see a window that shows several tabs with different options. You can explore these tabs and change the settings according to your preferences. For example, you can change the language of the emulator by going to the Interface tab and selecting your desired language from the drop-down menu.
- Configuring Dolphin Emulator
- One of the most important aspects of using the dolphin emulator is configuring it properly for your system and preferences. This will ensure that you get the best performance and quality while playing games. There are three main settings that you need to configure: graphics, controller, and audio.
- To access the graphics settings, you need to click on the Graphics button on the toolbar or press Ctrl+G on your keyboard. You will see a window that shows four tabs: General, Enhancements, Hacks, and Advanced. These tabs allow you to choose the best video backend, resolution, enhancements, etc. for your system and preferences.
- The video backend is the software that renders the graphics of the games. There are four options available: Direct3D 11, Direct3D 12, OpenGL, and Vulkan. Each one has its own advantages and disadvantages, depending on your hardware and drivers. Generally speaking, Direct3D 11 is recommended for most Windows users, as it offers good compatibility and performance. However, you can try other options and see which one works best for you.
- The resolution is the size of the output image that is displayed on your screen. The higher the resolution, the sharper and clearer the image will be. However, higher resolutions also require more processing power and might cause slowdowns or glitches. The default resolution is Auto (Window Size), which means that it will match the size of your emulator window. You can change this by selecting a different option from the drop-down menu or by entering a custom value.
- The enhancements are optional features that improve the graphics quality of the games beyond their original capabilities. Some of these features include anti-aliasing, anisotropic filtering, texture scaling, stereoscopic 3D, etc. These features can make the games look more realistic and immersive, but they also require more processing power and might cause slowdowns or glitches. You can enable or disable these features by checking or unchecking their boxes or by adjusting their sliders.
- The hacks are optional features that improve the performance and compatibility of the games by bypassing some of the limitations or problems of the original hardware. Some of these features include skip EFB access, ignore format changes, store EFB copies to texture only, etc. These features can make the games run faster and smoother, but they might also cause graphical errors or glitches. You can enable or disable these features by checking or unchecking their boxes.
- The advanced tab contains some additional options that are not recommended for most users, as they might cause instability or crashes. These options include backend multithreading, shader compilation mode, asynchronous shader compilation, etc. You can leave these options at their default values unless you know what you are doing.
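- If you prefer to change these options outside the emulator window, Dolphin also stores them in plain INI files in its user folder (GFX.ini for the graphics settings). The snippet below is a minimal sketch using Python's configparser; the file location and the InternalResolution key are assumptions that can vary between Dolphin versions, so check your own GFX.ini before relying on them.
-```python
-import configparser
-from pathlib import Path
-
-# assumed location of the graphics config on Windows; adjust for your install
-gfx_ini = Path.home() / 'Documents' / 'Dolphin Emulator' / 'Config' / 'GFX.ini'
-
-config = configparser.ConfigParser()
-config.optionxform = str  # keep the keys' original capitalization
-config.read(gfx_ini)
-if not config.has_section('Settings'):
-    config.add_section('Settings')
-config['Settings']['InternalResolution'] = '2'  # assumed key: render at 2x native
-with open(gfx_ini, 'w') as f:
-    config.write(f)
-```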
- To access the controller settings, you need to click on the Controllers button on the toolbar or press Ctrl+P on your keyboard. You will see a window that shows two tabs: GameCube and Wii. These tabs allow you to configure your input devices for each console.
- The dolphin emulator supports various types of input devices, such as keyboard, mouse, gamepad, Wiimote, etc. You can choose which device you want to use for each controller port by selecting an option from the drop-down menu. For example, you can choose Keyboard/Mouse for Port 1 if you want to use your keyboard and mouse as a GameCube controller.
- After you have chosen your device, you need to configure the buttons and axes for each input. You can do this by clicking on the Configure button next to the device option. You will see a window that shows a diagram of the controller and a list of inputs. You can assign an input to a button or axis by clicking on it and then pressing the corresponding key or moving the corresponding stick on your device. You can also clear an input by right-clicking on it and selecting Clear.
- You can also adjust the sensitivity and deadzone of each axis by moving the sliders below them. The sensitivity determines how fast the axis responds to your input, while the deadzone determines how much movement is required to activate the axis. You can test your configuration by looking at the preview window on the right side of the window. It will show you how your device inputs are mapped to the controller inputs.
- To access the audio settings, you need to click on the Audio button on the toolbar or press Ctrl+A on your keyboard. You will see a window that shows two tabs: DSP and Audio Backend. These tabs allow you to adjust the volume, backend, latency, etc. of the audio output.
- The volume slider allows you to increase or decrease the sound level of the emulator. The default value is 100%, but you can change it according to your preferences.
- The audio backend is the software that handles the audio output of the emulator. There are four options available: XAudio2, Cubeb, OpenAL, and Null. Each one has its own advantages and disadvantages, depending on your hardware and drivers. Generally speaking, XAudio2 is recommended for most Windows users, as it offers good compatibility and performance. However, you can try other options and see which one works best for you.
- The latency slider allows you to adjust the delay between the audio input and output of the emulator. The lower the latency, the more responsive and synchronized the sound will be. However, lower latency also requires more processing power and might cause stuttering or crackling. The default value is 2 ms, but you can change it according to your preferences.
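- As with the graphics options, the audio settings are stored in Dolphin.ini in the same config folder. This is again a hedged sketch: the [DSP] section and its Backend and Volume keys match common Dolphin builds, but verify them against your own file.
-```python
-import configparser
-from pathlib import Path
-
-# assumed location of the main config on Windows; adjust for your install
-dolphin_ini = Path.home() / 'Documents' / 'Dolphin Emulator' / 'Config' / 'Dolphin.ini'
-
-config = configparser.ConfigParser()
-config.optionxform = str
-config.read(dolphin_ini)
-if not config.has_section('DSP'):
-    config.add_section('DSP')
-config['DSP']['Backend'] = 'XAudio2'  # assumed key: the backend recommended above
-config['DSP']['Volume'] = '100'       # assumed key: percentage, as in the GUI slider
-with open(dolphin_ini, 'w') as f:
-    config.write(f)
-```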
- Playing Games with Dolphin Emulator
- Now that you have configured the dolphin emulator to your liking, you are ready to play games with it. Playing games with the dolphin emulator is very easy and fun. All you need to do is to launch a game from the dolphin library and enjoy it in full HD with various enhancements.
- To launch a game, you need to double-click on it in the dolphin library or right-click on it and select Play. The game will start in a new window and you will see the dolphin logo and some information on the top left corner of the screen. You can also see the FPS (frames per second) and the VPS (video processor speed) on the top right corner of the screen. These numbers indicate how well the game is running on your system.
- While playing a game, you can access some additional features of the emulator by pressing some keys on your keyboard. For example, you can save and load states, use cheats, take screenshots, record videos, etc. Here are some of the most useful keys and their functions:
-
-| Key | Function |
-| --- | -------- |
-| F1 | Save state to slot 1 |
-| F2 | Cycle through save state slots (1-8) |
-| F3 | Load state from current slot |
-| F4 | Toggle frame limit (on/off) |
-| F5 | Toggle fullscreen mode (on/off) |
-| F6 | Decrease frame limit by 5% |
-| F7 | Increase frame limit by 5% |
-| F8 | Take screenshot and save it to User/Screenshots folder |
-| F9 | Toggle render to main window (on/off) |
-| F10 | Start/stop video recording and save it to User/Dump/Frames folder |
-| F11 | Toggle audio mute (on/off) |
-| F12 | Toggle IR pointer (on/off) for Wiimote emulation |
-
- Conclusion
- In this article, we have shown you how to download and install dolphin emulator on your PC Windows 7. We have also explained how to configure the graphics, controller, and audio settings of the emulator. Finally, we have given you some tips on how to play games with the emulator and access some of its features. We hope that you have found this article helpful and informative.
- Dolphin emulator is a great software that allows you to play GameCube and Wii games on your PC or Android device. It offers many advantages, such as compatibility, performance, graphics, controllers, and more. It also has a large and active community of users and developers who are constantly improving and updating it. If you are a fan of these consoles and their games, you should definitely give dolphin emulator a try. You will be amazed by how well it works and how much fun it is.
- If you have any questions or comments about this article or the dolphin emulator, feel free to leave them below. We would love to hear from you and help you out. Thank you for reading and happy gaming!
- FAQs
- Here are some of the frequently asked questions about dolphin emulator:
-
- Is dolphin emulator legal?
- Dolphin emulator is legal as long as you own the original game discs and consoles that you are emulating. You can legally dump your own game discs and use them with the emulator. However, downloading or sharing game files that you do not own is illegal and considered piracy.
- Is dolphin emulator safe?
- Dolphin emulator is safe as long as you download it from the official website or other trusted sources. You should avoid downloading it from unknown or suspicious websites, as they might contain viruses or malware that could harm your system.
- What games can I play with dolphin emulator?
- You can play almost any GameCube or Wii game with dolphin emulator, as long as your system meets the requirements and you have the game files. Some of the most popular games that you can play with dolphin emulator are Super Smash Bros. Melee, The Legend of Zelda: Twilight Princess, Mario Kart Wii, Super Mario Galaxy, Metroid Prime, Resident Evil 4, and many more. You can check the compatibility list of the dolphin emulator here: https://wiki.dolphin-emu.org/index.php?title=Category:Games
- How can I update dolphin emulator?
- You can update dolphin emulator by downloading the latest beta version from the official website or by using the built-in updater. To use the updater, you need to go to the Help menu and select Check for Updates. The emulator will check for any available updates and prompt you to download and install them.
- How can I get help or support for dolphin emulator?
- You can get help or support for dolphin emulator by visiting the official website or the forums. The website has a lot of useful information, such as guides, FAQs, wiki, blog, etc. The forums have a large and active community of users and developers who can answer your questions and help you with your issues. You can also join the discord server or the IRC channel of the dolphin emulator and chat with other users and developers.
-
-
\ No newline at end of file
diff --git a/spaces/4Taps/SadTalker/src/facerender/sync_batchnorm/comm.py b/spaces/4Taps/SadTalker/src/facerender/sync_batchnorm/comm.py
deleted file mode 100644
index 922f8c4a3adaa9b32fdcaef09583be03b0d7eb2b..0000000000000000000000000000000000000000
--- a/spaces/4Taps/SadTalker/src/facerender/sync_batchnorm/comm.py
+++ /dev/null
@@ -1,137 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : comm.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-import queue
-import collections
-import threading
-
-__all__ = ['FutureResult', 'SlavePipe', 'SyncMaster']
-
-
-class FutureResult(object):
- """A thread-safe future implementation. Used only as one-to-one pipe."""
-
- def __init__(self):
- self._result = None
- self._lock = threading.Lock()
- self._cond = threading.Condition(self._lock)
-
- def put(self, result):
- with self._lock:
- assert self._result is None, 'Previous result hasn\'t been fetched.'
- self._result = result
- self._cond.notify()
-
- def get(self):
- with self._lock:
- if self._result is None:
- self._cond.wait()
-
- res = self._result
- self._result = None
- return res
-
-
-_MasterRegistry = collections.namedtuple('MasterRegistry', ['result'])
-_SlavePipeBase = collections.namedtuple('_SlavePipeBase', ['identifier', 'queue', 'result'])
-
-
-class SlavePipe(_SlavePipeBase):
- """Pipe for master-slave communication."""
-
- def run_slave(self, msg):
- self.queue.put((self.identifier, msg))
- ret = self.result.get()
- self.queue.put(True)
- return ret
-
-
-class SyncMaster(object):
- """An abstract `SyncMaster` object.
-
- - During the replication, as data parallel will trigger a callback on each module, all slave devices should
- call `register_slave(id)` and obtain a `SlavePipe` to communicate with the master.
- - During the forward pass, the master device invokes `run_master`; all messages from the slave devices will be
- collected and passed to a registered callback.
- - After receiving the messages, the master device should gather the information and determine the message to be
- passed back to each slave device.
- """
-
- def __init__(self, master_callback):
- """
-
- Args:
- master_callback: a callback to be invoked after having collected messages from slave devices.
- """
- self._master_callback = master_callback
- self._queue = queue.Queue()
- self._registry = collections.OrderedDict()
- self._activated = False
-
- def __getstate__(self):
- return {'master_callback': self._master_callback}
-
- def __setstate__(self, state):
- self.__init__(state['master_callback'])
-
- def register_slave(self, identifier):
- """
- Register a slave device.
-
- Args:
- identifier: an identifier, usually is the device id.
-
- Returns: a `SlavePipe` object which can be used to communicate with the master device.
-
- """
- if self._activated:
- assert self._queue.empty(), 'Queue is not clean before next initialization.'
- self._activated = False
- self._registry.clear()
- future = FutureResult()
- self._registry[identifier] = _MasterRegistry(future)
- return SlavePipe(identifier, self._queue, future)
-
- def run_master(self, master_msg):
- """
- Main entry for the master device in each forward pass.
- The messages are first collected from each device (including the master device), and then
- a callback is invoked to compute the message to be sent back to each device
- (including the master device).
-
- Args:
- master_msg: the message that the master want to send to itself. This will be placed as the first
- message when calling `master_callback`. For detailed usage, see `_SynchronizedBatchNorm` for an example.
-
- Returns: the message to be sent back to the master device.
-
- """
- self._activated = True
-
- intermediates = [(0, master_msg)]
- for i in range(self.nr_slaves):
- intermediates.append(self._queue.get())
-
- results = self._master_callback(intermediates)
- assert results[0][0] == 0, 'The first result should belong to the master.'
-
- for i, res in results:
- if i == 0:
- continue
- self._registry[i].result.put(res)
-
- for i in range(self.nr_slaves):
- assert self._queue.get() is True
-
- return results[0][1]
-
- @property
- def nr_slaves(self):
- return len(self._registry)
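-
-
-# Illustrative usage sketch (added for clarity, not part of the original module):
-# one master thread and two slave threads synchronize through a SyncMaster whose
-# callback broadcasts the sum of every device's message back to all devices.
-if __name__ == '__main__':
-    import threading
-
-    def _sum_callback(intermediates):
-        # intermediates is a list of (identifier, message); entry 0 is the master
-        total = sum(msg for _, msg in intermediates)
-        return [(identifier, total) for identifier, _ in intermediates]
-
-    master = SyncMaster(_sum_callback)
-    pipes = [master.register_slave(i) for i in (1, 2)]
-    threads = [
-        threading.Thread(target=lambda p=p, v=v: print('slave received', p.run_slave(v)))
-        for p, v in zip(pipes, (10, 20))
-    ]
-    for t in threads:
-        t.start()
-    print('master received', master.run_master(1))  # 1 + 10 + 20 = 31
-    for t in threads:
-        t.join()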
diff --git a/spaces/7hao/bingo/src/app/page.tsx b/spaces/7hao/bingo/src/app/page.tsx
deleted file mode 100644
index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/src/app/page.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-import dynamic from 'next/dynamic'
-
-const DynamicComponentWithNoSSR = dynamic(
- () => import('../components/chat'),
- { ssr: false }
-)
-
-export default function IndexPage() {
- return (
- <>
- <DynamicComponentWithNoSSR />
- </>
- )
-}
diff --git a/spaces/7hao/bingo/src/components/welcome-screen.tsx b/spaces/7hao/bingo/src/components/welcome-screen.tsx
deleted file mode 100644
index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/src/components/welcome-screen.tsx
+++ /dev/null
@@ -1,34 +0,0 @@
-import { useBing } from '@/lib/hooks/use-bing'
-
-const exampleMessages = [
- {
- heading: '🧐 提出复杂问题',
- message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?`
- },
- {
- heading: '🙌 获取更好的答案',
- message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?'
- },
- {
- heading: '🎨 获得创意灵感',
- message: `以海盗的口吻写一首关于外太空鳄鱼的俳句`
- }
-]
-
-export function WelcomeScreen({ setInput }: Pick<ReturnType<typeof useBing>, 'setInput'>) {
- return (
- <div>
- {exampleMessages.map(example => (
- <div key={example.heading} onClick={() => setInput(example.message)}>
- <div>{example.heading}</div>
- <div>
- “{example.message}”
- </div>
- </div>
- ))}
- </div>
- )
-}
diff --git a/spaces/801artistry/RVC801/diffq/uniform.py b/spaces/801artistry/RVC801/diffq/uniform.py
deleted file mode 100644
index f61e9129c04caaa33c66f726bf2433d51689cfa5..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/diffq/uniform.py
+++ /dev/null
@@ -1,121 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Classic uniform quantization over n bits.
-"""
-from typing import Tuple
-import torch
-
-from .base import BaseQuantizer
-from .utils import simple_repr
-
-
-def uniform_quantize(p: torch.Tensor, bits: torch.Tensor = torch.tensor(8.)):
- """
- Quantize the given weights over `bits` bits.
-
- Returns:
- - quantized levels
- - (min, max) range.
-
- """
- assert (bits >= 1).all() and (bits <= 15).all()
- num_levels = (2 ** bits.float()).long()
- mn = p.min().item()
- mx = p.max().item()
- p = (p - mn) / (mx - mn) # put p in [0, 1]
- unit = 1 / (num_levels - 1) # quantization unit
- levels = (p / unit).round()
- if (bits <= 8).all():
- levels = levels.byte()
- else:
- levels = levels.short()
- return levels, (mn, mx)
-
-
-def uniform_unquantize(levels: torch.Tensor, scales: Tuple[float, float],
- bits: torch.Tensor = torch.tensor(8.)):
- """
- Unquantize the weights from the levels and scale. Return a float32 tensor.
- """
- mn, mx = scales
- num_levels = 2 ** bits.float()
- unit = 1 / (num_levels - 1)
- levels = levels.float()
- p = levels * unit # in [0, 1]
- return p * (mx - mn) + mn
-
-
-class UniformQuantizer(BaseQuantizer):
- def __init__(self, model: torch.nn.Module, bits: float = 8., min_size: float = 0.01,
- float16: bool = False, qat: bool = False, exclude=[], detect_bound=True):
- """
- Args:
- model (torch.nn.Module): model to quantize
- bits (float): number of bits to quantize over.
- min_size (float): minimum size in MB of a parameter to be quantized.
- float16 (bool): if a layer is smaller than min_size, should we still do float16?
- qat (bool): perform quantized aware training.
- exclude (list[str]): list of patterns used to match parameters to exclude.
- For instance `['bias']` to exclude all bias terms.
- detect_bound (bool): if True, will detect bound parameters and reuse
- the same quantized tensor for both.
- """
- self.bits = float(bits)
- self.qat = qat
-
- super().__init__(model, min_size, float16, exclude, detect_bound)
-
- def __repr__(self):
- return simple_repr(self, )
-
- def _pre_forward_train(self):
- if self.qat:
- for qparam in self._qparams:
- if qparam.other is not None:
- new_param = qparam.other.module._parameters[qparam.other.name]
- else:
- quantized = self._quantize_param(qparam)
- qvalue = self._unquantize_param(qparam, quantized)
- new_param = qparam.param + (qvalue - qparam.param).detach()
- qparam.module._parameters[qparam.name] = new_param
- return True
- return False
-
- def _post_forward_train(self):
- if self.qat:
- for qparam in self._qparams:
- qparam.module._parameters[qparam.name] = qparam.param
- return True
- return False
-
- def _quantize_param(self, qparam):
- levels, scales = uniform_quantize(qparam.param.data, torch.tensor(self.bits))
- return (levels, scales)
-
- def _unquantize_param(self, qparam, quantized):
- levels, scales = quantized
- return uniform_unquantize(levels, scales, torch.tensor(self.bits))
-
- def model_size(self):
- """
- Non differentiable model size in MB.
- """
- total = super().model_size()
- subtotal = 0
- for qparam in self._qparams:
- if qparam.other is None: # if parameter is bound, count only one copy.
- subtotal += self.bits * qparam.param.numel() + 64 # 2 floats for the overall scales
- subtotal /= 2**20 * 8 # bits to MegaBytes
- return total + subtotal
-
- def true_model_size(self):
- """
- Return the true quantized model size, in MB, without extra
- compression.
- """
- return self.model_size().item()
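-
-
-# Illustrative round-trip sketch (added for clarity, not part of the original
-# module): quantizing to 8 bits and unquantizing reconstructs the tensor to
-# within one quantization step scaled by its value range (the tight bound is
-# half a step).
-if __name__ == '__main__':
-    p = torch.randn(16, 16)
-    levels, scales = uniform_quantize(p, torch.tensor(8.))
-    p_hat = uniform_unquantize(levels, scales, torch.tensor(8.))
-    max_err = (p - p_hat).abs().max().item()
-    assert max_err <= (p.max() - p.min()).item() / (2 ** 8 - 1)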
diff --git a/spaces/801artistry/RVC801/go-applio-manager-recode.bat b/spaces/801artistry/RVC801/go-applio-manager-recode.bat
deleted file mode 100644
index 91b8acfc0c69a356fd5b1d77650b2cd728b1072b..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/go-applio-manager-recode.bat
+++ /dev/null
@@ -1,322 +0,0 @@
-@echo off
-title Applio Installer
-
-::: _ _ _____ _
-::: /\ | (_) | __ \ | |
-::: / \ _ __ _ __ | |_ ___ | |__) |___ ___ ___ __| | ___
-::: / /\ \ | '_ \| '_ \| | |/ _ \ | _ // _ \/ __/ _ \ / _` |/ _ \
-::: / ____ \| |_) | |_) | | | (_) | | | \ \ __/ (_| (_) | (_| | __/
-::: /_/ \_\ .__/| .__/|_|_|\___/ |_| \_\___|\___\___/ \__,_|\___|
-::: | | | |
-::: |_| |_|
-:::
-:::
-
-setlocal
-set "branch=applio-recode"
-set "runtime=runtime-recode"
-set "repoUrl=https://github.com/IAHispano/Applio-RVC-Fork/archive/refs/heads/%branch%.zip"
-set "fixesFolder=fixes"
-set "localFixesPy=local_fixes.py"
-set "principal=%cd%"
-set "URL_BASE=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main"
-set "URL_EXTRA=https://huggingface.co/IAHispano/applio/resolve/main"
-
-:menu
-for /f "delims=: tokens=*" %%A in ('findstr /b ":::" "%~f0"') do @echo(%%A
-
-echo [1] Reinstall Applio
-echo [2] Update Applio
-echo [3] Update Applio + Runtime
-echo.
-
-set /p choice=Select an option:
-set choice=%choice: =%
-
-if "%choice%"=="1" (
- cls
- echo Starting Applio Reinstaller...
- echo.
- goto reinstaller
- pause
- cls
- goto menu
-
-)
-
-if "%choice%"=="2" (
- cls
- echo Starting Applio Updater...
- echo.
- goto updater
- pause
- cls
- goto menu
-)
-
-if "%choice%"=="3" (
- cls
- echo Updating Applio + Runtime...
- echo.
- goto updaterRuntime
- pause
- cls
- goto menu
-
-)
-
-cls
-echo Invalid option. Please enter a number from 1 to 3.
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
-
-:reinstaller
-
-echo WARNING: Remember to install Microsoft C++ Build Tools, Redistributable, Python, and Git before continuing.
-echo.
-echo Step-by-step guide: https://rentry.org/appliolocal
-echo Build Tools: https://aka.ms/vs/17/release/vs_BuildTools.exe
-echo Redistributable: https://aka.ms/vs/17/release/vc_redist.x64.exe
-echo Git: https://github.com/git-for-windows/git/releases/download/v2.42.0.windows.2/Git-2.42.0.2-64-bit.exe
-echo Python: Add this path to the user Path variable in the Windows environment variables: %principal%\runtime\Scripts
-echo.
-pause
-cls
-
-echo Downloading ZIP file...
-powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
-echo.
-
-echo Extracting ZIP file...
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
-echo.
-
-echo Copying folder and file structure from subdirectory to main directory...
-robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
-echo.
-
-echo Deleting contents of subdirectory (files and folders)...
-rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
-echo.
-
-echo Cleaning up...
-del "%principal%\repo.zip"
-echo.
-cls
-
-echo Proceeding to download the models...
-echo.
-
-echo WARNING: At this point, it's recommended to disable antivirus or firewall, as errors might occur when downloading pretrained models.
-pause
-cls
-
-echo Downloading models in the assets folder...
-cd "assets"
-echo.
-echo Downloading the "pretrained" folder...
-cd "pretrained"
-curl -LJO "%URL_BASE%/pretrained/D32k.pth"
-curl -LJO "%URL_BASE%/pretrained/D40k.pth"
-curl -LJO "%URL_BASE%/pretrained/D48k.pth"
-curl -LJO "%URL_BASE%/pretrained/G32k.pth"
-curl -LJO "%URL_BASE%/pretrained/G40k.pth"
-curl -LJO "%URL_BASE%/pretrained/G48k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0D32k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0D40k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0D48k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0G32k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0G40k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0G48k.pth"
-cd ".."
-echo.
-cls
-
-echo Downloading the "pretrained_v2" folder...
-cd "pretrained_v2"
-curl -LJO "%URL_BASE%/pretrained_v2/D32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/D40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/D48k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/G32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/G40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/G48k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0D32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0D40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0D48k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0G32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0G40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0G48k.pth"
-cd ".."
-echo.
-cls
-
-echo Downloading the hubert_base.pt file...
-cd "hubert"
-curl -LJO "%URL_BASE%/hubert_base.pt"
-cd ".."
-echo.
-cls
-
-
-echo Downloading the rmvpe.pt file...
-cd "rmvpe"
-curl -LJO "%URL_BASE%/rmvpe.pt"
-echo.
-cls
-
-echo Downloading the rmvpe.onnx file...
-curl -LJO "%URL_BASE%/rmvpe.onnx"
-cd ".."
-cd ".."
-echo.
-cls
-
-echo Downloading the rest of the large files
-
-echo Downloading the "uvr5_weights" folder...
-cd "uvr5_weights"
-curl -LJO "%URL_BASE%/uvr5_weights/HP2_all_vocals.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/HP3_all_vocals.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/HP5_only_main_vocal.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoAggressive.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoDeReverb.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoNormal.pth"
-cd ".."
-echo.
-cls
-
-echo Downloading the ffmpeg.exe file...
-curl -LJO "%URL_BASE%/ffmpeg.exe"
-echo.
-cls
-
-echo Downloading the ffprobe.exe file...
-curl -LJO "%URL_BASE%/ffprobe.exe"
-echo.
-cls
-
-echo Downloading the runtime.zip file...
-curl -LJO "%URL_EXTRA%/%runtime%.zip"
-echo.
-cls
-
-echo Extracting the runtime.zip file, this might take a while...
-powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'"
-del %runtime%.zip
-echo.
-cls
-
-echo Downloads completed!
-echo.
-
-echo Checking if the local_fixes.py file exists in the Fixes folder...
-if exist "%fixesFolder%\%localFixesPy%" (
- echo Running the file...
- runtime\python.exe "%fixesFolder%\%localFixesPy%"
-) else (
- echo The "%localFixesPy%" file was not found in the "Fixes" folder.
-)
-echo.
-
-echo Fixes Applied!
-echo.
-
-echo Applio has been reinstalled!
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
-
-
-:updater
-
-echo Downloading the ZIP file...
-powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
-echo.
-
-echo Extracting ZIP file...
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
-echo.
-
-echo Copying folder and file structure from subdirectory to main directory...
-robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
-echo.
-
-echo Deleting contents of the subdirectory (files and folders)...
-rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
-echo.
-
-echo Cleaning up...
-del "%principal%\repo.zip"
-echo.
-cls
-
-echo Verifying if the local_fixes.py file exists in the Fixes folder...
-if exist "%fixesFolder%\%localFixesPy%" (
- echo Running the file...
- runtime\python.exe "%fixesFolder%\%localFixesPy%"
-) else (
- echo The file "%localFixesPy%" was not found in the "Fixes" folder.
-)
-echo.
-
-echo Applio has been updated!
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
-
-
-:updaterRuntime
-
-echo Downloading the ZIP file...
-powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
-echo.
-
-echo Extracting ZIP file...
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
-echo.
-
-echo Copying folder and file structure from subdirectory to main directory...
-robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
-echo.
-
-echo Deleting contents of the subdirectory (files and folders)...
-rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
-echo.
-
-echo Cleaning up...
-del "%principal%\repo.zip"
-echo.
-cls
-
-echo Downloading the runtime.zip file...
-curl -LJO "%URL_EXTRA%/%runtime%.zip"
-echo.
-cls
-echo Extracting the runtime.zip file, this might take a while...
-powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'"
-del %runtime%.zip
-echo.
-cls
-
-echo Verifying if the local_fixes.py file exists in the Fixes folder...
-if exist "%fixesFolder%\%localFixesPy%" (
- echo Running the file...
- runtime\python.exe "%fixesFolder%\%localFixesPy%"
-) else (
- echo The file "%localFixesPy%" was not found in the "Fixes" folder.
-)
-echo.
-
-echo Applio has been updated!
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
diff --git a/spaces/AI-Zero-to-Hero/01-H5-Play-Canvas-Sim-Physics/README.md b/spaces/AI-Zero-to-Hero/01-H5-Play-Canvas-Sim-Physics/README.md
deleted file mode 100644
index a770df06bcabc2f5956567cc316d0c8f9c4ddea5..0000000000000000000000000000000000000000
--- a/spaces/AI-Zero-to-Hero/01-H5-Play-Canvas-Sim-Physics/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: 01-H5-Play-Canvas-Sim-Physics
-emoji: 🤖🏎️
-colorFrom: purple
-colorTo: indigo
-sdk: static
-pinned: false
-license: apache-2.0
----
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/fs2.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/fs2.py
deleted file mode 100644
index 620d7f3faa53a5326ef97707b9de53506ab059bb..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/fs2.py
+++ /dev/null
@@ -1,509 +0,0 @@
-import matplotlib
-matplotlib.use('Agg')
-from utils import audio
-import matplotlib.pyplot as plt
-from data_gen.tts.data_gen_utils import get_pitch
-from tasks.tts.fs2_utils import FastSpeechDataset
-from utils.cwt import cwt2f0
-from utils.pl_utils import data_loader
-import os
-from multiprocessing.pool import Pool
-from tqdm import tqdm
-from modules.fastspeech.tts_modules import mel2ph_to_dur
-from utils.hparams import hparams
-from utils.plot import spec_to_figure, dur_to_figure, f0_to_figure
-from utils.pitch_utils import denorm_f0
-from modules.fastspeech.fs2 import FastSpeech2
-from tasks.tts.tts import TtsTask
-import torch
-import torch.optim
-import torch.utils.data
-import torch.nn.functional as F
-import utils
-import torch.distributions
-import numpy as np
-from modules.commons.ssim import ssim
-
-class FastSpeech2Task(TtsTask):
- def __init__(self):
- super(FastSpeech2Task, self).__init__()
- self.dataset_cls = FastSpeechDataset
- self.mse_loss_fn = torch.nn.MSELoss()
- mel_losses = hparams['mel_loss'].split("|")
- self.loss_and_lambda = {}
- for i, l in enumerate(mel_losses):
- if l == '':
- continue
- if ':' in l:
- l, lbd = l.split(":")
- lbd = float(lbd)
- else:
- lbd = 1.0
- self.loss_and_lambda[l] = lbd
- print("| Mel losses:", self.loss_and_lambda)
- self.sil_ph = self.phone_encoder.sil_phonemes()
-
- @data_loader
- def train_dataloader(self):
- train_dataset = self.dataset_cls(hparams['train_set_name'], shuffle=True)
- return self.build_dataloader(train_dataset, True, self.max_tokens, self.max_sentences,
- endless=hparams['endless_ds'])
-
- @data_loader
- def val_dataloader(self):
- valid_dataset = self.dataset_cls(hparams['valid_set_name'], shuffle=False)
- return self.build_dataloader(valid_dataset, False, self.max_eval_tokens, self.max_eval_sentences)
-
- @data_loader
- def test_dataloader(self):
- test_dataset = self.dataset_cls(hparams['test_set_name'], shuffle=False)
- return self.build_dataloader(test_dataset, False, self.max_eval_tokens,
- self.max_eval_sentences, batch_by_size=False)
-
- def build_tts_model(self):
- self.model = FastSpeech2(self.phone_encoder)
-
- def build_model(self):
- self.build_tts_model()
- if hparams['load_ckpt'] != '':
- self.load_ckpt(hparams['load_ckpt'], strict=True)
- utils.print_arch(self.model)
- return self.model
-
- def _training_step(self, sample, batch_idx, _):
- loss_output = self.run_model(self.model, sample)
- total_loss = sum([v for v in loss_output.values() if isinstance(v, torch.Tensor) and v.requires_grad])
- loss_output['batch_size'] = sample['txt_tokens'].size()[0]
- return total_loss, loss_output
-
- def validation_step(self, sample, batch_idx):
- outputs = {}
- outputs['losses'] = {}
- outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True)
- outputs['total_loss'] = sum(outputs['losses'].values())
- outputs['nsamples'] = sample['nsamples']
- mel_out = self.model.out2mel(model_out['mel_out'])
- outputs = utils.tensors_to_scalars(outputs)
- # if sample['mels'].shape[0] == 1:
- # self.add_laplace_var(mel_out, sample['mels'], outputs)
- if batch_idx < hparams['num_valid_plots']:
- self.plot_mel(batch_idx, sample['mels'], mel_out)
- self.plot_dur(batch_idx, sample, model_out)
- if hparams['use_pitch_embed']:
- self.plot_pitch(batch_idx, sample, model_out)
- return outputs
-
- def _validation_end(self, outputs):
- all_losses_meter = {
- 'total_loss': utils.AvgrageMeter(),
- }
- for output in outputs:
- n = output['nsamples']
- for k, v in output['losses'].items():
- if k not in all_losses_meter:
- all_losses_meter[k] = utils.AvgrageMeter()
- all_losses_meter[k].update(v, n)
- all_losses_meter['total_loss'].update(output['total_loss'], n)
- return {k: round(v.avg, 4) for k, v in all_losses_meter.items()}
-
- def run_model(self, model, sample, return_output=False):
- txt_tokens = sample['txt_tokens'] # [B, T_t]
- target = sample['mels'] # [B, T_s, 80]
- mel2ph = sample['mel2ph'] # [B, T_s]
- f0 = sample['f0']
- uv = sample['uv']
- energy = sample['energy']
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- if hparams['pitch_type'] == 'cwt':
- cwt_spec = sample[f'cwt_spec']
- f0_mean = sample['f0_mean']
- f0_std = sample['f0_std']
- sample['f0_cwt'] = f0 = model.cwt2f0_norm(cwt_spec, f0_mean, f0_std, mel2ph)
-
- output = model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed,
- ref_mels=target, f0=f0, uv=uv, energy=energy, infer=False)
-
- losses = {}
- self.add_mel_loss(output['mel_out'], target, losses)
- self.add_dur_loss(output['dur'], mel2ph, txt_tokens, losses=losses)
- if hparams['use_pitch_embed']:
- self.add_pitch_loss(output, sample, losses)
- if hparams['use_energy_embed']:
- self.add_energy_loss(output['energy_pred'], energy, losses)
- if not return_output:
- return losses
- else:
- return losses, output
-
- ############
- # losses
- ############
- def add_mel_loss(self, mel_out, target, losses, postfix='', mel_mix_loss=None):
- if mel_mix_loss is None:
- for loss_name, lbd in self.loss_and_lambda.items():
- if 'l1' == loss_name:
- l = self.l1_loss(mel_out, target)
- elif 'mse' == loss_name:
- raise NotImplementedError
- elif 'ssim' == loss_name:
- l = self.ssim_loss(mel_out, target)
- elif 'gdl' == loss_name:
- raise NotImplementedError
- losses[f'{loss_name}{postfix}'] = l * lbd
- else:
- raise NotImplementedError
-
- def l1_loss(self, decoder_output, target):
- # decoder_output : B x T x n_mel
- # target : B x T x n_mel
- l1_loss = F.l1_loss(decoder_output, target, reduction='none')
- weights = self.weights_nonzero_speech(target)
- l1_loss = (l1_loss * weights).sum() / weights.sum()
- return l1_loss
-
- def ssim_loss(self, decoder_output, target, bias=6.0):
- # decoder_output : B x T x n_mel
- # target : B x T x n_mel
- assert decoder_output.shape == target.shape
- weights = self.weights_nonzero_speech(target)
- decoder_output = decoder_output[:, None] + bias
- target = target[:, None] + bias
- ssim_loss = 1 - ssim(decoder_output, target, size_average=False)
- ssim_loss = (ssim_loss * weights).sum() / weights.sum()
- return ssim_loss
-
- def add_dur_loss(self, dur_pred, mel2ph, txt_tokens, losses=None):
- """
-
- :param dur_pred: [B, T], float, log scale
- :param mel2ph: [B, T]
- :param txt_tokens: [B, T]
- :param losses:
- :return:
- """
- B, T = txt_tokens.shape
- nonpadding = (txt_tokens != 0).float()
- dur_gt = mel2ph_to_dur(mel2ph, T).float() * nonpadding
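- # e.g. mel2ph = [[1, 1, 2, 2, 2, 3]] with T = 3 gives dur_gt = [[2., 3., 1.]]:
- # each entry counts how many mel frames are aligned to that phone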
- is_sil = torch.zeros_like(txt_tokens).bool()
- for p in self.sil_ph:
- is_sil = is_sil | (txt_tokens == self.phone_encoder.encode(p)[0])
- is_sil = is_sil.float() # [B, T_txt]
-
- # phone duration loss
- if hparams['dur_loss'] == 'mse':
- losses['pdur'] = F.mse_loss(dur_pred, (dur_gt + 1).log(), reduction='none')
- losses['pdur'] = (losses['pdur'] * nonpadding).sum() / nonpadding.sum()
- dur_pred = (dur_pred.exp() - 1).clamp(min=0)
- elif hparams['dur_loss'] == 'mog':
- return NotImplementedError
- elif hparams['dur_loss'] == 'crf':
- losses['pdur'] = -self.model.dur_predictor.crf(
- dur_pred, dur_gt.long().clamp(min=0, max=31), mask=nonpadding > 0, reduction='mean')
- losses['pdur'] = losses['pdur'] * hparams['lambda_ph_dur']
-
- # use linear scale for sent and word duration
- if hparams['lambda_word_dur'] > 0:
- word_id = (is_sil.cumsum(-1) * (1 - is_sil)).long()
- word_dur_p = dur_pred.new_zeros([B, word_id.max() + 1]).scatter_add(1, word_id, dur_pred)[:, 1:]
- word_dur_g = dur_gt.new_zeros([B, word_id.max() + 1]).scatter_add(1, word_id, dur_gt)[:, 1:]
- wdur_loss = F.mse_loss((word_dur_p + 1).log(), (word_dur_g + 1).log(), reduction='none')
- word_nonpadding = (word_dur_g > 0).float()
- wdur_loss = (wdur_loss * word_nonpadding).sum() / word_nonpadding.sum()
- losses['wdur'] = wdur_loss * hparams['lambda_word_dur']
- if hparams['lambda_sent_dur'] > 0:
- sent_dur_p = dur_pred.sum(-1)
- sent_dur_g = dur_gt.sum(-1)
- sdur_loss = F.mse_loss((sent_dur_p + 1).log(), (sent_dur_g + 1).log(), reduction='mean')
- losses['sdur'] = sdur_loss.mean() * hparams['lambda_sent_dur']
-
- def add_pitch_loss(self, output, sample, losses):
- if hparams['pitch_type'] == 'ph':
- nonpadding = (sample['txt_tokens'] != 0).float()
- pitch_loss_fn = F.l1_loss if hparams['pitch_loss'] == 'l1' else F.mse_loss
- losses['f0'] = (pitch_loss_fn(output['pitch_pred'][:, :, 0], sample['f0'],
- reduction='none') * nonpadding).sum() \
- / nonpadding.sum() * hparams['lambda_f0']
- return
- mel2ph = sample['mel2ph'] # [B, T_s]
- f0 = sample['f0']
- uv = sample['uv']
- nonpadding = (mel2ph != 0).float()
- if hparams['pitch_type'] == 'cwt':
- cwt_spec = sample[f'cwt_spec']
- f0_mean = sample['f0_mean']
- f0_std = sample['f0_std']
- cwt_pred = output['cwt'][:, :, :10]
- f0_mean_pred = output['f0_mean']
- f0_std_pred = output['f0_std']
- losses['C'] = self.cwt_loss(cwt_pred, cwt_spec) * hparams['lambda_f0']
- if hparams['use_uv']:
- assert output['cwt'].shape[-1] == 11
- uv_pred = output['cwt'][:, :, -1]
- losses['uv'] = (F.binary_cross_entropy_with_logits(uv_pred, uv, reduction='none') * nonpadding) \
- .sum() / nonpadding.sum() * hparams['lambda_uv']
- losses['f0_mean'] = F.l1_loss(f0_mean_pred, f0_mean) * hparams['lambda_f0']
- losses['f0_std'] = F.l1_loss(f0_std_pred, f0_std) * hparams['lambda_f0']
- if hparams['cwt_add_f0_loss']:
- f0_cwt_ = self.model.cwt2f0_norm(cwt_pred, f0_mean_pred, f0_std_pred, mel2ph)
- self.add_f0_loss(f0_cwt_[:, :, None], f0, uv, losses, nonpadding=nonpadding)
- elif hparams['pitch_type'] == 'frame':
- self.add_f0_loss(output['pitch_pred'], f0, uv, losses, nonpadding=nonpadding)
-
- def add_f0_loss(self, p_pred, f0, uv, losses, nonpadding):
- assert p_pred[..., 0].shape == f0.shape
- if hparams['use_uv']:
- assert p_pred[..., 1].shape == uv.shape
- losses['uv'] = (F.binary_cross_entropy_with_logits(
- p_pred[:, :, 1], uv, reduction='none') * nonpadding).sum() \
- / nonpadding.sum() * hparams['lambda_uv']
- nonpadding = nonpadding * (uv == 0).float()
-
- f0_pred = p_pred[:, :, 0]
- if hparams['pitch_loss'] in ['l1', 'l2']:
- pitch_loss_fn = F.l1_loss if hparams['pitch_loss'] == 'l1' else F.mse_loss
- losses['f0'] = (pitch_loss_fn(f0_pred, f0, reduction='none') * nonpadding).sum() \
- / nonpadding.sum() * hparams['lambda_f0']
- elif hparams['pitch_loss'] == 'ssim':
- return NotImplementedError
-
- def cwt_loss(self, cwt_p, cwt_g):
- if hparams['cwt_loss'] == 'l1':
- return F.l1_loss(cwt_p, cwt_g)
- if hparams['cwt_loss'] == 'l2':
- return F.mse_loss(cwt_p, cwt_g)
- if hparams['cwt_loss'] == 'ssim':
- return self.ssim_loss(cwt_p, cwt_g, 20)
-
- def add_energy_loss(self, energy_pred, energy, losses):
- nonpadding = (energy != 0).float()
- loss = (F.mse_loss(energy_pred, energy, reduction='none') * nonpadding).sum() / nonpadding.sum()
- loss = loss * hparams['lambda_energy']
- losses['e'] = loss
-
-
- ############
- # validation plots
- ############
- def plot_mel(self, batch_idx, spec, spec_out, name=None):
- spec_cat = torch.cat([spec, spec_out], -1)
- name = f'mel_{batch_idx}' if name is None else name
- vmin = hparams['mel_vmin']
- vmax = hparams['mel_vmax']
- self.logger.experiment.add_figure(name, spec_to_figure(spec_cat[0], vmin, vmax), self.global_step)
-
- def plot_dur(self, batch_idx, sample, model_out):
- T_txt = sample['txt_tokens'].shape[1]
- dur_gt = mel2ph_to_dur(sample['mel2ph'], T_txt)[0]
- dur_pred = self.model.dur_predictor.out2dur(model_out['dur']).float()
- txt = self.phone_encoder.decode(sample['txt_tokens'][0].cpu().numpy())
- txt = txt.split(" ")
- self.logger.experiment.add_figure(
- f'dur_{batch_idx}', dur_to_figure(dur_gt, dur_pred, txt), self.global_step)
-
- def plot_pitch(self, batch_idx, sample, model_out):
- f0 = sample['f0']
- if hparams['pitch_type'] == 'ph':
- mel2ph = sample['mel2ph']
- f0 = self.expand_f0_ph(f0, mel2ph)
- f0_pred = self.expand_f0_ph(model_out['pitch_pred'][:, :, 0], mel2ph)
- self.logger.experiment.add_figure(
- f'f0_{batch_idx}', f0_to_figure(f0[0], None, f0_pred[0]), self.global_step)
- return
- f0 = denorm_f0(f0, sample['uv'], hparams)
- if hparams['pitch_type'] == 'cwt':
- # cwt
- cwt_out = model_out['cwt']
- cwt_spec = cwt_out[:, :, :10]
- cwt = torch.cat([cwt_spec, sample['cwt_spec']], -1)
- self.logger.experiment.add_figure(f'cwt_{batch_idx}', spec_to_figure(cwt[0]), self.global_step)
- # f0
- f0_pred = cwt2f0(cwt_spec, model_out['f0_mean'], model_out['f0_std'], hparams['cwt_scales'])
- if hparams['use_uv']:
- assert cwt_out.shape[-1] == 11
- uv_pred = cwt_out[:, :, -1] > 0
- f0_pred[uv_pred > 0] = 0
- f0_cwt = denorm_f0(sample['f0_cwt'], sample['uv'], hparams)
- self.logger.experiment.add_figure(
- f'f0_{batch_idx}', f0_to_figure(f0[0], f0_cwt[0], f0_pred[0]), self.global_step)
- elif hparams['pitch_type'] == 'frame':
- # f0
- uv_pred = model_out['pitch_pred'][:, :, 1] > 0
- pitch_pred = denorm_f0(model_out['pitch_pred'][:, :, 0], uv_pred, hparams)
- self.logger.experiment.add_figure(
- f'f0_{batch_idx}', f0_to_figure(f0[0], None, pitch_pred[0]), self.global_step)
-
- ############
- # infer
- ############
- def test_step(self, sample, batch_idx):
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- txt_tokens = sample['txt_tokens']
- mel2ph, uv, f0 = None, None, None
- ref_mels = None
- if hparams['profile_infer']:
- pass
- else:
- if hparams['use_gt_dur']:
- mel2ph = sample['mel2ph']
- if hparams['use_gt_f0']:
- f0 = sample['f0']
- uv = sample['uv']
- print('Here using gt f0!!')
- if hparams.get('use_midi') is not None and hparams['use_midi']:
- outputs = self.model(
- txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=f0, uv=uv, ref_mels=ref_mels, infer=True,
- pitch_midi=sample['pitch_midi'], midi_dur=sample.get('midi_dur'), is_slur=sample.get('is_slur'))
- else:
- outputs = self.model(
- txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=f0, uv=uv, ref_mels=ref_mels, infer=True)
- sample['outputs'] = self.model.out2mel(outputs['mel_out'])
- sample['mel2ph_pred'] = outputs['mel2ph']
- if hparams.get('pe_enable') is not None and hparams['pe_enable']:
- sample['f0'] = self.pe(sample['mels'])['f0_denorm_pred'] # pe predict from GT mel
- sample['f0_pred'] = self.pe(sample['outputs'])['f0_denorm_pred'] # pe predict from Pred mel
- else:
- sample['f0'] = denorm_f0(sample['f0'], sample['uv'], hparams)
- sample['f0_pred'] = outputs.get('f0_denorm')
- return self.after_infer(sample)
-
- def after_infer(self, predictions):
- if self.saving_result_pool is None and not hparams['profile_infer']:
- self.saving_result_pool = Pool(min(int(os.getenv('N_PROC', os.cpu_count())), 16))
- self.saving_results_futures = []
- predictions = utils.unpack_dict_to_list(predictions)
- t = tqdm(predictions)
- for num_predictions, prediction in enumerate(t):
- for k, v in prediction.items():
- if type(v) is torch.Tensor:
- prediction[k] = v.cpu().numpy()
-
- item_name = prediction.get('item_name')
- text = prediction.get('text').replace(":", "%3A")[:80]
-
- # remove paddings
- mel_gt = prediction["mels"]
- mel_gt_mask = np.abs(mel_gt).sum(-1) > 0
- mel_gt = mel_gt[mel_gt_mask]
- mel2ph_gt = prediction.get("mel2ph")
- mel2ph_gt = mel2ph_gt[mel_gt_mask] if mel2ph_gt is not None else None
- mel_pred = prediction["outputs"]
- mel_pred_mask = np.abs(mel_pred).sum(-1) > 0
- mel_pred = mel_pred[mel_pred_mask]
- mel_gt = np.clip(mel_gt, hparams['mel_vmin'], hparams['mel_vmax'])
- mel_pred = np.clip(mel_pred, hparams['mel_vmin'], hparams['mel_vmax'])
-
- mel2ph_pred = prediction.get("mel2ph_pred")
- if mel2ph_pred is not None:
- if len(mel2ph_pred) > len(mel_pred_mask):
- mel2ph_pred = mel2ph_pred[:len(mel_pred_mask)]
- mel2ph_pred = mel2ph_pred[mel_pred_mask]
-
- f0_gt = prediction.get("f0")
- f0_pred = prediction.get("f0_pred")
- if f0_pred is not None:
- f0_gt = f0_gt[mel_gt_mask]
- if len(f0_pred) > len(mel_pred_mask):
- f0_pred = f0_pred[:len(mel_pred_mask)]
- f0_pred = f0_pred[mel_pred_mask]
-
- str_phs = None
- if self.phone_encoder is not None and 'txt_tokens' in prediction:
- str_phs = self.phone_encoder.decode(prediction['txt_tokens'], strip_padding=True)
- gen_dir = os.path.join(hparams['work_dir'],
- f'generated_{self.trainer.global_step}_{hparams["gen_dir_name"]}')
- wav_pred = self.vocoder.spec2wav(mel_pred, f0=f0_pred)
- if not hparams['profile_infer']:
- os.makedirs(gen_dir, exist_ok=True)
- os.makedirs(f'{gen_dir}/wavs', exist_ok=True)
- os.makedirs(f'{gen_dir}/plot', exist_ok=True)
- os.makedirs(os.path.join(hparams['work_dir'], 'P_mels_npy'), exist_ok=True)
- os.makedirs(os.path.join(hparams['work_dir'], 'G_mels_npy'), exist_ok=True)
- self.saving_results_futures.append(
- self.saving_result_pool.apply_async(self.save_result, args=[
- wav_pred, mel_pred, 'P', item_name, text, gen_dir, str_phs, mel2ph_pred, f0_gt, f0_pred]))
-
- if mel_gt is not None and hparams['save_gt']:
- wav_gt = self.vocoder.spec2wav(mel_gt, f0=f0_gt)
- self.saving_results_futures.append(
- self.saving_result_pool.apply_async(self.save_result, args=[
- wav_gt, mel_gt, 'G', item_name, text, gen_dir, str_phs, mel2ph_gt, f0_gt, f0_pred]))
- if hparams['save_f0']:
- import matplotlib.pyplot as plt
- # f0_pred_, _ = get_pitch(wav_pred, mel_pred, hparams)
- f0_pred_ = f0_pred
- f0_gt_, _ = get_pitch(wav_gt, mel_gt, hparams)
- fig = plt.figure()
- plt.plot(f0_pred_, label=r'$f0_P$')
- plt.plot(f0_gt_, label=r'$f0_G$')
- if hparams.get('pe_enable') is not None and hparams['pe_enable']:
- # f0_midi = prediction.get("f0_midi")
- # f0_midi = f0_midi[mel_gt_mask]
- # plt.plot(f0_midi, label=r'$f0_M$')
- pass
- plt.legend()
- plt.tight_layout()
- plt.savefig(f'{gen_dir}/plot/[F0][{item_name}]{text}.png', format='png')
- plt.close(fig)
-
- t.set_description(
- f"Pred_shape: {mel_pred.shape}, gt_shape: {mel_gt.shape}")
- else:
- if 'gen_wav_time' not in self.stats:
- self.stats['gen_wav_time'] = 0
- self.stats['gen_wav_time'] += len(wav_pred) / hparams['audio_sample_rate']
- print('gen_wav_time: ', self.stats['gen_wav_time'])
-
- return {}
-
- @staticmethod
- def save_result(wav_out, mel, prefix, item_name, text, gen_dir, str_phs=None, mel2ph=None, gt_f0=None, pred_f0=None):
- item_name = item_name.replace('/', '-')
- base_fn = f'[{item_name}][{prefix}]'
-
- if text is not None:
- base_fn += text
- base_fn += ('-' + hparams['exp_name'])
- np.save(os.path.join(hparams['work_dir'], f'{prefix}_mels_npy', item_name), mel)
- audio.save_wav(wav_out, f'{gen_dir}/wavs/{base_fn}.wav', hparams['audio_sample_rate'],
- norm=hparams['out_wav_norm'])
- fig = plt.figure(figsize=(14, 10))
- spec_vmin = hparams['mel_vmin']
- spec_vmax = hparams['mel_vmax']
- heatmap = plt.pcolor(mel.T, vmin=spec_vmin, vmax=spec_vmax)
- fig.colorbar(heatmap)
- if hparams.get('pe_enable') is not None and hparams['pe_enable']:
- gt_f0 = (gt_f0 - 100) / (800 - 100) * 80 * (gt_f0 > 0)
- pred_f0 = (pred_f0 - 100) / (800 - 100) * 80 * (pred_f0 > 0)
- plt.plot(pred_f0, c='white', linewidth=1, alpha=0.6)
- plt.plot(gt_f0, c='red', linewidth=1, alpha=0.6)
- else:
- f0, _ = get_pitch(wav_out, mel, hparams)
- f0 = (f0 - 100) / (800 - 100) * 80 * (f0 > 0)
- plt.plot(f0, c='white', linewidth=1, alpha=0.6)
- if mel2ph is not None and str_phs is not None:
- decoded_txt = str_phs.split(" ")
- dur = mel2ph_to_dur(torch.LongTensor(mel2ph)[None, :], len(decoded_txt))[0].numpy()
- dur = [0] + list(np.cumsum(dur))
- for i in range(len(dur) - 1):
- shift = (i % 20) + 1
- plt.text(dur[i], shift, decoded_txt[i])
- plt.hlines(shift, dur[i], dur[i + 1], colors='b' if decoded_txt[i] != '|' else 'black')
- plt.vlines(dur[i], 0, 5, colors='b' if decoded_txt[i] != '|' else 'black',
- alpha=1, linewidth=1)
- plt.tight_layout()
- plt.savefig(f'{gen_dir}/plot/{base_fn}.png', format='png', dpi=1000)
- plt.close(fig)
-
- ##############
- # utils
- ##############
- @staticmethod
- def expand_f0_ph(f0, mel2ph):
- f0 = denorm_f0(f0, None, hparams)
- f0 = F.pad(f0, [1, 0])
- f0 = torch.gather(f0, 1, mel2ph) # [B, T_mel]
- return f0
-
-
-if __name__ == '__main__':
- FastSpeech2Task.start()
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/Opchatgpts.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/Opchatgpts.py
deleted file mode 100644
index ab0d68c903dbe4133d103c5e49cb6b3cd0852a7e..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/Opchatgpts.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from __future__ import annotations
-
-from .ChatgptLogin import ChatgptLogin
-
-
-class Opchatgpts(ChatgptLogin):
- url = "https://opchatgpts.net"
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/role_assigner/base.py b/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/role_assigner/base.py
deleted file mode 100644
index 726abf52a6b6cf86a3eeb4b561fab9863ee006bc..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/role_assigner/base.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from __future__ import annotations
-
-from typing import TYPE_CHECKING, List, Tuple
-
-from agentverse.agents import BaseAgent
-
-from pydantic import BaseModel
-
-from abc import abstractmethod
-from . import role_assigner_registry
-
-if TYPE_CHECKING:
- from agentverse.agents import RoleAssignerAgent, CriticAgent
-
-
-class BaseRoleAssigner(BaseModel):
- """
- The base class for role assigners.
- """
-
- @abstractmethod
- def step(
- self,
- role_assigner: RoleAssignerAgent,
- group_members: List[CriticAgent],
- advice: str = "No advice yet.",
- task_description: str = "",
- *args,
- **kwargs,
- ) -> List[CriticAgent]:
- pass
-
- def reset(self):
- pass
-
-
-@role_assigner_registry.register("dummy")
-class DummyRoleAssigner(BaseRoleAssigner):
- """
- A dummy role assigner that returns the group members unchanged.
- """
-
- def step(
- self,
- role_assigner: RoleAssignerAgent,
- group_members: List[CriticAgent],
- advice: str = "No advice yet.",
- task_description: str = "",
- *args,
- **kwargs,
- ) -> List[CriticAgent]:
- return group_members
-
- def reset(self):
- pass
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/pie/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/pie/Factory.js
deleted file mode 100644
index a5861a1d60a723f7ab999022b3216880972d892c..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/pie/Factory.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import Pie from './Pie.js';
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('pie', function (config) {
- var gameObject = new Pie(this.scene, config);
- this.scene.add.existing(gameObject);
- return gameObject;
-});
-
-SetValue(window, 'RexPlugins.Spinner.Pie', Pie);
-
-export default Pie;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogresscanvas/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogresscanvas/Factory.d.ts
deleted file mode 100644
index aaf8ac9b9887b8877190b68bfeada277b7ace5e3..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogresscanvas/Factory.d.ts
+++ /dev/null
@@ -1,19 +0,0 @@
-import LineProgressCanvas from './LineProgressCanvas';
-
-export default function (
- config?: LineProgressCanvas.IConfig
-): LineProgressCanvas;
-
-export default function (
- x?: number, y?: number,
- width?: number, height?: number,
- config?: LineProgressCanvas.IConfig
-): LineProgressCanvas;
-
-export default function (
- x?: number, y?: number,
- width?: number, height?: number,
- barColor?: string | number,
- value?: number,
- config?: LineProgressCanvas.IConfig
-): LineProgressCanvas;
\ No newline at end of file
diff --git a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/JSON/backportPP/Compat5005.pm b/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/JSON/backportPP/Compat5005.pm
deleted file mode 100644
index 139990edff0a28474e53f882d4c4efeb2ad7d701..0000000000000000000000000000000000000000
--- a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/JSON/backportPP/Compat5005.pm
+++ /dev/null
@@ -1,131 +0,0 @@
-package # This is JSON::backportPP
- JSON::backportPP5005;
-
-use 5.005;
-use strict;
-
-my @properties;
-
-$JSON::PP5005::VERSION = '1.10';
-
-BEGIN {
-
- sub utf8::is_utf8 {
- 0; # It is considered that UTF8 flag off for Perl 5.005.
- }
-
- sub utf8::upgrade {
- }
-
- sub utf8::downgrade {
- 1; # must always return true.
- }
-
- sub utf8::encode {
- }
-
- sub utf8::decode {
- }
-
- *JSON::PP::JSON_PP_encode_ascii = \&_encode_ascii;
- *JSON::PP::JSON_PP_encode_latin1 = \&_encode_latin1;
- *JSON::PP::JSON_PP_decode_surrogates = \&_decode_surrogates;
- *JSON::PP::JSON_PP_decode_unicode = \&_decode_unicode;
-
- # missing in B module.
- sub B::SVp_IOK () { 0x01000000; }
- sub B::SVp_NOK () { 0x02000000; }
- sub B::SVp_POK () { 0x04000000; }
-
- $INC{'bytes.pm'} = 1; # dummy
-}
-
-
-
-sub _encode_ascii {
- join('', map { $_ <= 127 ? chr($_) : sprintf('\u%04x', $_) } unpack('C*', $_[0]) );
-}
-
-
-sub _encode_latin1 {
- join('', map { chr($_) } unpack('C*', $_[0]) );
-}
-
-
-sub _decode_surrogates { # from http://homepage1.nifty.com/nomenclator/unicode/ucs_utf.htm
- my $uni = 0x10000 + (hex($_[0]) - 0xD800) * 0x400 + (hex($_[1]) - 0xDC00); # from perlunicode
- my $bit = unpack('B32', pack('N', $uni));
-
- if ( $bit =~ /^00000000000(...)(......)(......)(......)$/ ) {
- my ($w, $x, $y, $z) = ($1, $2, $3, $4);
- return pack('B*', sprintf('11110%s10%s10%s10%s', $w, $x, $y, $z));
- }
- else {
- Carp::croak("Invalid surrogate pair");
- }
-}
-
-
-sub _decode_unicode {
- my ($u) = @_;
- my ($utf8bit);
-
- if ( $u =~ /^00([89a-f][0-9a-f])$/i ) { # 0x80-0xff
- return pack( 'H2', $1 );
- }
-
- my $bit = unpack("B*", pack("H*", $u));
-
- if ( $bit =~ /^00000(.....)(......)$/ ) {
- $utf8bit = sprintf('110%s10%s', $1, $2);
- }
- elsif ( $bit =~ /^(....)(......)(......)$/ ) {
- $utf8bit = sprintf('1110%s10%s10%s', $1, $2, $3);
- }
- else {
- Carp::croak("Invalid escaped unicode");
- }
-
- return pack('B*', $utf8bit);
-}
-
-
-sub JSON::PP::incr_text {
- $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new;
-
- if ( $_[0]->{_incr_parser}->{incr_parsing} ) {
- Carp::croak("incr_text can not be called when the incremental parser already started parsing");
- }
-
- $_[0]->{_incr_parser}->{incr_text} = $_[1] if ( @_ > 1 );
- $_[0]->{_incr_parser}->{incr_text};
-}
-
-
-1;
-__END__
-
-=pod
-
-=head1 NAME
-
-JSON::PP5005 - Helper module in using JSON::PP in Perl 5.005
-
-=head1 DESCRIPTION
-
-A helper module that JSON::PP calls internally.
-
-=head1 AUTHOR
-
-Makamaka Hannyaharamitu, E<lt>makamaka[at]cpan.orgE<gt>
-
-
-=head1 COPYRIGHT AND LICENSE
-
-Copyright 2007-2012 by Makamaka Hannyaharamitu
-
-This library is free software; you can redistribute it and/or modify
-it under the same terms as Perl itself.
-
-=cut
-
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/rpn_r50_caffe_c4.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/rpn_r50_caffe_c4.py
deleted file mode 100644
index 9c32a55ddaa88812c8020872c33502122c409041..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/rpn_r50_caffe_c4.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# model settings
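-# RPN-only model with a caffe-style ResNet-50 C4 backbone: only the first three
-# stages are built, so the stride-16 res4 feature map (out_indices=(2,)) feeds
-# the RPN head directly and no FPN neck is used (neck=None).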
-model = dict(
- type='RPN',
- pretrained='open-mmlab://detectron2/resnet50_caffe',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=3,
- strides=(1, 2, 2),
- dilations=(1, 1, 1),
- out_indices=(2, ),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=False),
- norm_eval=True,
- style='caffe'),
- neck=None,
- rpn_head=dict(
- type='RPNHead',
- in_channels=1024,
- feat_channels=1024,
- anchor_generator=dict(
- type='AnchorGenerator',
- scales=[2, 4, 8, 16, 32],
- ratios=[0.5, 1.0, 2.0],
- strides=[16]),
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
- loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
- # model training and testing settings
- train_cfg=dict(
- rpn=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.7,
- neg_iou_thr=0.3,
- min_pos_iou=0.3,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=256,
- pos_fraction=0.5,
- neg_pos_ub=-1,
- add_gt_as_proposals=False),
- allowed_border=0,
- pos_weight=-1,
- debug=False)),
- test_cfg=dict(
- rpn=dict(
- nms_pre=12000,
- max_per_img=2000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0)))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_x101_32x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_x101_32x4d_fpn_1x_coco.py
deleted file mode 100644
index 18daadd6a9d3024f30157aea1f1cef3e13326b5a..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_x101_32x4d_fpn_1x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = './ga_retinanet_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_32x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch'))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r101_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r101_fpn_1x_coco.py
deleted file mode 100644
index 09521310523f38be90518e9c7db6856db1225c1b..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r101_fpn_1x_coco.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './vfnet_r50_fpn_1x_coco.py'
-model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_80k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_80k_ade20k.py
deleted file mode 100644
index 78f4d0d9de3d6b8dd2b097531317956d8e3b19f1..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_80k_ade20k.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = [
- '../_base_/models/deeplabv3_r50-d8.py', '../_base_/datasets/ade20k.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
-model = dict(
- decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_769x769_40k_cityscapes.py
deleted file mode 100644
index 01d8f27c8cc62e681df770e111ff9f866e9d112f..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_769x769_40k_cityscapes.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = [
- '../_base_/models/fcn_r50-d8.py',
- '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- backbone=dict(dilations=(1, 1, 1, 2), strides=(1, 2, 2, 1)),
- decode_head=dict(align_corners=True, dilation=6),
- auxiliary_head=dict(align_corners=True, dilation=6),
- test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513)))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_480x480_80k_pascal_context.py b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_480x480_80k_pascal_context.py
deleted file mode 100644
index 09e96dabf74cc17a5fcb09b114f2bddd2af9af7f..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_480x480_80k_pascal_context.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = [
- '../_base_/models/pspnet_r50-d8.py',
- '../_base_/datasets/pascal_context.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_80k.py'
-]
-model = dict(
- decode_head=dict(num_classes=60),
- auxiliary_head=dict(num_classes=60),
- test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
-optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
diff --git a/spaces/Anonymous-sub/Rerender/src/import_util.py b/spaces/Anonymous-sub/Rerender/src/import_util.py
deleted file mode 100644
index c7dcbc49e46cf2f729e1250adf879d790b6451cf..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/src/import_util.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import os
-import sys
-
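-# Put the bundled gmflow_module and ControlNet checkouts on sys.path so they
-# can be imported as top-level packages.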
-cur_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-gmflow_dir = os.path.join(cur_dir, 'gmflow_module')
-controlnet_dir = os.path.join(cur_dir, 'ControlNet')
-sys.path.insert(0, gmflow_dir)
-sys.path.insert(0, controlnet_dir)
-
-import ControlNet.share # noqa: F401 E402
diff --git a/spaces/Aravindsssss/gradin/README.md b/spaces/Aravindsssss/gradin/README.md
deleted file mode 100644
index a06d1322c2fc2b48cb0cc917072b433347e962bb..0000000000000000000000000000000000000000
--- a/spaces/Aravindsssss/gradin/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Gradin
-emoji: 🦀
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_roi_heads.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_roi_heads.py
deleted file mode 100644
index 6af160efeb02e500e5f354fa8107a05a12b735eb..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_roi_heads.py
+++ /dev/null
@@ -1,323 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import unittest
-from copy import deepcopy
-import torch
-from torch import nn
-
-from detectron2 import model_zoo
-from detectron2.config import get_cfg
-from detectron2.export.torchscript_patch import (
- freeze_training_mode,
- patch_builtin_len,
- patch_instances,
-)
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.proposal_generator.build import build_proposal_generator
-from detectron2.modeling.roi_heads import (
- FastRCNNConvFCHead,
- KRCNNConvDeconvUpsampleHead,
- MaskRCNNConvUpsampleHead,
- StandardROIHeads,
- build_roi_heads,
-)
-from detectron2.projects import point_rend
-from detectron2.structures import BitMasks, Boxes, ImageList, Instances, RotatedBoxes
-from detectron2.utils.events import EventStorage
-from detectron2.utils.testing import assert_instances_allclose, random_boxes
-
-logger = logging.getLogger(__name__)
-
-"""
-Make sure the losses of ROIHeads/RPN do not change, to avoid
-breaking the forward logic by mistake.
-This relies on the assumption that pytorch's RNG is stable.
-"""
-
-
-class ROIHeadsTest(unittest.TestCase):
- def test_roi_heads(self):
- torch.manual_seed(121)
- cfg = get_cfg()
- cfg.MODEL.ROI_BOX_HEAD.NAME = "FastRCNNConvFCHead"
- cfg.MODEL.ROI_BOX_HEAD.NUM_FC = 2
- cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE = "ROIAlignV2"
- cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS = (10, 10, 5, 5)
- cfg.MODEL.MASK_ON = True
- num_images = 2
- images_tensor = torch.rand(num_images, 20, 30)
- image_sizes = [(10, 10), (20, 30)]
- images = ImageList(images_tensor, image_sizes)
- num_channels = 1024
- features = {"res4": torch.rand(num_images, num_channels, 1, 2)}
- feature_shape = {"res4": ShapeSpec(channels=num_channels, stride=16)}
-
- image_shape = (15, 15)
- gt_boxes0 = torch.tensor([[1, 1, 3, 3], [2, 2, 6, 6]], dtype=torch.float32)
- gt_instance0 = Instances(image_shape)
- gt_instance0.gt_boxes = Boxes(gt_boxes0)
- gt_instance0.gt_classes = torch.tensor([2, 1])
- gt_instance0.gt_masks = BitMasks(torch.rand((2,) + image_shape) > 0.5)
- gt_boxes1 = torch.tensor([[1, 5, 2, 8], [7, 3, 10, 5]], dtype=torch.float32)
- gt_instance1 = Instances(image_shape)
- gt_instance1.gt_boxes = Boxes(gt_boxes1)
- gt_instance1.gt_classes = torch.tensor([1, 2])
- gt_instance1.gt_masks = BitMasks(torch.rand((2,) + image_shape) > 0.5)
- gt_instances = [gt_instance0, gt_instance1]
-
- proposal_generator = build_proposal_generator(cfg, feature_shape)
- roi_heads = StandardROIHeads(cfg, feature_shape)
-
- with EventStorage(): # capture events in a new storage to discard them
- proposals, proposal_losses = proposal_generator(images, features, gt_instances)
- _, detector_losses = roi_heads(images, features, proposals, gt_instances)
-
- detector_losses.update(proposal_losses)
- expected_losses = {
- "loss_cls": 4.5253729820251465,
- "loss_box_reg": 0.009785720147192478,
- "loss_mask": 0.693184494972229,
- "loss_rpn_cls": 0.08186662942171097,
- "loss_rpn_loc": 0.1104838103055954,
- }
- succ = all(
- torch.allclose(detector_losses[name], torch.tensor(expected_losses.get(name, 0.0)))
- for name in detector_losses.keys()
- )
- self.assertTrue(
- succ,
- "Losses has changed! New losses: {}".format(
- {k: v.item() for k, v in detector_losses.items()}
- ),
- )
-
- def test_rroi_heads(self):
- torch.manual_seed(121)
- cfg = get_cfg()
- cfg.MODEL.PROPOSAL_GENERATOR.NAME = "RRPN"
- cfg.MODEL.ANCHOR_GENERATOR.NAME = "RotatedAnchorGenerator"
- cfg.MODEL.ROI_HEADS.NAME = "RROIHeads"
- cfg.MODEL.ROI_BOX_HEAD.NAME = "FastRCNNConvFCHead"
- cfg.MODEL.ROI_BOX_HEAD.NUM_FC = 2
- cfg.MODEL.RPN.BBOX_REG_WEIGHTS = (1, 1, 1, 1, 1)
- cfg.MODEL.RPN.HEAD_NAME = "StandardRPNHead"
- cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE = "ROIAlignRotated"
- cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS = (10, 10, 5, 5, 1)
- num_images = 2
- images_tensor = torch.rand(num_images, 20, 30)
- image_sizes = [(10, 10), (20, 30)]
- images = ImageList(images_tensor, image_sizes)
- num_channels = 1024
- features = {"res4": torch.rand(num_images, num_channels, 1, 2)}
- feature_shape = {"res4": ShapeSpec(channels=num_channels, stride=16)}
-
- image_shape = (15, 15)
- gt_boxes0 = torch.tensor([[2, 2, 2, 2, 30], [4, 4, 4, 4, 0]], dtype=torch.float32)
- gt_instance0 = Instances(image_shape)
- gt_instance0.gt_boxes = RotatedBoxes(gt_boxes0)
- gt_instance0.gt_classes = torch.tensor([2, 1])
- gt_boxes1 = torch.tensor([[1.5, 5.5, 1, 3, 0], [8.5, 4, 3, 2, -50]], dtype=torch.float32)
- gt_instance1 = Instances(image_shape)
- gt_instance1.gt_boxes = RotatedBoxes(gt_boxes1)
- gt_instance1.gt_classes = torch.tensor([1, 2])
- gt_instances = [gt_instance0, gt_instance1]
-
- proposal_generator = build_proposal_generator(cfg, feature_shape)
- roi_heads = build_roi_heads(cfg, feature_shape)
-
- with EventStorage(): # capture events in a new storage to discard them
- proposals, proposal_losses = proposal_generator(images, features, gt_instances)
- _, detector_losses = roi_heads(images, features, proposals, gt_instances)
-
- detector_losses.update(proposal_losses)
- expected_losses = {
- "loss_cls": 4.365657806396484,
- "loss_box_reg": 0.0015851043863222003,
- "loss_rpn_cls": 0.2427729219198227,
- "loss_rpn_loc": 0.3646621108055115,
- }
- succ = all(
- torch.allclose(detector_losses[name], torch.tensor(expected_losses.get(name, 0.0)))
- for name in detector_losses.keys()
- )
- self.assertTrue(
- succ,
- "Losses has changed! New losses: {}".format(
- {k: v.item() for k, v in detector_losses.items()}
- ),
- )
-
- def test_box_head_scriptability(self):
- input_shape = ShapeSpec(channels=1024, height=14, width=14)
- box_features = torch.randn(4, 1024, 14, 14)
-
- box_head = FastRCNNConvFCHead(
- input_shape, conv_dims=[512, 512], fc_dims=[1024, 1024]
- ).eval()
- script_box_head = torch.jit.script(box_head)
-
- origin_output = box_head(box_features)
- script_output = script_box_head(box_features)
- self.assertTrue(torch.equal(origin_output, script_output))
-
- def test_mask_head_scriptability(self):
- input_shape = ShapeSpec(channels=1024)
- mask_features = torch.randn(4, 1024, 14, 14)
-
- image_shapes = [(10, 10), (15, 15)]
- pred_instance0 = Instances(image_shapes[0])
- pred_classes0 = torch.tensor([1, 2, 3], dtype=torch.int64)
- pred_instance0.pred_classes = pred_classes0
- pred_instance1 = Instances(image_shapes[1])
- pred_classes1 = torch.tensor([4], dtype=torch.int64)
- pred_instance1.pred_classes = pred_classes1
-
- mask_head = MaskRCNNConvUpsampleHead(
- input_shape, num_classes=80, conv_dims=[256, 256]
- ).eval()
- # pred_instance will be in-place changed during the inference
- # process of `MaskRCNNConvUpsampleHead`
- origin_outputs = mask_head(mask_features, deepcopy([pred_instance0, pred_instance1]))
-
- fields = {"pred_masks": torch.Tensor, "pred_classes": torch.Tensor}
- with freeze_training_mode(mask_head), patch_instances(fields) as NewInstances:
- script_mask_head = torch.jit.script(mask_head)
- pred_instance0 = NewInstances.from_instances(pred_instance0)
- pred_instance1 = NewInstances.from_instances(pred_instance1)
- script_outputs = script_mask_head(mask_features, [pred_instance0, pred_instance1])
-
- for origin_ins, script_ins in zip(origin_outputs, script_outputs):
- assert_instances_allclose(origin_ins, script_ins, rtol=0)
-
- def test_keypoint_head_scriptability(self):
- input_shape = ShapeSpec(channels=1024, height=14, width=14)
- keypoint_features = torch.randn(4, 1024, 14, 14)
-
- image_shapes = [(10, 10), (15, 15)]
- pred_boxes0 = torch.tensor([[1, 1, 3, 3], [2, 2, 6, 6], [1, 5, 2, 8]], dtype=torch.float32)
- pred_instance0 = Instances(image_shapes[0])
- pred_instance0.pred_boxes = Boxes(pred_boxes0)
- pred_boxes1 = torch.tensor([[7, 3, 10, 5]], dtype=torch.float32)
- pred_instance1 = Instances(image_shapes[1])
- pred_instance1.pred_boxes = Boxes(pred_boxes1)
-
- keypoint_head = KRCNNConvDeconvUpsampleHead(
- input_shape, num_keypoints=17, conv_dims=[512, 512]
- ).eval()
- origin_outputs = keypoint_head(
- keypoint_features, deepcopy([pred_instance0, pred_instance1])
- )
-
- fields = {
- "pred_boxes": Boxes,
- "pred_keypoints": torch.Tensor,
- "pred_keypoint_heatmaps": torch.Tensor,
- }
- with freeze_training_mode(keypoint_head), patch_instances(fields) as NewInstances:
- script_keypoint_head = torch.jit.script(keypoint_head)
- pred_instance0 = NewInstances.from_instances(pred_instance0)
- pred_instance1 = NewInstances.from_instances(pred_instance1)
- script_outputs = script_keypoint_head(
- keypoint_features, [pred_instance0, pred_instance1]
- )
-
- for origin_ins, script_ins in zip(origin_outputs, script_outputs):
- assert_instances_allclose(origin_ins, script_ins, rtol=0)
-
- def test_StandardROIHeads_scriptability(self):
- cfg = get_cfg()
- cfg.MODEL.ROI_BOX_HEAD.NAME = "FastRCNNConvFCHead"
- cfg.MODEL.ROI_BOX_HEAD.NUM_FC = 2
- cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE = "ROIAlignV2"
- cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS = (10, 10, 5, 5)
- cfg.MODEL.MASK_ON = True
- cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST = 0.01
- cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.01
- num_images = 2
- images_tensor = torch.rand(num_images, 20, 30)
- image_sizes = [(10, 10), (20, 30)]
- images = ImageList(images_tensor, image_sizes)
- num_channels = 1024
- features = {"res4": torch.rand(num_images, num_channels, 1, 2)}
- feature_shape = {"res4": ShapeSpec(channels=num_channels, stride=16)}
-
- roi_heads = StandardROIHeads(cfg, feature_shape).eval()
-
- proposal0 = Instances(image_sizes[0])
- proposal_boxes0 = torch.tensor([[1, 1, 3, 3], [2, 2, 6, 6]], dtype=torch.float32)
- proposal0.proposal_boxes = Boxes(proposal_boxes0)
- proposal0.objectness_logits = torch.tensor([0.5, 0.7], dtype=torch.float32)
-
- proposal1 = Instances(image_sizes[1])
- proposal_boxes1 = torch.tensor([[1, 5, 2, 8], [7, 3, 10, 5]], dtype=torch.float32)
- proposal1.proposal_boxes = Boxes(proposal_boxes1)
- proposal1.objectness_logits = torch.tensor([0.1, 0.9], dtype=torch.float32)
- proposals = [proposal0, proposal1]
-
- pred_instances, _ = roi_heads(images, features, proposals)
- fields = {
- "objectness_logits": torch.Tensor,
- "proposal_boxes": Boxes,
- "pred_classes": torch.Tensor,
- "scores": torch.Tensor,
- "pred_masks": torch.Tensor,
- "pred_boxes": Boxes,
- "pred_keypoints": torch.Tensor,
- "pred_keypoint_heatmaps": torch.Tensor,
- }
- with freeze_training_mode(roi_heads), patch_instances(fields) as new_instances:
- proposal0 = new_instances.from_instances(proposal0)
- proposal1 = new_instances.from_instances(proposal1)
- proposals = [proposal0, proposal1]
- scripted_roi_heads = torch.jit.script(roi_heads)
- scripted_pred_instances, _ = scripted_roi_heads(images, features, proposals)
-
- for instance, scripted_instance in zip(pred_instances, scripted_pred_instances):
- assert_instances_allclose(instance, scripted_instance, rtol=0)
-
- def test_PointRend_mask_head_tracing(self):
- cfg = model_zoo.get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml")
- point_rend.add_pointrend_config(cfg)
- cfg.MODEL.ROI_HEADS.IN_FEATURES = ["p2", "p3"]
- cfg.MODEL.ROI_MASK_HEAD.NAME = "PointRendMaskHead"
- cfg.MODEL.ROI_MASK_HEAD.POOLER_TYPE = ""
- cfg.MODEL.ROI_MASK_HEAD.POINT_HEAD_ON = True
- chan = 256
- head = point_rend.PointRendMaskHead(
- cfg,
- {
- "p2": ShapeSpec(channels=chan, stride=4),
- "p3": ShapeSpec(channels=chan, stride=8),
- },
- )
-
- def gen_inputs(h, w, N):
- p2 = torch.rand(1, chan, h, w)
- p3 = torch.rand(1, chan, h // 2, w // 2)
- boxes = random_boxes(N, max_coord=h)
- return p2, p3, boxes
-
- class Wrap(nn.ModuleDict):
- def forward(self, p2, p3, boxes):
- features = {
- "p2": p2,
- "p3": p3,
- }
- inst = Instances((p2.shape[2] * 4, p2.shape[3] * 4))
- inst.pred_boxes = Boxes(boxes)
- inst.pred_classes = torch.zeros(inst.__len__(), dtype=torch.long)
- out = self.head(features, [inst])[0]
- return out.pred_masks
-
- model = Wrap({"head": head})
- model.eval()
- with torch.no_grad(), patch_builtin_len():
- traced = torch.jit.trace(model, gen_inputs(302, 208, 20))
- inputs = gen_inputs(100, 120, 30)
- out_eager = model(*inputs)
- out_trace = traced(*inputs)
- self.assertTrue(torch.allclose(out_eager, out_trace))
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/AzinZ/vitscn/monotonic_align/core.py b/spaces/AzinZ/vitscn/monotonic_align/core.py
deleted file mode 100644
index 5ff728cd74c9228346a82ec64a9829cb98ad315e..0000000000000000000000000000000000000000
--- a/spaces/AzinZ/vitscn/monotonic_align/core.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import numba
-
-
-@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]),
- nopython=True, nogil=True)
-def maximum_path_jit(paths, values, t_ys, t_xs):
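- # Forward pass: accumulate the best monotonic-alignment score into `value`,
- # then backtrack from the last column, writing the chosen 0/1 path into `paths`.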
- b = paths.shape[0]
- max_neg_val = -1e9
- for i in range(int(b)):
- path = paths[i]
- value = values[i]
- t_y = t_ys[i]
- t_x = t_xs[i]
-
- v_prev = v_cur = 0.0
- index = t_x - 1
-
- for y in range(t_y):
- for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- if x == y:
- v_cur = max_neg_val
- else:
- v_cur = value[y - 1, x]
- if x == 0:
- if y == 0:
- v_prev = 0.
- else:
- v_prev = max_neg_val
- else:
- v_prev = value[y - 1, x - 1]
- value[y, x] += max(v_prev, v_cur)
-
- for y in range(t_y - 1, -1, -1):
- path[y, index] = 1
- if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]):
- index = index - 1
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Cazador De Ciervos 2018 Hack Apk 5.2.4.md b/spaces/Benson/text-generation/Examples/Cazador De Ciervos 2018 Hack Apk 5.2.4.md
deleted file mode 100644
index c01e53344d0840e74c85ce3ea7ebd20eeb8f6f8c..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cazador De Ciervos 2018 Hack Apk 5.2.4.md
+++ /dev/null
@@ -1,65 +0,0 @@
-
-Deer Hunter 2018 Hack APK 5.2.4: What You Need to Know
-Deer Hunter 2018 is one of the most popular hunting simulation games on Android. It lets you hunt a variety of animals around the world, from Alaska to Zimbabwe, using a wide range of weapons and accessories. You can also compete with other players in seasonal events, historical hunts, spearfishing, and target shooting.
-However, if you want to enjoy the game without limits or restrictions, you may be interested in using a hack APK file. A hack APK is a modified version of the original game that gives you access to unlimited resources, unlocked features, and other advantages. In this article, we will tell you everything you need to know about Deer Hunter 2018 Hack APK 5.2.4, including its features, benefits, risks, and tips.
-deer hunter 2018 hack apk 5.2.4 Download Zip ✪ https://bltlly.com/2v6MHo
- Features of Deer Hunter 2018 Hack APK 5.2.4
-Deer Hunter 2018 Hack APK 5.2.4 is a hacked version of the game that offers several features not available in the official release. Some of these features are:
-
-Unlimited money and gold: You can get as much money and gold as you want in the game, which you can use to buy new weapons, accessories, upgrades, energy, tickets, and more.
-All weapons and accessories unlocked: You can access every weapon and accessory in the game, including rifles, shotguns, pistols, bows, crossbows, knives, spears, and so on. You can also customize them with scopes, magazines, barrels, stocks, etc.
-No ads and no root required: You can play the game without annoying ads or pop-ups, and you do not need to root your device to install the hack APK file.
-How to download and install the hack APK file: To download and install the hack APK file, follow these steps:
-
-Go to [1](https://lygiang.net/deer-hunter-2018-mod-apk/) or any other reputable site that offers the hack APK file.
-
-Enable unknown sources on your device by going to Settings > Security > Unknown Sources.
-Find the file in your file manager app and tap on it.
-Follow the on-screen instructions to install the app.
-Launch the app and enjoy the game.
-
-
- Benefits of Using Deer Hunter 2018 Hack APK 5.2.4
-Using Deer Hunter 2018 Hack APK 5.2.4 can give you several benefits that enhance your gaming experience. Some of these benefits are:
-
-Enjoy the game without spending real money: You do not have to spend real money on in-app purchases or subscriptions to play the game. You can get everything you need for free with the hack APK file.
-Explore different hunting locations and animals: You can travel to different hunting regions and hunt various animals, from deer and bears to lions and elephants. You can also take in the game's realistic graphics and sounds, which make you feel like you are out in the wild.
-Improve your shooting skills and accuracy: You can practice your shooting skills and accuracy with different weapons and scopes. You can also learn to aim at vital organs and headshots to earn more rewards and trophies.
-Take part in various events and challenges: You can join different events and challenges in the game, such as seasonal hunts, historical hunts, spearfishing, and target shooting. You can also compete with other players and climb the leaderboards.
-
- Risks of Using Deer Hunter 2018 Hack APK 5.2.4
-Using Deer Hunter 2018 Hack APK 5.2.4 also carries some risks you should be aware of before using it. Some of these risks are:
-
-
-Malware and virus infection: Downloading a hack APK file from an unknown or untrusted source can expose your device to malware and viruses. These malicious programs can damage your device, corrupt your files, steal your data, or even take control of your device.
-Data theft and privacy violation: Using a hack APK file can also compromise your data and privacy. The hack APK file may require you to grant certain permissions or access to your device, allowing it to collect personal information such as your name, email, phone number, location, and so on. This information can be used for identity theft, fraud, spam, or other malicious purposes.
-Ban from the official game server: Using a hack APK file can also get you banned from the official game server. Game developers and publishers have ways to detect whether you are using a hack APK file. If you are caught using one, they can ban your account, delete your progress, or block your access to the game.
-
- Tips and Tricks for Playing Deer Hunter 2018
-Whether or not you decide to use Deer Hunter 2018 Hack APK 5.2.4, here are some tips and tricks that can help you play the game better:
-
-Cover your scent and use lures: Animals have a keen sense of smell and can detect your presence if you are not careful. You can use scent-cover items or sprays to mask your scent and avoid alerting them. You can also use lures or calls to draw them closer to you.
-Aim for vital organs and headshots: Shooting an animal in the vital organs or the head deals more damage and kills it faster. You will also earn more rewards and trophies for doing so. However, hitting these areas can be tricky and requires precision and timing. You can use infrared vision mode or slow-motion mode to help you aim better.
-
-Stay calm and patient: Hunting is not a fast-paced action game. It requires patience and stealth. Move slowly and quietly, avoid making noise, stay hidden behind cover, and wait for the right moment to shoot. If you rush or make mistakes, you will scare the animals away or miss your shots.
-Know when and where to hunt: Different animals have different behaviors and patterns depending on the time of day and the location. Knowing when and where to hunt them increases your chances of success. For example, some animals are more active at dawn or dusk, while others are more active at midday or at night. Some animals prefer open fields or grasslands, while others prefer forests or mountains.
-
- Conclusion
-Deer Hunter 2018 is a fun and realistic hunting simulation game that lets you hunt various animals around the world. However, if you want to unlock all of the game's features and resources, you may want to use Deer Hunter 2018 Hack APK 5.2.4, a hacked version of the game that gives you unlimited money, gold, weapons, accessories, and more. Using this hack APK file also comes with risks, such as legal issues, malware infection, data theft, and a ban from the game server, so be careful and responsible when using it. Alternatively, you can play the game without hacks and follow some tips and tricks to improve your skills and performance. Either way, we hope you enjoy playing Deer Hunter 2018 and have a great time hunting.
-
- Frequently Asked Questions
-Here are some frequently asked questions about Deer Hunter 2018 Hack APK 5.2.4:
-
-Is Deer Hunter 2018 Hack APK 5.2.4 safe to use?
-
-How can I update Deer Hunter 2018 Hack APK 5.2.4?
-Deer Hunter 2018 Hack APK 5.2.4 may not work with the latest version of the game, since the game's developers and publishers may update their security measures and features. You should therefore check regularly for updates on the site where you downloaded the hack APK file and download the latest version if one is available.
-Can I play Deer Hunter 2018 online with Deer Hunter 2018 Hack APK 5.2.4?
-Deer Hunter 2018 Hack APK 5.2.4 may let you play the game online with other players, but this is not recommended, as it can ruin the game's balance and fairness for other players. You may also be detected and banned from the game server if the game's developers and publishers find out you are using a hack APK file.
-Can I use Deer Hunter 2018 Hack APK 5.2.4 on iOS devices?
-Deer Hunter 2018 Hack APK 5.2.4 is only compatible with Android devices, since it is an APK file, which can only be installed on the Android operating system. If you want to use a hack for Deer Hunter 2018 on iOS devices, you will need to find a different method or tool.
-Can I use Deer Hunter 2018 Hack APK 5.2.4 without an Internet connection?
-Deer Hunter 2018 Hack APK 5.2.4 can work without an Internet connection for some of the game's features and modes, such as offline hunting and target shooting. However, you will need an Internet connection for other features and modes, such as online hunts, events, challenges, leaderboards, and so on.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Contra Huelga Global Ofensiva Apk Descargar Para Pc.md b/spaces/Benson/text-generation/Examples/Contra Huelga Global Ofensiva Apk Descargar Para Pc.md
deleted file mode 100644
index 95ef1ded270b7195e58584d0c3df392dd76d8b09..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Contra Huelga Global Ofensiva Apk Descargar Para Pc.md
+++ /dev/null
@@ -1,96 +0,0 @@
-
-Counter Strike Global Offensive APK Download for PC
-If you are a fan of first-person shooters, you have probably heard of Counter Strike Global Offensive, one of the most popular and competitive titles in the genre. But did you know you can play this game on your PC using an APK file? In this article, we will show you how to download and install the Counter Strike Global Offensive APK for PC, along with some of the benefits of and tips for playing this amazing game.
-counter strike global offensive apk download for pc Download ✦✦✦ https://bltlly.com/2v6JWX
- What Is Counter Strike Global Offensive?
-Counter Strike Global Offensive, or CS:GO for short, is a multiplayer first-person shooter released in 2012 by Valve and Hidden Path Entertainment. It is the fourth installment in the Counter Strike series, which began as a mod for Half-Life in 1999. CS:GO pits two teams of five players each against one another across various game modes and maps with different objectives, such as defusing bombs, rescuing hostages, or eliminating enemies. CS:GO also offers new maps, characters, weapons, and game modes, such as Arms Race, Flying Scoutsman, and Wingman. CS:GO is one of the most played and watched games in the world, with millions of players and fans as well as a thriving professional scene with tournaments and leagues.
- Why Download the Counter Strike Global Offensive APK for PC?
-While CS:GO is officially available for Windows, Mac OS, Linux, PlayStation 3, and Xbox 360, some players may prefer to play it on their PC using an APK file. An APK file is an Android application package that contains all the files and data needed to run an app on an Android device. Using emulator software, such as BlueStacks or Nox Player, you can run an APK file on your PC and enjoy the same features and functions as on your mobile device. Some of the benefits of downloading the Counter Strike Global Offensive APK for PC are:
-
-
-You can play CS:GO on your PC with better graphics, performance, and controls than on your mobile device.
-You can play CS:GO on your PC with more customization options, such as changing the resolution, frame rate, sound settings, and keyboard shortcuts.
-You can play CS:GO on your PC with more accessibility options, such as using a mouse, keyboard, controller, or touchscreen.
-You can play CS:GO on your PC with more security options, such as using a VPN, antivirus, or firewall.
-
- How to Download the Counter Strike Global Offensive APK for PC
-Downloading and installing the Counter Strike Global Offensive APK for PC is not difficult if you follow these simple steps:
- Step 1: Download a torrent client or a launcher
-The first thing to do is download software that will let you fetch the Counter Strike Global Offensive APK file from the Internet. There are two main options: a torrent client or a launcher. A torrent client is software that lets you download files from peer-to-peer networks, such as BitTorrent or uTorrent. A launcher is software that lets you download and install games from various sources, such as Epic Games or Origin. Here are some links to download this software:
-
-BitTorrent:
-uTorrent:
-Epic Games:
-Origin:
-
- Step 2: Download the Counter Strike Global Offensive APK file
-The next thing to do is download the Counter Strike Global Offensive APK file from a reliable and safe source. Many websites offer this file, but some of them may contain viruses, malware, or other harmful content. You should therefore always check other users' reviews, ratings, and comments before downloading anything. Here are some links to download the Counter Strike Global Offensive APK file:
-
-
-APKPure:
-
-APKMonk:
-APKHome:
-
- Step 3: Install the Counter Strike Global Offensive APK file
-The third thing to do is install the Counter Strike Global Offensive APK file on your PC using emulator software. An emulator is software that lets you run Android apps on your PC, such as BlueStacks or Nox Player. You can download this software from its official websites or from other sources. Here are some links to download this software:
-
-BlueStacks:
-Nox Player:
-
-After downloading and installing the emulator software, follow these steps to install the Counter Strike Global Offensive APK file:
-
-Open the emulator software and sign in with your Google account.
- Locate the Counter Strike Global Offensive APK file on your PC and drag and drop it into the emulator window.
-Wait for the installation process to finish and grant the necessary permissions.
-
- Step 4: Launch the game and enjoy
-The last thing to do is launch the game and enjoy playing it on your PC. You can open the game from the emulator's home screen or from the desktop shortcut. You can also adjust settings, such as graphics, sound, and controls, to your liking. Here are some tips for playing the game:
-
-Make sure you have a stable Internet connection and enough storage space on your PC.
-Update the game regularly to get the latest features and fixes.
-Join a server that matches your region, skill level, and game mode.
-Communicate with your teammates and follow their strategies.
-Practice your aim, movement, and tactics in offline mode or on custom maps.
-
- What Are the System Requirements for the Counter Strike Global Offensive APK for PC?
-
-| Component | Minimum | Recommended |
-| --- | --- | --- |
-| Operating system | Windows 7/8/10 (64-bit) | Windows 10 (64-bit) |
-| CPU | Intel Core 2 Duo E6600 / AMD Phenom X3 8750 | Intel Core i5-2400 / AMD FX-8320 |
-| RAM | 2 GB | 4 GB |
-| GPU | NVIDIA GeForce 8600 GT / ATI Radeon HD 4670 | NVIDIA GeForce GTX 660 / AMD Radeon HD 7870 |
-| Disk space | 15 GB | 15 GB |
-| Emulator software | BlueStacks / Nox Player | BlueStacks / Nox Player |
-| Internet connection | Broadband | Broadband |
-
- What Are Some Tips and Tricks for Playing the Counter Strike Global Offensive APK for PC?
-Playing the Counter Strike Global Offensive APK for PC can be a fun and rewarding experience, but it can also be challenging and frustrating at times. To help you improve your skills and performance in the game, here are some tips and tricks you can use:
-
-Learn the maps and their layouts, such as bomb sites, choke points, hiding spots, and angles.
-Use the right weapons and equipment for each situation, such as rifles, pistols, grenades, and armor.
-Manage your economy and buy wisely: save, spend, or drop money for your teammates as the round demands.
-Use sound and the radar to locate and track your enemies and allies.
-Aim for the head and learn to control your recoil and spray patterns.
-Move smartly and unpredictably: crouch, jump, strafe, and peek.
-Work as a team and communicate effectively: call out positions, enemies, strategies, and requests.
-Watch professional players and streamers to learn from their gameplay and tactics.
-
- Conclusion
-
- Frequently Asked Questions
-Here are some of the most frequently asked questions about the Counter Strike Global Offensive APK for PC:
-
- Is the Counter Strike Global Offensive APK for PC safe to download and install?
-Yes, the Counter Strike Global Offensive APK for PC is safe to download and install if you use a reliable and secure source, such as the ones we have provided in this article. You should also use emulator software that is trustworthy and secure, such as BlueStacks or Nox Player. In addition, use a VPN, antivirus, or firewall to protect your PC from potential threats or attacks.
- Is the Counter Strike Global Offensive APK for PC free to play?
-Yes, the Counter Strike Global Offensive APK for PC is free to play if you download it from a source that does not charge fees or require a subscription. However, you may have to pay for some optional in-game features or items, such as skins, cases, keys, stickers, or passes. You can also support the developers by buying the official version of the game on Steam or other platforms.
- Can I play the Counter Strike Global Offensive APK for PC online with other players?
-Yes, you can play the Counter Strike Global Offensive APK for PC online with other players who are using the same version of the game as you. You can join or create servers that match your region, skill level, and game-mode preferences. You can also invite or join your friends who are playing the game on their PCs or mobile devices.
-Can I play the Counter Strike Global Offensive APK for PC without an Internet connection?
-
-Can I update the Counter Strike Global Offensive APK for PC to get the latest features and fixes?
-Yes, you can update the Counter Strike Global Offensive APK for PC to get the latest features and fixes if you download it from a source that provides regular updates. You can also check the game's official website or social media accounts for any news or announcements about updates. Alternatively, you can update the emulator software, or the torrent client or launcher you used to download the game, to get the latest version of the game.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Bowmasters Mod Apk Desbloqueado Todo.md b/spaces/Benson/text-generation/Examples/Descargar Bowmasters Mod Apk Desbloqueado Todo.md
deleted file mode 100644
index 36fc32ee069a09a7a73c211e59858f4dcda0e2d2..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Bowmasters Mod Apk Desbloqueado Todo.md
+++ /dev/null
@@ -1,24 +0,0 @@
-download bowmasters mod apk everything unlocked Download Zip ✯✯✯ https://bltlly.com/2v6KlL
-
-City Smash APK: A Physics Playground Where You Can Destroy a City
- Have you ever wondered what it would be like to destroy a city with a nuclear bomb, missiles, black holes, laser beams, or lightning? If you have, then you should try City Smash APK, a game that lets you do just that. City Smash APK is a physics playground where you can unleash various weapons on a city and watch it crumble and burn. The buildings have been designed to break apart realistically, so you can witness the devastation these weapons create. In this article, we will tell you what City Smash APK is, how to download and install it, how to play it, and why you should try it. What Is City Smash APK?
- City Smash APK is a game that lets you unleash various weapons on a city and watch it crumble and burn. It is a realistic simulation of destruction and physics that will satisfy your inner pyromaniac. It is also a fun and addictive way to relieve stress and boredom by causing chaos and mayhem. A game that lets you unleash various weapons on a city
- City Smash APK offers a range of weapons to choose from, such as nuclear bombs, missiles, black holes, laser beams, lightning, meteors, UFOs, zombies, dinosaurs, and more. Each weapon has its own effect and damage level. You can use one weapon at a time or combine multiple weapons for more destruction. You can also adjust the size and power of the weapons to suit your preferences. A realistic simulation of destruction and physics
-
- City Smash APK is a game that will keep you entertained for hours. You can experiment with different weapons and scenarios to see how much damage you can cause. You can also compare your results with other players on the leaderboard. You can play anytime and anywhere, since it does not require an Internet connection. You can also share screenshots and videos of your destruction with your friends on social media. How to Download and Install City Smash APK
- City Smash APK is not available on the Google Play Store, but you can download it from APKCombo, a website that provides free APK files for Android games and apps. Here are the steps to download and install City Smash APK on your Android device: Steps to download the APK file from APKCombo
- - Go to [APKCombo]( 1 ) in your browser. - Search for "City Smash" in the search bar. - Select "City Smash" from the results. - Choose the latest version or any other version you want. - Tap "Download APK" or "Download XAPK" depending on the file type. - Wait for the download to finish. Steps to install the APK file on your Android device
- - Go to the "Downloads" folder on your device or the location where you saved the APK file. - Tap the APK file to open it. - If prompted, enable "Unknown sources" or "Allow from this source" to permit installing apps from outside the Google Play Store. - Follow the on-screen instructions to install the game. - Once the installation is complete, you can launch the game from the app drawer or the home screen. Permissions and requirements for the game
- City Smash APK requires Android 4.4 or higher and about 100 MB of free storage space on your device. It also requires access to your photos, media, and files in order to save and share screenshots and videos of your destruction. You can deny or revoke these permissions at any time in your device settings. How to Play City Smash APK
-
- When you launch the game, you will see the main menu with four options: Play, Settings, Leaderboard, and More Games. You can tap any of these options to access them. - Play: takes you to the game screen, where you can select a city and a weapon to start smashing it. - Settings: lets you adjust the game's sound, music, vibration, and graphics quality. - Leaderboard: shows you the ranking of other players based on their total damage score. - More Games: redirects you to APKCombo, where you can download more games from the same developer. The different weapons and their effects
- Once you select a city and a weapon, you can tap anywhere on the screen to use it. You can also drag your finger across the screen to aim or move the weapon. Each weapon has its own effect and damage level. Here are some examples of the weapons and their effects: - Nuclear bomb: creates a massive explosion that destroys everything within its radius. It also produces a mushroom cloud and a shockwave that knocks down nearby buildings. - Missile: launches a projectile that hits a specific target and causes a smaller explosion. It also creates smoke and fire that spread to other buildings. - Black hole: creates a dark vortex that sucks in everything around it. It also distorts the space and time around it. - Laser beam: fires a powerful beam of light that cuts through anything in its path. It also creates sparks and flames that ignite other buildings. - Lightning: strikes a random spot in the city with a bolt of electricity. It also creates thunder and lightning effects that light up the sky. Tips and tricks to maximize the damage and the fun
-
- City Smash APK is not just a game but an experience. It is a physics playground where you can destroy a city with various weapons and watch it crumble and burn. Here are some reasons why you should try City Smash APK: The benefits of playing a physics-based game
- City Smash APK is a physics-based game that simulates realistic destruction and physics. Playing a physics-based game can have several benefits for your brain, such as: - improving your spatial awareness and reasoning skills by manipulating objects in three-dimensional space; - enhancing your creativity and problem-solving skills by experimenting with different scenarios and outcomes; - stimulating your curiosity and imagination by exploring different possibilities and effects. The game's features and updates
- City Smash APK is a game that is constantly being updated and improved by its developer. Some of the game's features and updates are: - a variety of weapons to choose from, such as nuclear bombs, missiles, black holes, laser beams, lightning, meteors, UFOs, zombies, dinosaurs, and more; - a selection of cities to destroy, such as New York, Paris, Tokyo, London, and more; - realistic graphics and sound effects that make you feel like you are really destroying a city; - a leaderboard that shows you the ranking of other players based on their total damage score; - regular updates that add new weapons, cities, features, and bug fixes to the game. User reviews and ratings of the game
-
- City Smash APK is a physics playground where you can unleash various weapons on a city and watch it crumble and burn. It is a realistic simulation of destruction and physics that will satisfy your inner pyromaniac. It is also a fun and addictive way to relieve stress and boredom by causing chaos and mayhem. If you want to try City Smash APK, you can download it from APKCombo, a website that provides free APK files for Android games and apps. You can also follow the steps in this article to install it on your Android device. Then you can select a city and a weapon to start destroying it. City Smash APK is a game that will keep you entertained for hours. You can experiment with different weapons and scenarios to see how much damage you can cause. You can also compare your results with other players on the leaderboard. You can also share screenshots and videos of your destruction with your friends on social media. What are you waiting for? Download City Smash APK now and enjoy the ultimate physics playground where you can destroy a city. Five unique FAQs after the conclusion 64aa2da5cf
-
-
-
diff --git a/spaces/BetterAPI/BetterChat/src/lib/server/abortedGenerations.ts b/spaces/BetterAPI/BetterChat/src/lib/server/abortedGenerations.ts
deleted file mode 100644
index 575cf637bfef812c40905e35570ba3ca1a31b241..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat/src/lib/server/abortedGenerations.ts
+++ /dev/null
@@ -1,29 +0,0 @@
-// Shouldn't be needed if we dove into sveltekit internals, see https://github.com/huggingface/chat-ui/pull/88#issuecomment-1523173850
-
-import { setTimeout } from "node:timers/promises";
-import { collections } from "./database";
-
-let closed = false;
-process.on("SIGINT", () => {
- closed = true;
-});
-
-export let abortedGenerations: Map<string, Date> = new Map();
-
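-// Poll the database once per second and rebuild the conversationId -> createdAt
-// map of aborted generations until the process receives SIGINT.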
-async function maintainAbortedGenerations() {
- while (!closed) {
- await setTimeout(1000);
-
- try {
- const aborts = await collections.abortedGenerations.find({}).sort({ createdAt: 1 }).toArray();
-
- abortedGenerations = new Map(
- aborts.map(({ conversationId, createdAt }) => [conversationId.toString(), createdAt])
- );
- } catch (err) {
- console.error(err);
- }
- }
-}
-
-maintainAbortedGenerations();
diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/utils/concatUint8Arrays.ts b/spaces/BetterAPI/BetterChat_new/src/lib/utils/concatUint8Arrays.ts
deleted file mode 100644
index e53396eca7e3dee20a543fb6ac28ecf48c7e3965..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/lib/utils/concatUint8Arrays.ts
+++ /dev/null
@@ -1,12 +0,0 @@
-import { sum } from "./sum";
-
-export function concatUint8Arrays(arrays: Uint8Array[]): Uint8Array {
- const totalLength = sum(arrays.map((a) => a.length));
- const result = new Uint8Array(totalLength);
- let offset = 0;
- for (const array of arrays) {
- result.set(array, offset);
- offset += array.length;
- }
- return result;
-}
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/network/auth.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/network/auth.py
deleted file mode 100644
index c0efa765c853c089c6b1469e82d2e94a2d1cb5e0..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/network/auth.py
+++ /dev/null
@@ -1,559 +0,0 @@
-"""Network Authentication Helpers
-
-Contains interface (MultiDomainBasicAuth) and associated glue code for
-providing credentials in the context of network requests.
-"""
-import logging
-import os
-import shutil
-import subprocess
-import sysconfig
-import typing
-import urllib.parse
-from abc import ABC, abstractmethod
-from functools import lru_cache
-from os.path import commonprefix
-from pathlib import Path
-from typing import Any, Dict, List, NamedTuple, Optional, Tuple
-
-from pip._vendor.requests.auth import AuthBase, HTTPBasicAuth
-from pip._vendor.requests.models import Request, Response
-from pip._vendor.requests.utils import get_netrc_auth
-
-from pip._internal.utils.logging import getLogger
-from pip._internal.utils.misc import (
- ask,
- ask_input,
- ask_password,
- remove_auth_from_url,
- split_auth_netloc_from_url,
-)
-from pip._internal.vcs.versioncontrol import AuthInfo
-
-logger = getLogger(__name__)
-
-KEYRING_DISABLED = False
-
-
-class Credentials(NamedTuple):
- url: str
- username: str
- password: str
-
-
-class KeyRingBaseProvider(ABC):
- """Keyring base provider interface"""
-
- has_keyring: bool
-
- @abstractmethod
- def get_auth_info(self, url: str, username: Optional[str]) -> Optional[AuthInfo]:
- ...
-
- @abstractmethod
- def save_auth_info(self, url: str, username: str, password: str) -> None:
- ...
-
-
-class KeyRingNullProvider(KeyRingBaseProvider):
- """Keyring null provider"""
-
- has_keyring = False
-
- def get_auth_info(self, url: str, username: Optional[str]) -> Optional[AuthInfo]:
- return None
-
- def save_auth_info(self, url: str, username: str, password: str) -> None:
- return None
-
-
-class KeyRingPythonProvider(KeyRingBaseProvider):
- """Keyring interface which uses locally imported `keyring`"""
-
- has_keyring = True
-
- def __init__(self) -> None:
- import keyring
-
- self.keyring = keyring
-
- def get_auth_info(self, url: str, username: Optional[str]) -> Optional[AuthInfo]:
- # Support keyring's get_credential interface which supports getting
- # credentials without a username. This is only available for
- # keyring>=15.2.0.
- if hasattr(self.keyring, "get_credential"):
- logger.debug("Getting credentials from keyring for %s", url)
- cred = self.keyring.get_credential(url, username)
- if cred is not None:
- return cred.username, cred.password
- return None
-
- if username is not None:
- logger.debug("Getting password from keyring for %s", url)
- password = self.keyring.get_password(url, username)
- if password:
- return username, password
- return None
-
- def save_auth_info(self, url: str, username: str, password: str) -> None:
- self.keyring.set_password(url, username, password)
-
-
-class KeyRingCliProvider(KeyRingBaseProvider):
- """Provider which uses `keyring` cli
-
- Instead of calling the keyring package installed alongside pip
- we call keyring on the command line which will enable pip to
- use which ever installation of keyring is available first in
- PATH.
- """
-
- has_keyring = True
-
- def __init__(self, cmd: str) -> None:
- self.keyring = cmd
-
- def get_auth_info(self, url: str, username: Optional[str]) -> Optional[AuthInfo]:
- # This is the default implementation of keyring.get_credential
- # https://github.com/jaraco/keyring/blob/97689324abcf01bd1793d49063e7ca01e03d7d07/keyring/backend.py#L134-L139
- if username is not None:
- password = self._get_password(url, username)
- if password is not None:
- return username, password
- return None
-
- def save_auth_info(self, url: str, username: str, password: str) -> None:
- return self._set_password(url, username, password)
-
- def _get_password(self, service_name: str, username: str) -> Optional[str]:
- """Mirror the implementation of keyring.get_password using cli"""
- if self.keyring is None:
- return None
-
- cmd = [self.keyring, "get", service_name, username]
- env = os.environ.copy()
- env["PYTHONIOENCODING"] = "utf-8"
- res = subprocess.run(
- cmd,
- stdin=subprocess.DEVNULL,
- stdout=subprocess.PIPE,
- env=env,
- )
- if res.returncode:
- return None
- return res.stdout.decode("utf-8").strip(os.linesep)
-
- def _set_password(self, service_name: str, username: str, password: str) -> None:
- """Mirror the implementation of keyring.set_password using cli"""
- if self.keyring is None:
- return None
- env = os.environ.copy()
- env["PYTHONIOENCODING"] = "utf-8"
- subprocess.run(
- [self.keyring, "set", service_name, username],
- input=f"{password}{os.linesep}".encode("utf-8"),
- env=env,
- check=True,
- )
- return None
-
-
-@lru_cache(maxsize=None)
-def get_keyring_provider(provider: str) -> KeyRingBaseProvider:
- logger.verbose("Keyring provider requested: %s", provider)
-
- # keyring has previously failed and been disabled
- if KEYRING_DISABLED:
- provider = "disabled"
- if provider in ["import", "auto"]:
- try:
- impl = KeyRingPythonProvider()
- logger.verbose("Keyring provider set: import")
- return impl
- except ImportError:
- pass
- except Exception as exc:
- # In the event of an unexpected exception
- # we should warn the user
- msg = "Installed copy of keyring fails with exception %s"
- if provider == "auto":
- msg = msg + ", trying to find a keyring executable as a fallback"
- logger.warning(msg, exc, exc_info=logger.isEnabledFor(logging.DEBUG))
- if provider in ["subprocess", "auto"]:
- cli = shutil.which("keyring")
- if cli and cli.startswith(sysconfig.get_path("scripts")):
- # all code within this function is stolen from shutil.which implementation
- @typing.no_type_check
- def PATH_as_shutil_which_determines_it() -> str:
- path = os.environ.get("PATH", None)
- if path is None:
- try:
- path = os.confstr("CS_PATH")
- except (AttributeError, ValueError):
- # os.confstr() or CS_PATH is not available
- path = os.defpath
- # bpo-35755: Don't use os.defpath if the PATH environment variable is
- # set to an empty string
-
- return path
-
- scripts = Path(sysconfig.get_path("scripts"))
-
- paths = []
- for path in PATH_as_shutil_which_determines_it().split(os.pathsep):
- p = Path(path)
- try:
- if not p.samefile(scripts):
- paths.append(path)
- except FileNotFoundError:
- pass
-
- path = os.pathsep.join(paths)
-
- cli = shutil.which("keyring", path=path)
-
- if cli:
- logger.verbose("Keyring provider set: subprocess with executable %s", cli)
- return KeyRingCliProvider(cli)
-
- logger.verbose("Keyring provider set: disabled")
- return KeyRingNullProvider()
-
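-# A minimal usage sketch for get_keyring_provider (assuming the behavior
-# above): provider="auto" prefers an importable `keyring` module, then a
-# `keyring` executable on PATH, and finally falls back to the null provider.
-#
-#     provider = get_keyring_provider("auto")
-#     auth = provider.get_auth_info("https://pypi.example.org/simple", None)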
-
-class MultiDomainBasicAuth(AuthBase):
- def __init__(
- self,
- prompting: bool = True,
- index_urls: Optional[List[str]] = None,
- keyring_provider: str = "auto",
- ) -> None:
- self.prompting = prompting
- self.index_urls = index_urls
- self.keyring_provider = keyring_provider # type: ignore[assignment]
- self.passwords: Dict[str, AuthInfo] = {}
- # When the user is prompted to enter credentials and keyring is
- # available, we will offer to save them. If the user accepts,
- # this value is set to the credentials they entered. After the
- # request authenticates, the caller should call
- # ``save_credentials`` to save these.
- self._credentials_to_save: Optional[Credentials] = None
-
- @property
- def keyring_provider(self) -> KeyRingBaseProvider:
- return get_keyring_provider(self._keyring_provider)
-
- @keyring_provider.setter
- def keyring_provider(self, provider: str) -> None:
-        # The free function get_keyring_provider has been decorated with
-        # functools.lru_cache. If an exception occurs in _get_keyring_auth,
-        # that cache will be cleared and keyring disabled; take that into
-        # account if you want to remove this indirection.
- self._keyring_provider = provider
-
- @property
- def use_keyring(self) -> bool:
-        # We won't use keyring when --no-input is passed unless a
-        # specific provider is requested, because keyring might require
-        # user interaction.
- return self.prompting or self._keyring_provider not in ["auto", "disabled"]
-
- def _get_keyring_auth(
- self,
- url: Optional[str],
- username: Optional[str],
- ) -> Optional[AuthInfo]:
- """Return the tuple auth for a given url from keyring."""
- # Do nothing if no url was provided
- if not url:
- return None
-
- try:
- return self.keyring_provider.get_auth_info(url, username)
- except Exception as exc:
- logger.warning(
- "Keyring is skipped due to an exception: %s",
- str(exc),
- )
- global KEYRING_DISABLED
- KEYRING_DISABLED = True
- get_keyring_provider.cache_clear()
- return None
-
- def _get_index_url(self, url: str) -> Optional[str]:
- """Return the original index URL matching the requested URL.
-
- Cached or dynamically generated credentials may work against
- the original index URL rather than just the netloc.
-
- The provided url should have had its username and password
- removed already. If the original index url had credentials then
- they will be included in the return value.
-
- Returns None if no matching index was found, or if --no-index
- was specified by the user.
- """
- if not url or not self.index_urls:
- return None
-
- url = remove_auth_from_url(url).rstrip("/") + "/"
- parsed_url = urllib.parse.urlsplit(url)
-
- candidates = []
-
- for index in self.index_urls:
- index = index.rstrip("/") + "/"
- parsed_index = urllib.parse.urlsplit(remove_auth_from_url(index))
- if parsed_url == parsed_index:
- return index
-
- if parsed_url.netloc != parsed_index.netloc:
- continue
-
- candidate = urllib.parse.urlsplit(index)
- candidates.append(candidate)
-
- if not candidates:
- return None
-
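-        # Illustrative example (not in the original code): with index_urls
-        # ["https://host/simple/", "https://host/sub/simple/"], a request
-        # for "https://host/sub/simple/pkg/" shares the longest path prefix
-        # with the second index, so the sort below ranks it first.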
- candidates.sort(
- reverse=True,
- key=lambda candidate: commonprefix(
- [
- parsed_url.path,
- candidate.path,
- ]
- ).rfind("/"),
- )
-
- return urllib.parse.urlunsplit(candidates[0])
-
- def _get_new_credentials(
- self,
- original_url: str,
- *,
- allow_netrc: bool = True,
- allow_keyring: bool = False,
- ) -> AuthInfo:
- """Find and return credentials for the specified URL."""
- # Split the credentials and netloc from the url.
- url, netloc, url_user_password = split_auth_netloc_from_url(
- original_url,
- )
-
- # Start with the credentials embedded in the url
- username, password = url_user_password
- if username is not None and password is not None:
- logger.debug("Found credentials in url for %s", netloc)
- return url_user_password
-
- # Find a matching index url for this request
- index_url = self._get_index_url(url)
- if index_url:
- # Split the credentials from the url.
- index_info = split_auth_netloc_from_url(index_url)
- if index_info:
- index_url, _, index_url_user_password = index_info
- logger.debug("Found index url %s", index_url)
-
- # If an index URL was found, try its embedded credentials
- if index_url and index_url_user_password[0] is not None:
- username, password = index_url_user_password
- if username is not None and password is not None:
- logger.debug("Found credentials in index url for %s", netloc)
- return index_url_user_password
-
- # Get creds from netrc if we still don't have them
- if allow_netrc:
- netrc_auth = get_netrc_auth(original_url)
- if netrc_auth:
- logger.debug("Found credentials in netrc for %s", netloc)
- return netrc_auth
-
- # If we don't have a password and keyring is available, use it.
- if allow_keyring:
- # The index url is more specific than the netloc, so try it first
- # fmt: off
- kr_auth = (
- self._get_keyring_auth(index_url, username) or
- self._get_keyring_auth(netloc, username)
- )
- # fmt: on
- if kr_auth:
- logger.debug("Found credentials in keyring for %s", netloc)
- return kr_auth
-
- return username, password
-
- def _get_url_and_credentials(
- self, original_url: str
- ) -> Tuple[str, Optional[str], Optional[str]]:
- """Return the credentials to use for the provided URL.
-
- If allowed, netrc and keyring may be used to obtain the
- correct credentials.
-
- Returns (url_without_credentials, username, password). Note
- that even if the original URL contains credentials, this
- function may return a different username and password.
- """
- url, netloc, _ = split_auth_netloc_from_url(original_url)
-
- # Try to get credentials from original url
- username, password = self._get_new_credentials(original_url)
-
- # If credentials not found, use any stored credentials for this netloc.
- # Do this if either the username or the password is missing.
- # This accounts for the situation in which the user has specified
- # the username in the index url, but the password comes from keyring.
- if (username is None or password is None) and netloc in self.passwords:
- un, pw = self.passwords[netloc]
- # It is possible that the cached credentials are for a different username,
- # in which case the cache should be ignored.
- if username is None or username == un:
- username, password = un, pw
-
- if username is not None or password is not None:
- # Convert the username and password if they're None, so that
- # this netloc will show up as "cached" in the conditional above.
- # Further, HTTPBasicAuth doesn't accept None, so it makes sense to
- # cache the value that is going to be used.
- username = username or ""
- password = password or ""
-
- # Store any acquired credentials.
- self.passwords[netloc] = (username, password)
-
- assert (
- # Credentials were found
- (username is not None and password is not None)
- # Credentials were not found
- or (username is None and password is None)
- ), f"Could not load credentials from url: {original_url}"
-
- return url, username, password
-
- def __call__(self, req: Request) -> Request:
- # Get credentials for this request
- url, username, password = self._get_url_and_credentials(req.url)
-
- # Set the url of the request to the url without any credentials
- req.url = url
-
- if username is not None and password is not None:
- # Send the basic auth with this request
- req = HTTPBasicAuth(username, password)(req)
-
- # Attach a hook to handle 401 responses
- req.register_hook("response", self.handle_401)
-
- return req
-
- # Factored out to allow for easy patching in tests
- def _prompt_for_password(
- self, netloc: str
- ) -> Tuple[Optional[str], Optional[str], bool]:
- username = ask_input(f"User for {netloc}: ") if self.prompting else None
- if not username:
- return None, None, False
- if self.use_keyring:
- auth = self._get_keyring_auth(netloc, username)
- if auth and auth[0] is not None and auth[1] is not None:
- return auth[0], auth[1], False
- password = ask_password("Password: ")
- return username, password, True
-
- # Factored out to allow for easy patching in tests
- def _should_save_password_to_keyring(self) -> bool:
- if (
- not self.prompting
- or not self.use_keyring
- or not self.keyring_provider.has_keyring
- ):
- return False
- return ask("Save credentials to keyring [y/N]: ", ["y", "n"]) == "y"
-
- def handle_401(self, resp: Response, **kwargs: Any) -> Response:
- # We only care about 401 responses, anything else we want to just
- # pass through the actual response
- if resp.status_code != 401:
- return resp
-
- username, password = None, None
-
- # Query the keyring for credentials:
- if self.use_keyring:
- username, password = self._get_new_credentials(
- resp.url,
- allow_netrc=False,
- allow_keyring=True,
- )
-
- # We are not able to prompt the user so simply return the response
- if not self.prompting and not username and not password:
- return resp
-
- parsed = urllib.parse.urlparse(resp.url)
-
- # Prompt the user for a new username and password
- save = False
- if not username and not password:
- username, password, save = self._prompt_for_password(parsed.netloc)
-
- # Store the new username and password to use for future requests
- self._credentials_to_save = None
- if username is not None and password is not None:
- self.passwords[parsed.netloc] = (username, password)
-
- # Prompt to save the password to keyring
- if save and self._should_save_password_to_keyring():
- self._credentials_to_save = Credentials(
- url=parsed.netloc,
- username=username,
- password=password,
- )
-
- # Consume content and release the original connection to allow our new
- # request to reuse the same one.
- resp.content
- resp.raw.release_conn()
-
- # Add our new username and password to the request
- req = HTTPBasicAuth(username or "", password or "")(resp.request)
- req.register_hook("response", self.warn_on_401)
-
- # On successful request, save the credentials that were used to
- # keyring. (Note that if the user responded "no" above, this member
- # is not set and nothing will be saved.)
- if self._credentials_to_save:
- req.register_hook("response", self.save_credentials)
-
- # Send our new request
- new_resp = resp.connection.send(req, **kwargs)
- new_resp.history.append(resp)
-
- return new_resp
-
- def warn_on_401(self, resp: Response, **kwargs: Any) -> None:
- """Response callback to warn about incorrect credentials."""
- if resp.status_code == 401:
- logger.warning(
- "401 Error, Credentials not correct for %s",
- resp.request.url,
- )
-
- def save_credentials(self, resp: Response, **kwargs: Any) -> None:
- """Response callback to save credentials on success."""
- assert (
- self.keyring_provider.has_keyring
- ), "should never reach here without keyring"
-
- creds = self._credentials_to_save
- self._credentials_to_save = None
- if creds and resp.status_code < 400:
- try:
- logger.info("Saving credentials to keyring")
- self.keyring_provider.save_auth_info(
- creds.url, creds.username, creds.password
- )
- except Exception:
- logger.exception("Failed to save credentials")
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/packages.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/packages.py
deleted file mode 100644
index 9582fa730f121634348a79c1a8b0cc2df99c616f..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/packages.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import sys
-
-# This code exists for backwards compatibility reasons.
-# I don't like it either. Just look the other way. :)
-
-for package in ('urllib3', 'idna', 'chardet'):
- vendored_package = "pip._vendor." + package
- locals()[package] = __import__(vendored_package)
- # This traversal is apparently necessary such that the identities are
- # preserved (requests.packages.urllib3.* is urllib3.*)
- for mod in list(sys.modules):
- if mod == vendored_package or mod.startswith(vendored_package + '.'):
- unprefixed_mod = mod[len("pip._vendor."):]
- sys.modules['pip._vendor.requests.packages.' + unprefixed_mod] = sys.modules[mod]
-
-# Kinda cool, though, right?
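-# Illustrative check (not part of this module): the loop above preserves
-# module identity through sys.modules, e.g.
-#
-#     import sys
-#     import pip._vendor.requests.packages  # executes the aliasing loop
-#     assert (sys.modules["pip._vendor.requests.packages.urllib3"]
-#             is sys.modules["pip._vendor.urllib3"])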
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mcan/net.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mcan/net.py
deleted file mode 100644
index 77d491bb5a656ce3e33debc9a2793f60b61f5fcd..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mcan/net.py
+++ /dev/null
@@ -1,131 +0,0 @@
-# --------------------------------------------------------
-# OpenVQA
-# Written by Yuhao Cui https://github.com/cuiyuhao1996
-# --------------------------------------------------------
-
-from openvqa.utils.make_mask import make_mask
-from openvqa.ops.fc import FC, MLP
-from openvqa.ops.layer_norm import LayerNorm
-from openvqa.models.mcan.mca import MCA_ED
-from openvqa.models.mcan.adapter import Adapter
-
-import torch.nn as nn
-import torch.nn.functional as F
-import torch
-
-
-# ------------------------------
-# ---- Flatten the sequence ----
-# ------------------------------
-
-class AttFlat(nn.Module):
- def __init__(self, __C):
- super(AttFlat, self).__init__()
- self.__C = __C
-
- self.mlp = MLP(
- in_size=__C.HIDDEN_SIZE,
- mid_size=__C.FLAT_MLP_SIZE,
- out_size=__C.FLAT_GLIMPSES,
- dropout_r=__C.DROPOUT_R,
- use_relu=True
- )
-
- self.linear_merge = nn.Linear(
- __C.HIDDEN_SIZE * __C.FLAT_GLIMPSES,
- __C.FLAT_OUT_SIZE
- )
-
- def forward(self, x, x_mask):
- att = self.mlp(x)
- att = att.masked_fill(
- x_mask.squeeze(1).squeeze(1).unsqueeze(2),
- -1e9
- )
- att = F.softmax(att, dim=1)
-
- att_list = []
- for i in range(self.__C.FLAT_GLIMPSES):
- att_list.append(
- torch.sum(att[:, :, i: i + 1] * x, dim=1)
- )
-
- x_atted = torch.cat(att_list, dim=1)
- x_atted = self.linear_merge(x_atted)
-
- return x_atted
-
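-# Shape walkthrough for AttFlat above (a sketch; B=batch, T=sequence length,
-# H=__C.HIDDEN_SIZE, G=__C.FLAT_GLIMPSES):
-#   x: [B, T, H] -> att = mlp(x): [B, T, G] -> softmax over T
-#   each glimpse i: sum(att[:, :, i:i+1] * x, dim=1) -> [B, H]
-#   concat of G glimpses: [B, G*H] -> linear_merge -> [B, FLAT_OUT_SIZE]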
-
-# -------------------------
-# ---- Main MCAN Model ----
-# -------------------------
-
-class Net(nn.Module):
- def __init__(self, __C, pretrained_emb, token_size, answer_size):
- super(Net, self).__init__()
- self.__C = __C
-
- self.embedding = nn.Embedding(
- num_embeddings=token_size,
- embedding_dim=__C.WORD_EMBED_SIZE
- )
-
- # Loading the GloVe embedding weights
- if __C.USE_GLOVE:
- self.embedding.weight.data.copy_(torch.from_numpy(pretrained_emb))
-
- self.lstm = nn.LSTM(
- input_size=__C.WORD_EMBED_SIZE,
- hidden_size=__C.HIDDEN_SIZE,
- num_layers=1,
- batch_first=True
- )
-
- self.adapter = Adapter(__C)
-
- self.backbone = MCA_ED(__C)
-
- # Flatten to vector
- self.attflat_img = AttFlat(__C)
- self.attflat_lang = AttFlat(__C)
-
- # Classification layers
- self.proj_norm = LayerNorm(__C.FLAT_OUT_SIZE)
- self.proj = nn.Linear(__C.FLAT_OUT_SIZE, answer_size)
-
-
- def forward(self, frcn_feat, grid_feat, bbox_feat, ques_ix):
-
- # Pre-process Language Feature
- lang_feat_mask = make_mask(ques_ix.unsqueeze(2))
- lang_feat = self.embedding(ques_ix)
- lang_feat, _ = self.lstm(lang_feat)
-
- img_feat, img_feat_mask = self.adapter(frcn_feat, grid_feat, bbox_feat)
-
- # Backbone Framework
- lang_feat, img_feat = self.backbone(
- lang_feat,
- img_feat,
- lang_feat_mask,
- img_feat_mask
- )
-
- # Flatten to vector
- lang_feat = self.attflat_lang(
- lang_feat,
- lang_feat_mask
- )
-
- img_feat = self.attflat_img(
- img_feat,
- img_feat_mask
- )
-
- # Classification layers
- proj_feat = lang_feat + img_feat
- proj_feat = self.proj_norm(proj_feat)
- proj_feat = self.proj(proj_feat)
-
- return proj_feat
-
diff --git a/spaces/CVPR/LIVE/pybind11/include/pybind11/eval.h b/spaces/CVPR/LIVE/pybind11/include/pybind11/eval.h
deleted file mode 100644
index ba82cf42ae3673a3de391eb55777ef413c43dc33..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/include/pybind11/eval.h
+++ /dev/null
@@ -1,132 +0,0 @@
-/*
-    pybind11/eval.h: Support for evaluating Python expressions and statements
- from strings and files
-
- Copyright (c) 2016 Klemens Morgenstern and
- Wenzel Jakob
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-*/
-
-#pragma once
-
-#include "pybind11.h"
-
-PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE)
-
-enum eval_mode {
- /// Evaluate a string containing an isolated expression
- eval_expr,
-
- /// Evaluate a string containing a single statement. Returns \c none
- eval_single_statement,
-
-    /// Evaluate a string containing a sequence of statements. Returns \c none
- eval_statements
-};
-
-template <eval_mode mode = eval_expr>
-object eval(str expr, object global = globals(), object local = object()) {
- if (!local)
- local = global;
-
- /* PyRun_String does not accept a PyObject / encoding specifier,
- this seems to be the only alternative */
- std::string buffer = "# -*- coding: utf-8 -*-\n" + (std::string) expr;
-
- int start;
- switch (mode) {
- case eval_expr: start = Py_eval_input; break;
- case eval_single_statement: start = Py_single_input; break;
- case eval_statements: start = Py_file_input; break;
- default: pybind11_fail("invalid evaluation mode");
- }
-
- PyObject *result = PyRun_String(buffer.c_str(), start, global.ptr(), local.ptr());
- if (!result)
- throw error_already_set();
-    return reinterpret_steal<object>(result);
-}
-
-template <eval_mode mode = eval_expr, size_t N>
-object eval(const char (&s)[N], object global = globals(), object local = object()) {
- /* Support raw string literals by removing common leading whitespace */
- auto expr = (s[0] == '\n') ? str(module::import("textwrap").attr("dedent")(s))
- : str(s);
-    return eval<mode>(expr, global, local);
-}
-
-inline void exec(str expr, object global = globals(), object local = object()) {
-    eval<eval_statements>(expr, global, local);
-}
-
-template <size_t N>
-void exec(const char (&s)[N], object global = globals(), object local = object()) {
-    eval<eval_statements>(s, global, local);
-}
-
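-// Usage sketch (illustrative, not part of this header; assumes
-// `namespace py = pybind11;` and a running interpreter):
-//
-//     py::object three = py::eval("1 + 2");                  // eval_expr (default)
-//     py::exec("x = 40\nx += 2");                            // eval_statements
-//     py::object n = py::eval<py::eval_statements>("y = 3"); // returns none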
-#if defined(PYPY_VERSION) && PY_VERSION_HEX >= 0x3000000
-template <eval_mode mode = eval_statements>
-object eval_file(str, object, object) {
-    pybind11_fail("eval_file not supported in PyPy3. Use eval");
-}
-template <eval_mode mode = eval_statements>
-object eval_file(str, object) {
-    pybind11_fail("eval_file not supported in PyPy3. Use eval");
-}
-template <eval_mode mode = eval_statements>
-object eval_file(str) {
-    pybind11_fail("eval_file not supported in PyPy3. Use eval");
-}
-#else
-template <eval_mode mode = eval_statements>
-object eval_file(str fname, object global = globals(), object local = object()) {
- if (!local)
- local = global;
-
- int start;
- switch (mode) {
- case eval_expr: start = Py_eval_input; break;
- case eval_single_statement: start = Py_single_input; break;
- case eval_statements: start = Py_file_input; break;
- default: pybind11_fail("invalid evaluation mode");
- }
-
- int closeFile = 1;
- std::string fname_str = (std::string) fname;
-#if PY_VERSION_HEX >= 0x03040000
- FILE *f = _Py_fopen_obj(fname.ptr(), "r");
-#elif PY_VERSION_HEX >= 0x03000000
- FILE *f = _Py_fopen(fname.ptr(), "r");
-#else
- /* No unicode support in open() :( */
-    auto fobj = reinterpret_steal<object>(PyFile_FromString(
-                    const_cast<char *>(fname_str.c_str()),
-                    const_cast<char *>("r")));
- FILE *f = nullptr;
- if (fobj)
- f = PyFile_AsFile(fobj.ptr());
- closeFile = 0;
-#endif
- if (!f) {
- PyErr_Clear();
- pybind11_fail("File \"" + fname_str + "\" could not be opened!");
- }
-
-#if PY_VERSION_HEX < 0x03000000 && defined(PYPY_VERSION)
- PyObject *result = PyRun_File(f, fname_str.c_str(), start, global.ptr(),
- local.ptr());
- (void) closeFile;
-#else
- PyObject *result = PyRun_FileEx(f, fname_str.c_str(), start, global.ptr(),
- local.ptr(), closeFile);
-#endif
-
- if (!result)
- throw error_already_set();
- return reinterpret_steal(result);
-}
-#endif
-
-PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE)
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/replace.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/replace.h
deleted file mode 100644
index 95c5a14ba3df120019c9a5b6ed638db3f2555a5b..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/replace.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system inherits this algorithm
-#include <thrust/system/detail/generic/replace.h>
-
diff --git a/spaces/CVPR/WALT/cwalt/CWALT.py b/spaces/CVPR/WALT/cwalt/CWALT.py
deleted file mode 100644
index 894578c1c75766cf27999dbb1fe64a4c4dcf4efb..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/cwalt/CWALT.py
+++ /dev/null
@@ -1,161 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-"""
-Created on Tue Oct 19 19:14:47 2021
-
-@author: dinesh
-"""
-import glob
-from .utils import bb_intersection_over_union_unoccluded
-import numpy as np
-from PIL import Image
-import datetime
-import cv2
-import os
-from tqdm import tqdm
-
-
-def get_image(time, folder):
-    image = None  # remains None if no week folder contains this timestamp
-    for week_loop in range(5):
- try:
- image = np.array(Image.open(folder+'/week' +str(week_loop)+'/'+ str(time).replace(' ','T').replace(':','-').split('+')[0] + '.jpg'))
- break
- except:
- continue
- if image is None:
- print('file not found')
- return image
-
-def get_mask(segm, image):
- poly = np.array(segm).reshape((int(len(segm)/2), 2))
- mask = image.copy()*0
- cv2.fillConvexPoly(mask, poly, (255, 255, 255))
- return mask
-
-def get_unoccluded(indices, tracks_all):
- unoccluded_indexes = []
- unoccluded_index_all =[]
- while 1:
- unoccluded_clusters = []
- len_unocc = len(unoccluded_indexes)
- for ind in indices:
- if ind in unoccluded_indexes:
- continue
- occ = False
- for ind_compare in indices:
- if ind_compare in unoccluded_indexes:
- continue
- if bb_intersection_over_union_unoccluded(tracks_all[ind], tracks_all[ind_compare]) > 0.01 and ind_compare != ind:
- occ = True
- if occ==False:
- unoccluded_indexes.extend([ind])
- unoccluded_clusters.extend([ind])
- if len(unoccluded_indexes) == len_unocc and len_unocc != 0:
- for ind in indices:
- if ind not in unoccluded_indexes:
- unoccluded_indexes.extend([ind])
- unoccluded_clusters.extend([ind])
-
- unoccluded_index_all.append(unoccluded_clusters)
- if len(unoccluded_indexes) > len(indices)-5:
- break
- return unoccluded_index_all
-
-def primes(n): # simple sieve of multiples
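-    # Illustrative: primes(10) -> [2, 3, 5, 7]; each odd q sieves out its
-    # odd multiples q*q, q*q + 2*q, ... (even multiples need no sieving).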
- odds = range(3, n+1, 2)
- sieve = set(sum([list(range(q*q, n+1, q+q)) for q in odds], []))
- return [2] + [p for p in odds if p not in sieve]
-
-def save_image(image_read, save_path, data, path):
- tracks = data['tracks_all_unoccluded']
- segmentations = data['segmentation_all_unoccluded']
- timestamps = data['timestamps_final_unoccluded']
-
- image = image_read.copy()
- indices = np.random.randint(len(tracks),size=30)
- prime_numbers = primes(1000)
- unoccluded_index_all = get_unoccluded(indices, tracks)
-
- mask_stacked = image*0
- mask_stacked_all =[]
- count = 0
- time = datetime.datetime.now()
-
- for l in indices:
- try:
- image_crop = get_image(timestamps[l], path)
- except:
- continue
- try:
- bb_left, bb_top, bb_width, bb_height, confidence = tracks[l]
- except:
- bb_left, bb_top, bb_width, bb_height, confidence, track_id = tracks[l]
- mask = get_mask(segmentations[l], image)
-
- image[mask > 0] = image_crop[mask > 0]
- mask[mask > 0] = 1
- for count, mask_inc in enumerate(mask_stacked_all):
- mask_stacked_all[count][cv2.bitwise_and(mask, mask_inc) > 0] = 2
- mask_stacked_all.append(mask)
- mask_stacked += mask
- count = count+1
-
- cv2.imwrite(save_path + '/images/'+str(time).replace(' ','T').replace(':','-').split('+')[0] + '.jpg', image[:, :, ::-1])
- cv2.imwrite(save_path + '/Segmentation/'+str(time).replace(' ','T').replace(':','-').split('+')[0] + '.jpg', mask_stacked[:, :, ::-1]*30)
- np.savez_compressed(save_path+'/Segmentation/'+str(time).replace(' ','T').replace(':','-').split('+')[0], mask=mask_stacked_all)
-
-def CWALT_Generation(camera_name):
- save_path_train = 'data/cwalt_train'
- save_path_test = 'data/cwalt_test'
-
-    json_file_path = 'data/{}/{}.json'.format(camera_name, camera_name)
- path = 'data/' + camera_name
-
- data = np.load(json_file_path + '.npz', allow_pickle=True)
-
-    ## split data into train and test sets
-
- data_train=dict()
- data_test=dict()
-
- split_index = int(len(data['timestamps_final_unoccluded'])*0.8)
-
- data_train['tracks_all_unoccluded'] = data['tracks_all_unoccluded'][0:split_index]
- data_train['segmentation_all_unoccluded'] = data['segmentation_all_unoccluded'][0:split_index]
- data_train['timestamps_final_unoccluded'] = data['timestamps_final_unoccluded'][0:split_index]
-
- data_test['tracks_all_unoccluded'] = data['tracks_all_unoccluded'][split_index:]
- data_test['segmentation_all_unoccluded'] = data['segmentation_all_unoccluded'][split_index:]
- data_test['timestamps_final_unoccluded'] = data['timestamps_final_unoccluded'][split_index:]
-
- image_read = np.array(Image.open(path + '/T18-median_image.jpg'))
- image_read = cv2.resize(image_read, (int(image_read.shape[1]/2), int(image_read.shape[0]/2)))
-
- try:
- os.mkdir(save_path_train)
- except:
- print(save_path_train)
-
- try:
- os.mkdir(save_path_train + '/images')
- os.mkdir(save_path_train + '/Segmentation')
- except:
- print(save_path_train+ '/images')
-
- try:
- os.mkdir(save_path_test)
- except:
- print(save_path_test)
-
- try:
- os.mkdir(save_path_test + '/images')
- os.mkdir(save_path_test + '/Segmentation')
- except:
- print(save_path_test+ '/images')
-
- for loop in tqdm(range(3000), desc="Generating training CWALT Images "):
- save_image(image_read, save_path_train, data_train, path)
-
- for loop in tqdm(range(300), desc="Generating testing CWALT Images "):
- save_image(image_read, save_path_test, data_test, path)
-
diff --git a/spaces/CVPR/lama-example/bin/train.py b/spaces/CVPR/lama-example/bin/train.py
deleted file mode 100644
index be9ca8c6ef2a0cb9143ab6a0f4d91f571b691a95..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/bin/train.py
+++ /dev/null
@@ -1,72 +0,0 @@
-#!/usr/bin/env python3
-
-import logging
-import os
-import sys
-import traceback
-
-os.environ['OMP_NUM_THREADS'] = '1'
-os.environ['OPENBLAS_NUM_THREADS'] = '1'
-os.environ['MKL_NUM_THREADS'] = '1'
-os.environ['VECLIB_MAXIMUM_THREADS'] = '1'
-os.environ['NUMEXPR_NUM_THREADS'] = '1'
-
-import hydra
-from omegaconf import OmegaConf
-from pytorch_lightning import Trainer
-from pytorch_lightning.callbacks import ModelCheckpoint
-from pytorch_lightning.loggers import TensorBoardLogger
-from pytorch_lightning.plugins import DDPPlugin
-
-from saicinpainting.training.trainers import make_training_model
-from saicinpainting.utils import register_debug_signal_handlers, handle_ddp_subprocess, handle_ddp_parent_process, \
- handle_deterministic_config
-
-LOGGER = logging.getLogger(__name__)
-
-
-@handle_ddp_subprocess()
-@hydra.main(config_path='../configs/training', config_name='tiny_test.yaml')
-def main(config: OmegaConf):
- try:
- need_set_deterministic = handle_deterministic_config(config)
-
- register_debug_signal_handlers() # kill -10 will result in traceback dumped into log
-
- is_in_ddp_subprocess = handle_ddp_parent_process()
-
- config.visualizer.outdir = os.path.join(os.getcwd(), config.visualizer.outdir)
- if not is_in_ddp_subprocess:
- LOGGER.info(OmegaConf.to_yaml(config))
- OmegaConf.save(config, os.path.join(os.getcwd(), 'config.yaml'))
-
- checkpoints_dir = os.path.join(os.getcwd(), 'models')
- os.makedirs(checkpoints_dir, exist_ok=True)
-
- # there is no need to suppress this logger in ddp, because it handles rank on its own
- metrics_logger = TensorBoardLogger(config.location.tb_dir, name=os.path.basename(os.getcwd()))
- metrics_logger.log_hyperparams(config)
-
- training_model = make_training_model(config)
-
- trainer_kwargs = OmegaConf.to_container(config.trainer.kwargs, resolve=True)
- if need_set_deterministic:
- trainer_kwargs['deterministic'] = True
-
- trainer = Trainer(
- # there is no need to suppress checkpointing in ddp, because it handles rank on its own
- callbacks=ModelCheckpoint(dirpath=checkpoints_dir, **config.trainer.checkpoint_kwargs),
- logger=metrics_logger,
- default_root_dir=os.getcwd(),
- **trainer_kwargs
- )
- trainer.fit(training_model)
- except KeyboardInterrupt:
- LOGGER.warning('Interrupted by user')
- except Exception as ex:
- LOGGER.critical(f'Training failed due to {ex}:\n{traceback.format_exc()}')
- sys.exit(1)
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/nms_rotated/nms_rotated.h b/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/nms_rotated/nms_rotated.h
deleted file mode 100644
index bd855e832afea4354885f5d8bfe94e204f51827e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/nms_rotated/nms_rotated.h
+++ /dev/null
@@ -1,39 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates.
-#pragma once
-#include <torch/types.h>
-
-namespace detectron2 {
-
-at::Tensor nms_rotated_cpu(
- const at::Tensor& dets,
- const at::Tensor& scores,
- const double iou_threshold);
-
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-at::Tensor nms_rotated_cuda(
- const at::Tensor& dets,
- const at::Tensor& scores,
- const double iou_threshold);
-#endif
-
-// Interface for Python
-// inline is needed to prevent multiple function definitions when this header is
-// included by different cpps
-inline at::Tensor nms_rotated(
- const at::Tensor& dets,
- const at::Tensor& scores,
- const double iou_threshold) {
- assert(dets.device().is_cuda() == scores.device().is_cuda());
- if (dets.device().is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
- return nms_rotated_cuda(
- dets.contiguous(), scores.contiguous(), iou_threshold);
-#else
- AT_ERROR("Not compiled with GPU support");
-#endif
- }
-
- return nms_rotated_cpu(dets.contiguous(), scores.contiguous(), iou_threshold);
-}
-
-} // namespace detectron2
diff --git a/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ.py b/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ.py
deleted file mode 100644
index b867cc865e5ac4d7b70221da141894efd7cbd75c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from .mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ import (
- dataloader,
- lr_multiplier,
- model,
- optimizer,
- train,
-)
-
-train.max_iter *= 2 # 100ep -> 200ep
-
-lr_multiplier.scheduler.milestones = [
- milestone * 2 for milestone in lr_multiplier.scheduler.milestones
-]
-lr_multiplier.scheduler.num_updates = train.max_iter
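-
-# For instance (illustrative numbers, not from the config): if the 100ep
-# schedule ran N iterations with milestones [m1, m2], this 200ep schedule
-# runs 2*N iterations with milestones [2*m1, 2*m2], so the LR drops land
-# at the same epochs.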
diff --git a/spaces/ChevyWithAI/rvc-aicover/app.py b/spaces/ChevyWithAI/rvc-aicover/app.py
deleted file mode 100644
index d1d4fb32cf4b9622530b9fdba4af2ffea3a48c79..0000000000000000000000000000000000000000
--- a/spaces/ChevyWithAI/rvc-aicover/app.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import os
-import json
-import argparse
-import traceback
-import logging
-import gradio as gr
-import numpy as np
-import librosa
-import torch
-import asyncio
-import edge_tts
-from datetime import datetime
-from fairseq import checkpoint_utils
-from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono
-from vc_infer_pipeline import VC
-from config import (
- is_half,
- device
-)
-logging.getLogger("numba").setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces
-
-def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy):
- def vc_fn(
- input_audio,
- f0_up_key,
- f0_method,
- index_rate,
- tts_mode,
- tts_text,
- tts_voice
- ):
- try:
- if tts_mode:
- if len(tts_text) > 100 and limitation:
- return "Text is too long", None
- if tts_text is None or tts_voice is None:
- return "You need to enter text and select a voice", None
- asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3"))
- audio, sr = librosa.load("tts.mp3", sr=16000, mono=True)
- else:
- if args.files:
- audio, sr = librosa.load(input_audio, sr=16000, mono=True)
- else:
- if input_audio is None:
- return "You need to upload an audio", None
- sampling_rate, audio = input_audio
- duration = audio.shape[0] / sampling_rate
- if duration > 20 and limitation:
- return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- times = [0, 0, 0]
- f0_up_key = int(f0_up_key)
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- 0,
- audio,
- times,
- f0_up_key,
- f0_method,
- file_index,
- file_big_npy,
- index_rate,
- if_f0,
- )
- print(
- f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s"
- )
- return "Success", (tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- print(info)
- return info, (None, None)
- return vc_fn
-
-def load_hubert():
- global hubert_model
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(device)
- if is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-def change_to_tts_mode(tts_mode):
- if tts_mode:
- return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True)
- else:
- return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False)
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--api', action="store_true", default=False)
- parser.add_argument("--share", action="store_true", default=False, help="share gradio app")
- parser.add_argument("--files", action="store_true", default=False, help="load audio from path")
- args, unknown = parser.parse_known_args()
- load_hubert()
- models = []
- tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices())
- voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list]
- with open("weights/model_info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
- for name, info in models_info.items():
- if not info['enable']:
- continue
- title = info['title']
- author = info.get("author", None)
- cover = f"weights/{name}/{info['cover']}"
- index = f"weights/{name}/{info['feature_retrieval_library']}"
- npy = f"weights/{name}/{info['feature_file']}"
- cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- del net_g.enc_q
-        print(net_g.load_state_dict(cpt["weight"], strict=False))  # without this line the weights don't load cleanly, oddly enough
- net_g.eval().to(device)
- if is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, device, is_half)
- models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy)))
- with gr.Blocks() as app:
- gr.Markdown(
- "# RVC Models\n"
- "## The input audio should be clean and pure voice without background music.\n"
- "\n\n"
- "[](https://colab.research.google.com/drive/12rbZk9CoXD1m84dqBW5IKMBjiVY6tcoj?usp=share_link)\n\n"
- "[](https://huggingface.co/spaces/ardha27pi/rvc-models?duplicate=true)\n\n"
- "[](https://github.com/ardha27/AI-Song-Cover-RVC)\n\n"
- "[](https://ko-fi.com/R6R7AH1FA)\n\n"
- )
- with gr.Tabs():
- for (name, title, author, cover, vc_fn) in models:
- with gr.TabItem(name):
- with gr.Row():
- gr.Markdown(
- ''
-                                '<div align="center">'
-                                f'<div style="font-size: 30px">{title}</div>\n'+
-                                (f'<div style="font-size: 20px">Model author: {author}</div>' if author else "")+
-                                (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")+
-                                '</div>'
- with gr.Row():
- with gr.Column():
- if args.files:
- vc_input = gr.Textbox(label="Input audio path")
- else:
- vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '')
- vc_transpose = gr.Number(label="Transpose", value=0)
- vc_f0method = gr.Radio(
- label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies",
- choices=["pm", "harvest"],
- value="pm",
- interactive=True,
- )
- vc_index_ratio = gr.Slider(
- minimum=0,
- maximum=1,
- label="Retrieval feature ratio",
- value=0.6,
- interactive=True,
- )
- tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False)
- tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text")
- tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female")
- vc_submit = gr.Button("Generate", variant="primary")
- with gr.Column():
- vc_output1 = gr.Textbox(label="Output Message")
- vc_output2 = gr.Audio(label="Output Audio")
- vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2])
- tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice])
- app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share)
\ No newline at end of file
diff --git a/spaces/Chomkwoy/Nilkessye/ocr_utils.py b/spaces/Chomkwoy/Nilkessye/ocr_utils.py
deleted file mode 100644
index d198d47b069fd42b78ed7c34f8a8364958bc33cc..0000000000000000000000000000000000000000
--- a/spaces/Chomkwoy/Nilkessye/ocr_utils.py
+++ /dev/null
@@ -1,488 +0,0 @@
-import copy
-import itertools
-
-import cv2
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-from scipy.signal import find_peaks
-from scipy.sparse.csgraph import floyd_warshall
-from scipy.spatial import distance
-from tqdm.auto import tqdm
-
-from utils.keypoint import _decode
-
-
-def get_pred_detections(output, sw, sh, threshold=0.4, ae_threshold=1.0, max_objs=9 * 16 * 4 * 2):
- detections, centers, seq_pred = _decode(
- *output[-1], ae_threshold=ae_threshold, K=max_objs, kernel=3, num_dets=100000)
-
- detections = detections.reshape(detections.shape[0], -1, 8).detach().cpu().numpy()
- detections = detections.reshape(-1, 8)
- detections = detections[detections[:, 4] > 0]
-
- centers = centers.reshape(centers.shape[0], -1, 4).detach().cpu().numpy()
- centers = centers.reshape(-1, 4)
-
- seq_pred = seq_pred[0].detach().cpu().numpy()
-
- # find matching rect for each center point
- # detections: [num_rects, 8 (tlx, tly, brx, bry, score, tlscore, brscore, cls)]
- # centers: [num_centers, 4 (x, y, cls, score)]
- detection_centers = np.stack([
- (detections[:, 0] + detections[:, 2]) / 2,
- (detections[:, 1] + detections[:, 3]) / 2
- ], axis=1)
- ratios = (detections[:, 3] - detections[:, 1]) / (detections[:, 2] - detections[:, 0])
-
- dist = distance.cdist(centers[:, :2], detection_centers) # [num_centers, num_rects]
- tlx, brx = detections[:, 0][None, :], detections[:, 2][None, :]
- tly, bry = detections[:, 1][None, :], detections[:, 3][None, :]
- inside = (
- ((tlx * 0.7 + brx * 0.3) < centers[:, 0][:, None]) & (centers[:, 0][:, None] < (tlx * 0.3 + brx * 0.7)) &
- ((tly * 0.7 + bry * 0.3) < centers[:, 1][:, None]) & (centers[:, 1][:, None] < (tly * 0.3 + bry * 0.7))
- )
-
- scores = (
- -dist * .5 # penalize far center point
- + detections[None, :, 4] * 10 # original detection score
- + inside * 100 # enforce center point inside the bounding box
- + (1 - (ratios > 2.0)) * 100 # dont select too tall boxes
- + (1 - (ratios < 0.2)) * 100 # dont select too wide boxes
- - (brx - tlx) * (bry - tly) * 0.02 # prefer smaller boxes
- )
- rect_idxs = np.argsort(scores, axis=1)[:, ::-1]
-
- tiles = []
- for (x, y, cs, score), idxs, seq in zip(centers, rect_idxs, seq_pred):
- for i in idxs[0:1]:
- tlx, tly, brx, bry = detections[i, :4]
- rx, ry = (x - tlx) / (brx - tlx), (y - tly) / (bry - tly)
- if score > threshold and 0.3 < rx < 0.7 and 0.3 < ry < 0.7:
- bbox = (
- (int(tlx * sw), int(tly * sh)),
- (int(brx * sw), int(bry * sh))
- )
- cx, cy = int(x * sw), int(y * sh)
- tiles.append((bbox, (cx, cy), seq, cs, score))
-
- tiles = sorted(tiles, key=lambda tile: tile[4], reverse=True)
-
- filtered_tiles = []
- for bbox, (cx, cy), seq, cs, score in tiles:
- max_iou = max((bb_intersection_over_union(bbox, bbox2) for bbox2, _, _, _ in filtered_tiles), default=0)
- if max_iou < 0.90:
- filtered_tiles.append((bbox, (cx, cy), seq, cs))
-
- tiles = filtered_tiles
-
- tiles = sorted(tiles, key=lambda tile: tile[2])
-
- return tiles
-
-
-def sigmoid(z):
- return 1.0 / (1.0 + np.exp(-z))
-
-
-def get_center(bbox):
- (tlx, tly), (brx, bry) = bbox
- return (tlx + brx) / 2, (tly + bry) / 2
-
-
-def bb_intersection_over_union(boxA, boxB):
- # determine the (x, y)-coordinates of the intersection rectangle
- xA = max(boxA[0][0], boxB[0][0])
- yA = max(boxA[0][1], boxB[0][1])
- xB = min(boxA[1][0], boxB[1][0])
- yB = min(boxA[1][1], boxB[1][1])
- # compute the area of intersection rectangle
- interArea = max(0, xB - xA + 1) * max(0, yB - yA + 1)
- # compute the area of both the prediction and ground-truth
- # rectangles
- boxAArea = (boxA[1][0] - boxA[0][0] + 1) * (boxA[1][1] - boxA[0][1] + 1)
- boxBArea = (boxB[1][0] - boxB[0][0] + 1) * (boxB[1][1] - boxB[0][1] + 1)
- # compute the intersection over union by taking the intersection
- # area and dividing it by the sum of prediction + ground-truth
- # areas - the interesection area
- iou = interArea / float(boxAArea + boxBArea - interArea)
- # return the intersection over union value
- return iou
-
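-# Worked example for bb_intersection_over_union above (illustrative):
-# boxA = ((0, 0), (10, 10)) and boxB = ((5, 5), (15, 15)) intersect over an
-# inclusive 6 x 6 = 36 region; each area is 11 * 11 = 121, so
-# IoU = 36 / (121 + 121 - 36) ~= 0.175.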
-
-def batched(iterable, n):
- """Batch data into lists of length n. The last batch may be shorter."""
- # batched('ABCDEFG', 3) --> ABC DEF G
- it = iter(iterable)
- while True:
- batch = list(itertools.islice(it, n))
- if not batch:
- return
- yield batch
-
-
-def find_line_angle(
- cur_centers,
- cur_bboxes,
- k=5,
- n_bins=365, # per 180 degrees
- verbose=False
-):
- N = len(cur_centers)
-
- if N == 0:
- return None
-
- bbox_heights = np.array([bry - tly for (tlx, tly), (brx, bry) in cur_bboxes])
-
- corners = np.stack([
- cur_bboxes[:, 0, :], # tl
- np.stack([cur_bboxes[:, 0, 0], cur_bboxes[:, 1, 1]], axis=-1), # bl
- np.stack([cur_bboxes[:, 1, 0], cur_bboxes[:, 0, 1]], axis=-1), # tr
- cur_bboxes[:, 1, :], # br
- ], axis=1)
-
- dist_matrix = distance.cdist(corners.reshape(-1, 2), corners.reshape(-1, 2))
- dist_matrix = dist_matrix.reshape((N, 4, N, 4)).transpose(0, 2, 1, 3) # [N, N, 4, 4]
- dist_matrix = dist_matrix.min(axis=(2, 3))
- np.fill_diagonal(dist_matrix, 1e9)
- k_nearest_neighbors_indices = np.argsort(dist_matrix, axis=1)[:, :k]
-
- # Find line angle
- k_nearest_neighbors = cur_centers[k_nearest_neighbors_indices]
-
- diff = (k_nearest_neighbors - cur_centers[:, None, :])
- angles = np.fmod(np.arctan2(diff[..., 1], diff[..., 0]) + np.pi * 2, np.pi)
-
- angle_histogram, bin_edges = np.histogram(angles.flatten(), bins=n_bins)
- angle_histogram = angle_histogram.astype(float)
-
- # Avoid finding horizontal lines
- angle_histogram[0:n_bins // 4] *= 0.5
- angle_histogram[-n_bins // 4:] *= 0.5
-
- # Wrap angle
- angle_histogram = np.concatenate([angle_histogram, angle_histogram])
-
- # smoothing filter
- window_size = n_bins // 16
- box = np.ones(window_size) / window_size
- angle_histogram = np.convolve(angle_histogram, box, mode='same')
-
- # find biggest peak
- peaks, properties = find_peaks(angle_histogram, prominence=0.5, width=4)
-
- if verbose:
- plt.plot(angle_histogram)
- plt.plot(peaks, angle_histogram[peaks], "x")
- plt.vlines(x=peaks, ymin=angle_histogram[peaks] - properties["prominences"],
- ymax=angle_histogram[peaks], color="C1")
- plt.hlines(y=properties["width_heights"], xmin=properties["left_ips"],
- xmax=properties["right_ips"], color="C1")
- plt.show()
-
- if len(peaks) == 0:
- return None
-
- peak_bin = [peak_pos for _, peak_pos in sorted(zip(properties["prominences"], peaks))][-1]
- line_angle = np.fmod(peak_bin * np.pi / n_bins, np.pi)
-
- return line_angle
-
-
-def find_lines(
- cur_centers,
- cur_bboxes,
- line_angle,
- center_dist_threshold=2.,
- corner_dist_threshold=0.5,
- k=7,
- angle_delta=30 * (np.pi / 180),
-):
- N = len(cur_centers)
-
- if N == 0:
- return [], np.zeros((0, k))
-
- bbox_heights = np.array([bry - tly for (tlx, tly), (brx, bry) in cur_bboxes])
- mean_bbox_height = bbox_heights.mean()
-
- corners = np.stack([
- cur_bboxes[:, 0, :], # tl
- np.stack([cur_bboxes[:, 0, 0], cur_bboxes[:, 1, 1]], axis=-1), # bl
- np.stack([cur_bboxes[:, 1, 0], cur_bboxes[:, 0, 1]], axis=-1), # tr
- cur_bboxes[:, 1, :], # br
- ], axis=1)
-
- corner_dist_matrix = distance.cdist(corners.reshape((-1, 2)), corners.reshape((-1, 2)))
- corner_dist_matrix = corner_dist_matrix.reshape((N, 4, N, 4)).transpose(0, 2, 1, 3)
- corner_dist_matrix = corner_dist_matrix.min(axis=(2, 3))
- np.fill_diagonal(corner_dist_matrix, 1e9)
-
- dist_matrix = distance.cdist(cur_centers, cur_centers)
- np.fill_diagonal(dist_matrix, 1e9)
- k_nearest_neighbors_indices = np.argsort(dist_matrix, axis=1)[:, :k]
- k_nearest_neighbors = cur_centers[k_nearest_neighbors_indices]
-
- k_nearest_neighbors_dists = dist_matrix[np.arange(N)[:, None], k_nearest_neighbors_indices]
- k_nearest_neighbors_corner_dists = corner_dist_matrix[np.arange(N)[:, None], k_nearest_neighbors_indices]
-
- diff = (k_nearest_neighbors - cur_centers[:, None, :])
- angles = np.fmod(np.arctan2(diff[..., 1], diff[..., 0]) + np.pi * 2, np.pi)
-
- # Make inline & between-line neighbor graphs
- line_range = (line_angle - angle_delta, line_angle + angle_delta)
- is_inline = (
- ((line_range[0] < angles) & (angles < line_range[1])) |
- ((line_range[0] - np.pi < angles) & (angles < line_range[1] - np.pi)) |
- ((line_range[0] + np.pi < angles) & (angles < line_range[1] + np.pi))
- )
-
- inline_neighbors_indices = k_nearest_neighbors_indices.copy()
- inline_neighbors_indices[~is_inline] = -1
- inline_neighbors_indices[k_nearest_neighbors_dists > mean_bbox_height * center_dist_threshold] = -1
- inline_neighbors_indices[k_nearest_neighbors_corner_dists > mean_bbox_height * corner_dist_threshold] = -1
-
- def transitive_closure(neighbor_indices):
- reachable = np.zeros((N, N))
- reachable[:, :] = 1e9
- for i in range(N):
- for j in neighbor_indices[i]:
- if j != -1:
- reachable[i, j] = reachable[j, i] = 1
- reachable = floyd_warshall(reachable, directed=False)
- reachable = reachable < 1e9
-
- groups = []
-
- visited = np.zeros((N,))
- for i in range(N):
- if visited[i]:
- continue
- group = np.nonzero(reachable[i])[0]
- visited[group] = 1
- groups.append(group)
-
- return groups
-
- lines = transitive_closure(inline_neighbors_indices)
-
- return lines, inline_neighbors_indices
-
-
-def detect_lines(tiles):
- main_tiles = [(bbox, center, seq, cls) for bbox, center, seq, cls in tiles if cls in [0, 1]]
- anno_tiles = [(bbox, center, seq, cls) for bbox, center, seq, cls in tiles if cls in [2, 3]]
-
- main_centers = np.array([center for bbox, center, seq, cls in tiles if cls in [0, 1]]).reshape(-1, 2)
- anno_centers = np.array([center for bbox, center, seq, cls in tiles if cls in [2, 3]]).reshape(-1, 2)
-
- main_bboxes = np.array([bbox for bbox, center, seq, cls in tiles if cls in [0, 1]]).reshape(-1, 2, 2)
- anno_bboxes = np.array([bbox for bbox, center, seq, cls in tiles if cls in [2, 3]]).reshape(-1, 2, 2)
-
- # Find line angle
- main_line_angle = find_line_angle(main_centers, main_bboxes)
- anno_line_angle = find_line_angle(anno_centers, anno_bboxes)
-
- line_angles = []
- if main_line_angle is not None:
- line_angles.append((main_line_angle, len(main_centers)))
- if anno_line_angle is not None:
- # wrap angle
- if main_line_angle is not None:
- anno_line_angles = np.array([anno_line_angle, anno_line_angle - np.pi, anno_line_angle + np.pi])
- anno_line_angle = anno_line_angles[np.abs(anno_line_angles - main_line_angle).argmin()]
- line_angles.append((anno_line_angle, len(anno_centers)))
-
- denominator = sum(n for _, n in line_angles)
- line_angle = sum(angle * (n / denominator) for angle, n in line_angles)
- line_angle = np.fmod(line_angle + np.pi * 2, np.pi)
-
- main_lines, main_inline_neighbors_indices = find_lines(
- main_centers, main_bboxes, line_angle,
- center_dist_threshold=2,
- corner_dist_threshold=0.2,
- )
- anno_lines, anno_inline_neighbors_indices = find_lines(
- anno_centers, anno_bboxes, line_angle,
- center_dist_threshold=1.4,
- corner_dist_threshold=0.7,
- )
-
- main_lines = [[main_tiles[i] for i in line] for line in main_lines]
- anno_lines = [[anno_tiles[i] for i in line] for line in anno_lines]
-
- all_lines = main_lines + anno_lines
-
- # Sort syllable in each line by increasing center y coord
- all_lines = [
- sorted(line, key=lambda tile: tile[1][1])
- for line in all_lines
- ]
-
- # Sort lines
- def seq_score(line):
- start_x = np.array([bbox[1][0] for bbox, center, seq, cls in line]).min()
- start_y = np.array([bbox[0][1] for bbox, center, seq, cls in line]).min()
- return start_y * 0.1 - start_x
-
- all_lines = sorted(all_lines, key=seq_score)
-
- line_infos = []
- for line in all_lines:
- tlx = np.array([bbox[0][0] for bbox, center, seq, cls in line]).mean()
- tly = np.array([bbox[0][1] for bbox, center, seq, cls in line]).min()
- brx = np.array([bbox[1][0] for bbox, center, seq, cls in line]).mean()
- bry = np.array([bbox[1][1] for bbox, center, seq, cls in line]).max()
- line_bbox = ((tlx, tly), (brx, bry))
- is_anno = line[0][3] in [2, 3]
- line_infos.append({
- 'line': line,
- 'bbox': line_bbox,
- 'is_anno': is_anno,
- })
-
- # Sort lines by actual reading order
- line_infos = sort_lines(line_infos)
-
- return line_infos
-
-
-def sort_lines(line_infos):
- lines_left = copy.copy(line_infos)
- ordered_lines = [lines_left[0]]
- del lines_left[0]
- anno_line_num = 0
-
- def dist(a, b):
- return np.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)
-
- while len(lines_left) > 0:
- cur_line = ordered_lines[-1]
- (tlx, tly), (brx, bry) = cur_line['bbox']
- line_width = (brx - tlx)
-
- if cur_line['is_anno']:
-
- if anno_line_num == 0:
- # check if there's a second anno line
- distances = [
- (dist((tlx, tly), (cand['bbox'][1][0], cand['bbox'][0][1])), i)
- for i, cand in enumerate(lines_left)
- if cand['is_anno']
- ]
- min_dist, min_idx = min(distances, default=(1e9, None))
-
- if min_dist < line_width / 2:
- ordered_lines.append(lines_left[min_idx])
- del lines_left[min_idx]
- # print('anno->anno')
- anno_line_num += 1
- continue
-
- next_expected_tr = (brx, bry)
-
- else: # anno_line_num == 1
- next_expected_tr = (brx + line_width, bry)
-
- # check for next main line
- distances = [
- (dist(next_expected_tr, (cand['bbox'][1][0], cand['bbox'][0][1])), i)
- for i, cand in enumerate(lines_left)
- if not cand['is_anno']
- ]
-
- min_dist, min_idx = min(distances, default=(1e9, None))
-
- if min_dist < line_width:
- ordered_lines.append(lines_left[min_idx])
- del lines_left[min_idx]
- # print('anno->main')
- anno_line_num = 0
- continue
-
- # select next line
- ordered_lines.append(lines_left[0])
- del lines_left[0]
-
- else: # not cur_line['is_anno']
-
- # check for next anno line
- distances = [
- (dist((brx, bry), (cand['bbox'][1][0], cand['bbox'][0][1])), i)
- for i, cand in enumerate(lines_left)
- if cand['is_anno']
- ]
-
- min_dist, min_idx = min(distances, default=(1e9, None))
-
- if min_dist < line_width / 2:
- ordered_lines.append(lines_left[min_idx])
- del lines_left[min_idx]
- # print('main->anno', min_idx)
- anno_line_num = 0
- continue
-
- # select next line
- # print('main->main')
- ordered_lines.append(lines_left[0])
- del lines_left[0]
-
- return ordered_lines
-
-
-def recognize_lines(line_infos, orig_image, syllable_recognizer, batch_size=32):
- tiles = []
- for line_idx, line_info in enumerate(line_infos):
- for bbox, center, seq, cls in line_info['line']:
- (tlx, tly), (brx, bry) = bbox
- w, h = brx - tlx, bry - tly
- pw, ph = w / 5, h / 5
- tile = orig_image[
- max(0, int(tly - ph)):min(orig_image.shape[0], int(bry + ph)),
- max(0, int(tlx - pw)):min(orig_image.shape[1], int(brx + pw)),
- ]
- tiles.append((tile, bbox, center, seq, cls))
-
- hangul_tiles = [(i, tile) for i, (tile, _, _, _, cls) in enumerate(tiles) if cls in [0, 2]]
-
- pred_syllables = ["〓"] * len(tiles)
- batches = list(batched(hangul_tiles, batch_size))
- for batch in tqdm(batches):
- indices, images = zip(*batch)
- batch_pred_syllables = syllable_recognizer.recognize(images)
- for i, pred_syllable in zip(indices, batch_pred_syllables):
- pred_syllables[i] = pred_syllable
-
- return pred_syllables
-
-
-def recognize_page(orig_image, centernet, syllable_recognizer, return_line_infos=False, batch_size=32):
- orig_size = (orig_image.shape[1], orig_image.shape[0])
- image = cv2.resize(orig_image, dsize=(512, 512), interpolation=cv2.INTER_AREA)
-
- image = image.astype(np.float32) / 255. - .5 # to [-.5, +.5] range
- image = image.transpose((2, 0, 1)) # [H, W, C] to [C, H, W]
- image = torch.as_tensor(image)
-
- # Run object detection
- centernet.eval()
- with torch.no_grad():
- output = centernet(torch.as_tensor(image)[None].to(centernet.device))
-
- sw, sh = orig_size[0] * 4 / 512, orig_size[1] * 4 / 512
-
- tiles = get_pred_detections(
- output, sw=sw, sh=sh,
- threshold=0.3,
- ae_threshold=20.0
- )
-
- line_infos = detect_lines(tiles)
-
- pred_syllables = recognize_lines(line_infos, orig_image, syllable_recognizer, batch_size=batch_size)
-
- if return_line_infos:
- return pred_syllables, line_infos
-
- return pred_syllables
diff --git a/spaces/Cropinky/esrgan/realesrgan/models/realesrgan_model.py b/spaces/Cropinky/esrgan/realesrgan/models/realesrgan_model.py
deleted file mode 100644
index c298a09c42433177f90001a0a31d029576072ccd..0000000000000000000000000000000000000000
--- a/spaces/Cropinky/esrgan/realesrgan/models/realesrgan_model.py
+++ /dev/null
@@ -1,258 +0,0 @@
-import numpy as np
-import random
-import torch
-from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt
-from basicsr.data.transforms import paired_random_crop
-from basicsr.models.srgan_model import SRGANModel
-from basicsr.utils import DiffJPEG, USMSharp
-from basicsr.utils.img_process_util import filter2D
-from basicsr.utils.registry import MODEL_REGISTRY
-from collections import OrderedDict
-from torch.nn import functional as F
-
-
-@MODEL_REGISTRY.register()
-class RealESRGANModel(SRGANModel):
- """RealESRGAN Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
-
- It mainly performs:
- 1. randomly synthesize LQ images in GPU tensors
- 2. optimize the networks with GAN training.
- """
-
- def __init__(self, opt):
- super(RealESRGANModel, self).__init__(opt)
- self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts
- self.usm_sharpener = USMSharp().cuda() # do usm sharpening
- self.queue_size = opt.get('queue_size', 180)
-
- @torch.no_grad()
- def _dequeue_and_enqueue(self):
- """It is the training pair pool for increasing the diversity in a batch.
-
- Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a
- batch could not have different resize scaling factors. Therefore, we employ this training pair pool
- to increase the degradation diversity in a batch.
- """
- # initialize
- b, c, h, w = self.lq.size()
- if not hasattr(self, 'queue_lr'):
- assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}'
- self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda()
- _, c, h, w = self.gt.size()
- self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda()
- self.queue_ptr = 0
- if self.queue_ptr == self.queue_size: # the pool is full
- # do dequeue and enqueue
- # shuffle
- idx = torch.randperm(self.queue_size)
- self.queue_lr = self.queue_lr[idx]
- self.queue_gt = self.queue_gt[idx]
- # get first b samples
- lq_dequeue = self.queue_lr[0:b, :, :, :].clone()
- gt_dequeue = self.queue_gt[0:b, :, :, :].clone()
- # update the queue
- self.queue_lr[0:b, :, :, :] = self.lq.clone()
- self.queue_gt[0:b, :, :, :] = self.gt.clone()
-
- self.lq = lq_dequeue
- self.gt = gt_dequeue
- else:
- # only do enqueue
- self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone()
- self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone()
- self.queue_ptr = self.queue_ptr + b
-
- @torch.no_grad()
- def feed_data(self, data):
- """Accept data from dataloader, and then add two-order degradations to obtain LQ images.
- """
- if self.is_train and self.opt.get('high_order_degradation', True):
- # training data synthesis
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- self.kernel1 = data['kernel1'].to(self.device)
- self.kernel2 = data['kernel2'].to(self.device)
- self.sinc_kernel = data['sinc_kernel'].to(self.device)
-
- ori_h, ori_w = self.gt.size()[2:4]
-
- # ----------------------- The first degradation process ----------------------- #
- # blur
- out = filter2D(self.gt_usm, self.kernel1)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, scale_factor=scale, mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob']
- if np.random.uniform() < self.opt['gaussian_noise_prob']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range'])
- out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts
- out = self.jpeger(out, quality=jpeg_p)
-
- # ----------------------- The second degradation process ----------------------- #
- # blur
- if np.random.uniform() < self.opt['second_blur_prob']:
- out = filter2D(out, self.kernel2)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range2'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range2'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(
- out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob2']
- if np.random.uniform() < self.opt['gaussian_noise_prob2']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range2'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
-
- # JPEG compression + the final sinc filter
- # We also need to resize images to desired sizes. We group [resize back + sinc filter] together
- # as one operation.
- # We consider two orders:
- # 1. [resize back + sinc filter] + JPEG compression
- # 2. JPEG compression + [resize back + sinc filter]
- # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines.
- if np.random.uniform() < 0.5:
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- else:
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
-
- # clamp and round
- self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255.
-
- # random crop
- gt_size = self.opt['gt_size']
- (self.gt, self.gt_usm), self.lq = paired_random_crop([self.gt, self.gt_usm], self.lq, gt_size,
- self.opt['scale'])
-
- # training pair pool
- self._dequeue_and_enqueue()
- # sharpen self.gt again, as self._dequeue_and_enqueue may have replaced it
- self.gt_usm = self.usm_sharpener(self.gt)
- self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract
- else:
- # for paired training or validation
- self.lq = data['lq'].to(self.device)
- if 'gt' in data:
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
- # do not use the synthetic process during validation
- self.is_train = False
- super(RealESRGANModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img)
- self.is_train = True
-
- def optimize_parameters(self, current_iter):
- # usm sharpening
- l1_gt = self.gt_usm
- percep_gt = self.gt_usm
- gan_gt = self.gt_usm
- if self.opt['l1_gt_usm'] is False:
- l1_gt = self.gt
- if self.opt['percep_gt_usm'] is False:
- percep_gt = self.gt
- if self.opt['gan_gt_usm'] is False:
- gan_gt = self.gt
-
- # optimize net_g
- for p in self.net_d.parameters():
- p.requires_grad = False
-
- self.optimizer_g.zero_grad()
- self.output = self.net_g(self.lq)
-
- l_g_total = 0
- loss_dict = OrderedDict()
- if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters):
- # pixel loss
- if self.cri_pix:
- l_g_pix = self.cri_pix(self.output, l1_gt)
- l_g_total += l_g_pix
- loss_dict['l_g_pix'] = l_g_pix
- # perceptual loss
- if self.cri_perceptual:
- l_g_percep, l_g_style = self.cri_perceptual(self.output, percep_gt)
- if l_g_percep is not None:
- l_g_total += l_g_percep
- loss_dict['l_g_percep'] = l_g_percep
- if l_g_style is not None:
- l_g_total += l_g_style
- loss_dict['l_g_style'] = l_g_style
- # gan loss
- fake_g_pred = self.net_d(self.output)
- l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False)
- l_g_total += l_g_gan
- loss_dict['l_g_gan'] = l_g_gan
-
- l_g_total.backward()
- self.optimizer_g.step()
-
- # optimize net_d
- for p in self.net_d.parameters():
- p.requires_grad = True
-
- self.optimizer_d.zero_grad()
- # real
- real_d_pred = self.net_d(gan_gt)
- l_d_real = self.cri_gan(real_d_pred, True, is_disc=True)
- loss_dict['l_d_real'] = l_d_real
- loss_dict['out_d_real'] = torch.mean(real_d_pred.detach())
- l_d_real.backward()
- # fake
- fake_d_pred = self.net_d(self.output.detach().clone()) # clone for pt1.9
- l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True)
- loss_dict['l_d_fake'] = l_d_fake
- loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach())
- l_d_fake.backward()
- self.optimizer_d.step()
-
- if self.ema_decay > 0:
- self.model_ema(decay=self.ema_decay)
-
- self.log_dict = self.reduce_loss_dict(loss_dict)
diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/runners/runner_base.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/runners/runner_base.py
deleted file mode 100644
index c944123917dd0bf9947f4204f9044538a0f8bf22..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/runners/runner_base.py
+++ /dev/null
@@ -1,658 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-import datetime
-import json
-import logging
-import os
-import time
-from pathlib import Path
-
-import torch
-import torch.distributed as dist
-import webdataset as wds
-from video_llama.common.dist_utils import (
- download_cached_file,
- get_rank,
- get_world_size,
- is_main_process,
- main_process,
-)
-from video_llama.common.registry import registry
-from video_llama.common.utils import is_url
-from video_llama.datasets.data_utils import concat_datasets, reorg_datasets_by_split, ChainDataset
-from video_llama.datasets.datasets.dataloader_utils import (
- IterLoader,
- MultiIterLoader,
- PrefetchLoader,
-)
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.utils.data import DataLoader, DistributedSampler
-
-
-@registry.register_runner("runner_base")
-class RunnerBase:
- """
- A runner class to train and evaluate a model given a task and datasets.
-
- The runner uses PyTorch DistributedDataParallel by default. Future releases
- will support other distributed frameworks.
- """
-
- def __init__(self, cfg, task, model, datasets, job_id):
- self.config = cfg
- self.job_id = job_id
-
- self.task = task
- self.datasets = datasets
-
- self._model = model
-
- self._wrapped_model = None
- self._device = None
- self._optimizer = None
- self._scaler = None
- self._dataloaders = None
- self._lr_sched = None
-
- self.start_epoch = 0
-
- # self.setup_seeds()
- self.setup_output_dir()
-
- @property
- def device(self):
- if self._device is None:
- self._device = torch.device(self.config.run_cfg.device)
-
- return self._device
-
- @property
- def use_distributed(self):
- return self.config.run_cfg.distributed
-
- @property
- def model(self):
- """
- A property to get the DDP-wrapped model on the device.
- """
- # move model to device
- if self._model.device != self.device:
- self._model = self._model.to(self.device)
-
- # distributed training wrapper
- if self.use_distributed:
- if self._wrapped_model is None:
- self._wrapped_model = DDP(
- self._model, device_ids=[self.config.run_cfg.gpu]
- )
- else:
- self._wrapped_model = self._model
-
- return self._wrapped_model
-
- @property
- def optimizer(self):
- # TODO make optimizer class and configurations
- if self._optimizer is None:
- num_parameters = 0
- p_wd, p_non_wd = [], []
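- # Split trainable parameters into two groups: regular weights receive
- # weight decay, while 1-D tensors and anything named like a bias or
- # norm parameter ("bias", "ln", "bn") are exempt from it.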
- for n, p in self.model.named_parameters():
- if not p.requires_grad:
- continue # frozen weights
- if p.ndim < 2 or "bias" in n or "ln" in n or "bn" in n:
- p_non_wd.append(p)
- else:
- p_wd.append(p)
- num_parameters += p.data.nelement()
- logging.info("number of trainable parameters: %d" % num_parameters)
- optim_params = [
- {
- "params": p_wd,
- "weight_decay": float(self.config.run_cfg.weight_decay),
- },
- {"params": p_non_wd, "weight_decay": 0},
- ]
- beta2 = self.config.run_cfg.get("beta2", 0.999)
- self._optimizer = torch.optim.AdamW(
- optim_params,
- lr=float(self.config.run_cfg.init_lr),
- weight_decay=float(self.config.run_cfg.weight_decay),
- betas=(0.9, beta2),
- )
-
- return self._optimizer
-
- @property
- def scaler(self):
- amp = self.config.run_cfg.get("amp", False)
-
- if amp:
- if self._scaler is None:
- self._scaler = torch.cuda.amp.GradScaler()
-
- return self._scaler
-
- @property
- def lr_scheduler(self):
- """
- A property to lazily create the learning rate scheduler on first access.
- """
- if self._lr_sched is None:
- lr_sched_cls = registry.get_lr_scheduler_class(self.config.run_cfg.lr_sched)
-
- # max_epoch = self.config.run_cfg.max_epoch
- max_epoch = self.max_epoch
- # min_lr = self.config.run_cfg.min_lr
- min_lr = self.min_lr
- # init_lr = self.config.run_cfg.init_lr
- init_lr = self.init_lr
-
- # optional parameters
- decay_rate = self.config.run_cfg.get("lr_decay_rate", None)
- warmup_start_lr = self.config.run_cfg.get("warmup_lr", -1)
- warmup_steps = self.config.run_cfg.get("warmup_steps", 0)
- iters_per_epoch = self.config.run_cfg.get("iters_per_epoch", None)
-
- if iters_per_epoch is None:
- try:
- iters_per_epoch = len(self.dataloaders['train'])
- except (AttributeError, TypeError):
- iters_per_epoch = 10000
-
- self._lr_sched = lr_sched_cls(
- optimizer=self.optimizer,
- max_epoch=max_epoch,
- iters_per_epoch=iters_per_epoch,
- min_lr=min_lr,
- init_lr=init_lr,
- decay_rate=decay_rate,
- warmup_start_lr=warmup_start_lr,
- warmup_steps=warmup_steps,
- )
-
- return self._lr_sched
-
- @property
- def dataloaders(self) -> dict:
- """
- A property to lazily create dataloaders for each split on first access.
-
- If no train_dataset_ratio is provided, concatenate map-style datasets and
- chain wds.DataPipe datasets separately. The training set becomes a tuple
- (ConcatDataset, ChainDataset); both are optional, but at least one of them is
- required. The resulting ConcatDataset and ChainDataset will be sampled evenly.
-
- If train_dataset_ratio is provided, create a MultiIterLoader to sample
- each dataset by ratios during training.
-
- Multiple datasets for validation and test are currently not supported.
-
- Returns:
- dict: {split_name: (tuples of) dataloader}
- """
- if self._dataloaders is None:
-
- # concatenate map-style datasets and chain wds.DataPipe datasets separately
- # training set becomes a tuple (ConcatDataset, ChainDataset), both are
- # optional but at least one of them is required. The resultant ConcatDataset
- # and ChainDataset will be sampled evenly.
- logging.info(
- "dataset_ratios not specified, datasets will be concatenated (map-style datasets) or chained (webdataset.DataPipeline)."
- )
-
- datasets = reorg_datasets_by_split(self.datasets)
- self.datasets = datasets
- # self.datasets = concat_datasets(datasets)
-
- # print dataset statistics after concatenation/chaining
- for split_name in self.datasets:
- if isinstance(self.datasets[split_name], tuple) or isinstance(
- self.datasets[split_name], list
- ):
- # mixed wds.DataPipeline and torch.utils.data.Dataset
- num_records = sum(
- [
- len(d)
- if not type(d) in [wds.DataPipeline, ChainDataset]
- else 0
- for d in self.datasets[split_name]
- ]
- )
-
- else:
- if hasattr(self.datasets[split_name], "__len__"):
- # a single map-style dataset
- num_records = len(self.datasets[split_name])
- else:
- # a single wds.DataPipeline
- num_records = -1
- logging.info(
- "Only a single wds.DataPipeline dataset, no __len__ attribute."
- )
-
- if num_records >= 0:
- logging.info(
- "Loaded {} records for {} split from the dataset.".format(
- num_records, split_name
- )
- )
-
- # create dataloaders
- split_names = sorted(self.datasets.keys())
-
- datasets = [self.datasets[split] for split in split_names]
- is_trains = [split in self.train_splits for split in split_names]
-
- batch_sizes = [
- self.config.run_cfg.batch_size_train
- if split == "train"
- else self.config.run_cfg.batch_size_eval
- for split in split_names
- ]
-
- collate_fns = []
- for dataset in datasets:
- if isinstance(dataset, tuple) or isinstance(dataset, list):
- collate_fns.append([getattr(d, "collater", None) for d in dataset])
- else:
- collate_fns.append(getattr(dataset, "collater", None))
-
- dataloaders = self.create_loaders(
- datasets=datasets,
- num_workers=self.config.run_cfg.num_workers,
- batch_sizes=batch_sizes,
- is_trains=is_trains,
- collate_fns=collate_fns,
- )
-
- self._dataloaders = {k: v for k, v in zip(split_names, dataloaders)}
-
- return self._dataloaders
-
- @property
- def cuda_enabled(self):
- return self.device.type == "cuda"
-
- @property
- def max_epoch(self):
- return int(self.config.run_cfg.max_epoch)
-
- @property
- def log_freq(self):
- log_freq = self.config.run_cfg.get("log_freq", 50)
- return int(log_freq)
-
- @property
- def init_lr(self):
- return float(self.config.run_cfg.init_lr)
-
- @property
- def min_lr(self):
- return float(self.config.run_cfg.min_lr)
-
- @property
- def accum_grad_iters(self):
- return int(self.config.run_cfg.get("accum_grad_iters", 1))
-
- @property
- def valid_splits(self):
- valid_splits = self.config.run_cfg.get("valid_splits", [])
-
- if len(valid_splits) == 0:
- logging.info("No validation splits found.")
-
- return valid_splits
-
- @property
- def test_splits(self):
- test_splits = self.config.run_cfg.get("test_splits", [])
-
- return test_splits
-
- @property
- def train_splits(self):
- train_splits = self.config.run_cfg.get("train_splits", [])
-
- if len(train_splits) == 0:
- logging.info("Empty train splits.")
-
- return train_splits
-
- @property
- def evaluate_only(self):
- """
- Set to True to skip training.
- """
- return self.config.run_cfg.evaluate
-
- @property
- def use_dist_eval_sampler(self):
- return self.config.run_cfg.get("use_dist_eval_sampler", True)
-
- @property
- def resume_ckpt_path(self):
- return self.config.run_cfg.get("resume_ckpt_path", None)
-
- @property
- def train_loader(self):
- train_dataloader = self.dataloaders["train"]
-
- return train_dataloader
-
- def setup_output_dir(self):
- lib_root = Path(registry.get_path("library_root"))
-
- output_dir = lib_root / self.config.run_cfg.output_dir / self.job_id
- result_dir = output_dir / "result"
-
- output_dir.mkdir(parents=True, exist_ok=True)
- result_dir.mkdir(parents=True, exist_ok=True)
-
- registry.register_path("result_dir", str(result_dir))
- registry.register_path("output_dir", str(output_dir))
-
- self.result_dir = result_dir
- self.output_dir = output_dir
-
- def train(self):
- start_time = time.time()
- best_agg_metric = 0
- best_epoch = 0
-
- self.log_config()
-
- # resume from checkpoint if specified
- if not self.evaluate_only and self.resume_ckpt_path is not None:
- self._load_checkpoint(self.resume_ckpt_path)
-
- for cur_epoch in range(self.start_epoch, self.max_epoch):
- # training phase
- if not self.evaluate_only:
- logging.info("Start training")
- train_stats = self.train_epoch(cur_epoch)
- self.log_stats(split_name="train", stats=train_stats)
-
- # evaluation phase
- if len(self.valid_splits) > 0:
- for split_name in self.valid_splits:
- logging.info("Evaluating on {}.".format(split_name))
-
- val_log = self.eval_epoch(
- split_name=split_name, cur_epoch=cur_epoch
- )
- if val_log is not None:
- if is_main_process():
- assert (
- "agg_metrics" in val_log
- ), "No agg_metrics found in validation log."
-
- agg_metrics = val_log["agg_metrics"]
- if agg_metrics > best_agg_metric and split_name == "val":
- best_epoch, best_agg_metric = cur_epoch, agg_metrics
-
- self._save_checkpoint(cur_epoch, is_best=True)
-
- val_log.update({"best_epoch": best_epoch})
- self.log_stats(val_log, split_name)
-
- else:
- # if no validation split is provided, we just save the checkpoint at the end of each epoch.
- if not self.evaluate_only:
- self._save_checkpoint(cur_epoch, is_best=False)
-
- if self.evaluate_only:
- break
-
- if self.config.run_cfg.distributed:
- dist.barrier()
-
- # testing phase
- test_epoch = "best" if len(self.valid_splits) > 0 else cur_epoch
- self.evaluate(cur_epoch=test_epoch, skip_reload=self.evaluate_only)
-
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- logging.info("Training time {}".format(total_time_str))
-
- def evaluate(self, cur_epoch="best", skip_reload=False):
- test_logs = dict()
-
- if len(self.test_splits) > 0:
- for split_name in self.test_splits:
- test_logs[split_name] = self.eval_epoch(
- split_name=split_name, cur_epoch=cur_epoch, skip_reload=skip_reload
- )
-
- return test_logs
-
- def train_epoch(self, epoch):
- # train
- self.model.train()
-
- return self.task.train_epoch(
- epoch=epoch,
- model=self.model,
- data_loader=self.train_loader,
- optimizer=self.optimizer,
- scaler=self.scaler,
- lr_scheduler=self.lr_scheduler,
- cuda_enabled=self.cuda_enabled,
- log_freq=self.log_freq,
- accum_grad_iters=self.accum_grad_iters,
- )
-
- @torch.no_grad()
- def eval_epoch(self, split_name, cur_epoch, skip_reload=False):
- """
- Evaluate the model on a given split.
-
- Args:
- split_name (str): name of the split to evaluate on.
- cur_epoch (int): current epoch.
- skip_reload (bool): whether to skip reloading the best checkpoint.
- During training, we reload the best checkpoint for validation.
- During testing, we use the provided weights and skip reloading the best checkpoint.
- """
- data_loader = self.dataloaders.get(split_name, None)
- assert data_loader, "data_loader for split {} is None.".format(split_name)
-
- # TODO In validation, you need to compute loss as well as metrics
- # TODO consider moving to model.before_evaluation()
- model = self.unwrap_dist_model(self.model)
- if not skip_reload and cur_epoch == "best":
- model = self._reload_best_model(model)
- model.eval()
-
- self.task.before_evaluation(
- model=model,
- dataset=self.datasets[split_name],
- )
- results = self.task.evaluation(model, data_loader)
-
- if results is not None:
- return self.task.after_evaluation(
- val_result=results,
- split_name=split_name,
- epoch=cur_epoch,
- )
-
- def unwrap_dist_model(self, model):
- if self.use_distributed:
- return model.module
- else:
- return model
-
- def create_loaders(
- self,
- datasets,
- num_workers,
- batch_sizes,
- is_trains,
- collate_fns,
- dataset_ratios=None,
- ):
- """
- Create dataloaders for training and validation.
- """
-
- def _create_loader(dataset, num_workers, bsz, is_train, collate_fn):
- # create a single dataloader for each split
- if isinstance(dataset, ChainDataset) or isinstance(
- dataset, wds.DataPipeline
- ):
- # wds.WebDataset instances are chained together
- # webdataset.DataPipeline has its own sampler and collate_fn
- loader = iter(
- DataLoader(
- dataset,
- batch_size=bsz,
- num_workers=num_workers,
- pin_memory=True,
- )
- )
- else:
- # map-style datasets are concatenated together
- # setup distributed sampler
- if self.use_distributed:
- sampler = DistributedSampler(
- dataset,
- shuffle=is_train,
- num_replicas=get_world_size(),
- rank=get_rank(),
- )
- if not self.use_dist_eval_sampler:
- # e.g. retrieval evaluation
- sampler = sampler if is_train else None
- else:
- sampler = None
-
- loader = DataLoader(
- dataset,
- batch_size=bsz,
- num_workers=num_workers,
- pin_memory=True,
- sampler=sampler,
- shuffle=sampler is None and is_train,
- collate_fn=collate_fn,
- drop_last=True if is_train else False,
- )
- loader = PrefetchLoader(loader)
-
- if is_train:
- loader = IterLoader(loader, use_distributed=self.use_distributed)
-
- return loader
-
- loaders = []
-
- for dataset, bsz, is_train, collate_fn in zip(
- datasets, batch_sizes, is_trains, collate_fns
- ):
- if isinstance(dataset, list) or isinstance(dataset, tuple):
- if hasattr(dataset[0], 'sample_ratio') and dataset_ratios is None:
- dataset_ratios = [d.sample_ratio for d in dataset]
- loader = MultiIterLoader(
- loaders=[
- _create_loader(d, num_workers, bsz, is_train, collate_fn[i])
- for i, d in enumerate(dataset)
- ],
- ratios=dataset_ratios,
- )
- else:
- loader = _create_loader(dataset, num_workers, bsz, is_train, collate_fn)
-
- loaders.append(loader)
-
- return loaders
-
- @main_process
- def _save_checkpoint(self, cur_epoch, is_best=False):
- """
- Save the checkpoint at the current epoch.
- """
- model_no_ddp = self.unwrap_dist_model(self.model)
- param_grad_dic = {
- k: v.requires_grad for (k, v) in model_no_ddp.named_parameters()
- }
- state_dict = model_no_ddp.state_dict()
- for k in list(state_dict.keys()):
- if k in param_grad_dic.keys() and not param_grad_dic[k]:
- # delete parameters that do not require gradient
- del state_dict[k]
- save_obj = {
- "model": state_dict,
- "optimizer": self.optimizer.state_dict(),
- "config": self.config.to_dict(),
- "scaler": self.scaler.state_dict() if self.scaler else None,
- "epoch": cur_epoch,
- }
- save_to = os.path.join(
- self.output_dir,
- "checkpoint_{}.pth".format("best" if is_best else cur_epoch),
- )
- logging.info("Saving checkpoint at epoch {} to {}.".format(cur_epoch, save_to))
- torch.save(save_obj, save_to)
-
- def _reload_best_model(self, model):
- """
- Load the best checkpoint for evaluation.
- """
- checkpoint_path = os.path.join(self.output_dir, "checkpoint_best.pth")
-
- logging.info("Loading checkpoint from {}.".format(checkpoint_path))
- checkpoint = torch.load(checkpoint_path, map_location="cpu")
- try:
- model.load_state_dict(checkpoint["model"])
- except RuntimeError as e:
- logging.warning(
- """
- Key mismatch when loading checkpoint. This is expected if only part of the model is saved.
- Trying to load the model with strict=False.
- """
- )
- model.load_state_dict(checkpoint["model"], strict=False)
- return model
-
- def _load_checkpoint(self, url_or_filename):
- """
- Resume from a checkpoint.
- """
- if is_url(url_or_filename):
- cached_file = download_cached_file(
- url_or_filename, check_hash=False, progress=True
- )
- # Note: torch.load() takes no `strict` argument; strictness applies to
- # load_state_dict() instead.
- checkpoint = torch.load(cached_file, map_location=self.device)
- elif os.path.isfile(url_or_filename):
- checkpoint = torch.load(url_or_filename, map_location=self.device)
- else:
- raise RuntimeError("checkpoint url or path is invalid")
-
- state_dict = checkpoint["model"]
- self.unwrap_dist_model(self.model).load_state_dict(state_dict)
-
- self.optimizer.load_state_dict(checkpoint["optimizer"])
- if self.scaler and "scaler" in checkpoint:
- self.scaler.load_state_dict(checkpoint["scaler"])
-
- self.start_epoch = checkpoint["epoch"] + 1
- logging.info("Resume checkpoint from {}".format(url_or_filename))
-
- @main_process
- def log_stats(self, stats, split_name):
- if isinstance(stats, dict):
- log_stats = {**{f"{split_name}_{k}": v for k, v in stats.items()}}
- with open(os.path.join(self.output_dir, "log.txt"), "a") as f:
- f.write(json.dumps(log_stats) + "\n")
- elif isinstance(stats, list):
- pass
-
- @main_process
- def log_config(self):
- with open(os.path.join(self.output_dir, "log.txt"), "a") as f:
- f.write(json.dumps(self.config.to_dict(), indent=4) + "\n")
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/chatbot.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/chatbot.py
deleted file mode 100644
index 43ea670f80f0dda1e9cd6e053cd478c0671698c4..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/chatbot.py
+++ /dev/null
@@ -1,247 +0,0 @@
-"""gr.Chatbot() component."""
-
-from __future__ import annotations
-
-import inspect
-from pathlib import Path
-from typing import Callable, Literal
-
-from gradio_client import utils as client_utils
-from gradio_client.documentation import document, set_documentation_group
-from gradio_client.serializing import JSONSerializable
-
-from gradio import utils
-from gradio.components.base import IOComponent, _Keywords
-from gradio.deprecation import warn_deprecation, warn_style_method_deprecation
-from gradio.events import (
- Changeable,
- EventListenerMethod,
- Selectable,
-)
-
-set_documentation_group("component")
-
-
-@document()
-class Chatbot(Changeable, Selectable, IOComponent, JSONSerializable):
- """
- Displays a chatbot output showing both user submitted messages and responses. Supports a subset of Markdown including bold, italics, code, tables. Also supports audio/video/image files, which are displayed in the Chatbot, and other kinds of files which are displayed as links.
- Preprocessing: passes the messages in the Chatbot as a {List[List[str | None | Tuple]]}, i.e. a list of lists. The inner list has 2 elements: the user message and the response message. See `Postprocessing` for the format of these messages.
- Postprocessing: expects function to return a {List[List[str | None | Tuple]]}, i.e. a list of lists. The inner list should have 2 elements: the user message and the response message. The individual messages can be (1) strings in valid Markdown, (2) tuples if sending files: (a filepath or URL to a file, [optional string alt text]) -- if the file is image/video/audio, it is displayed in the Chatbot, or (3) None, in which case the message is not displayed.
-
- Demos: chatbot_simple, chatbot_multimodal
- Guides: creating-a-chatbot
- """
-
- def __init__(
- self,
- value: list[list[str | tuple[str] | tuple[str | Path, str] | None]]
- | Callable
- | None = None,
- color_map: dict[str, str] | None = None,
- *,
- label: str | None = None,
- every: float | None = None,
- show_label: bool | None = None,
- container: bool = True,
- scale: int | None = None,
- min_width: int = 160,
- visible: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- height: int | None = None,
- latex_delimiters: list[dict[str, str | bool]] | None = None,
- rtl: bool = False,
- show_share_button: bool | None = None,
- **kwargs,
- ):
- """
- Parameters:
- value: Default value to show in chatbot. If callable, the function will be called whenever the app loads to set the initial value of the component.
- color_map: This parameter is deprecated.
- label: component name in interface.
- every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
- show_label: if True, will display label.
- container: If True, will place the component in a container - providing some extra padding around the border.
- scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
- min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
- visible: If False, component will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
- height: height of the component in pixels.
- latex_delimiters: A list of dicts of the form {"left": open delimiter (str), "right": close delimiter (str), "display": whether to display in newline (bool)} that will be used to render LaTeX expressions. If not provided, `latex_delimiters` is set to `[{ "left": "$$", "right": "$$", "display": True }]`, so only expressions enclosed in $$ delimiters will be rendered as LaTeX, and in a new line. Pass in an empty list to disable LaTeX rendering. For more information, see the [KaTeX documentation](https://katex.org/docs/autorender.html).
- rtl: If True, sets the direction of the rendered text to right-to-left. Default is False, which renders text left-to-right.
- show_share_button: If True, will show a share icon in the corner of the component that allows user to share outputs to Hugging Face Spaces Discussions. If False, icon does not appear. If set to None (default behavior), then the icon appears if this Gradio app is launched on Spaces, but not otherwise.
- """
- if color_map is not None:
- warn_deprecation("The 'color_map' parameter has been deprecated.")
- self.select: EventListenerMethod
- """
- Event listener for when the user selects message from Chatbot.
- Uses event data gradio.SelectData to carry `value` referring to text of selected message, and `index` tuple to refer to [message, participant] index.
- See EventData documentation on how to use this event data.
- """
- self.height = height
- self.rtl = rtl
- if latex_delimiters is None:
- latex_delimiters = [{"left": "$$", "right": "$$", "display": True}]
- self.latex_delimiters = latex_delimiters
- self.show_share_button = (
- (utils.get_space() is not None)
- if show_share_button is None
- else show_share_button
- )
-
- IOComponent.__init__(
- self,
- label=label,
- every=every,
- show_label=show_label,
- container=container,
- scale=scale,
- min_width=min_width,
- visible=visible,
- elem_id=elem_id,
- elem_classes=elem_classes,
- value=value,
- **kwargs,
- )
-
- def get_config(self):
- return {
- "value": self.value,
- "latex_delimiters": self.latex_delimiters,
- "selectable": self.selectable,
- "height": self.height,
- "show_share_button": self.show_share_button,
- "rtl": self.rtl,
- **IOComponent.get_config(self),
- }
-
- @staticmethod
- def update(
- value: list[list[str | tuple[str] | tuple[str, str] | None]]
- | Literal[_Keywords.NO_VALUE]
- | None = _Keywords.NO_VALUE,
- label: str | None = None,
- show_label: bool | None = None,
- container: bool | None = None,
- scale: int | None = None,
- min_width: int | None = None,
- visible: bool | None = None,
- height: int | None = None,
- rtl: bool | None = None,
- show_share_button: bool | None = None,
- ):
- updated_config = {
- "label": label,
- "show_label": show_label,
- "container": container,
- "scale": scale,
- "min_width": min_width,
- "visible": visible,
- "value": value,
- "height": height,
- "show_share_button": show_share_button,
- "rtl": rtl,
- "__type__": "update",
- }
- return updated_config
-
- def _preprocess_chat_messages(
- self, chat_message: str | dict | None
- ) -> str | tuple[str] | tuple[str, str] | None:
- if chat_message is None:
- return None
- elif isinstance(chat_message, dict):
- if chat_message["alt_text"] is not None:
- return (chat_message["name"], chat_message["alt_text"])
- else:
- return (chat_message["name"],)
- else: # string
- return chat_message
-
- def preprocess(
- self,
- y: list[list[str | dict | None] | tuple[str | dict | None, str | dict | None]],
- ) -> list[list[str | tuple[str] | tuple[str, str] | None]]:
- if y is None:
- return y
- processed_messages = []
- for message_pair in y:
- assert isinstance(
- message_pair, (tuple, list)
- ), f"Expected a list of lists or list of tuples. Received: {message_pair}"
- assert (
- len(message_pair) == 2
- ), f"Expected a list of lists of length 2 or list of tuples of length 2. Received: {message_pair}"
- processed_messages.append(
- [
- self._preprocess_chat_messages(message_pair[0]),
- self._preprocess_chat_messages(message_pair[1]),
- ]
- )
- return processed_messages
-
- def _postprocess_chat_messages(
- self, chat_message: str | tuple | list | None
- ) -> str | dict | None:
- if chat_message is None:
- return None
- elif isinstance(chat_message, (tuple, list)):
- file_uri = str(chat_message[0])
- if utils.validate_url(file_uri):
- filepath = file_uri
- else:
- filepath = self.make_temp_copy_if_needed(file_uri)
-
- mime_type = client_utils.get_mimetype(filepath)
- return {
- "name": filepath,
- "mime_type": mime_type,
- "alt_text": chat_message[1] if len(chat_message) > 1 else None,
- "data": None, # These last two fields are filled in by the frontend
- "is_file": True,
- }
- elif isinstance(chat_message, str):
- chat_message = inspect.cleandoc(chat_message)
- return chat_message
- else:
- raise ValueError(f"Invalid message for Chatbot component: {chat_message}")
-
- def postprocess(
- self,
- y: list[list[str | tuple[str] | tuple[str, str] | None] | tuple],
- ) -> list[list[str | dict | None]]:
- """
- Parameters:
- y: List of lists representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. It can also be a tuple whose first element is a string or pathlib.Path filepath or URL to an image/video/audio, and second (optional) element is the alt text, in which case the media file is displayed. It can also be None, in which case that message is not displayed.
- Returns:
- List of lists representing the message and response. Each message and response will be a string of HTML, or a dictionary with media information. Or None if the message is not to be displayed.
- """
- if y is None:
- return []
- processed_messages = []
- for message_pair in y:
- assert isinstance(
- message_pair, (tuple, list)
- ), f"Expected a list of lists or list of tuples. Received: {message_pair}"
- assert (
- len(message_pair) == 2
- ), f"Expected a list of lists of length 2 or list of tuples of length 2. Received: {message_pair}"
- processed_messages.append(
- [
- self._postprocess_chat_messages(message_pair[0]),
- self._postprocess_chat_messages(message_pair[1]),
- ]
- )
- return processed_messages
-
- def style(self, height: int | None = None, **kwargs):
- """
- This method is deprecated. Please set these arguments in the constructor instead.
- """
- warn_style_method_deprecation()
- if height is not None:
- self.height = height
- return self
diff --git a/spaces/Datasculptor/LoRA-DreamBooth-Training-UI/style.css b/spaces/Datasculptor/LoRA-DreamBooth-Training-UI/style.css
deleted file mode 100644
index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/LoRA-DreamBooth-Training-UI/style.css
+++ /dev/null
@@ -1,3 +0,0 @@
-h1 {
- text-align: center;
-}
diff --git a/spaces/Deci/YOLO-NAS-Pose-Demo/app.py b/spaces/Deci/YOLO-NAS-Pose-Demo/app.py
deleted file mode 100644
index 415f82e956fd113401c30a6381b1a0062cb4d33e..0000000000000000000000000000000000000000
--- a/spaces/Deci/YOLO-NAS-Pose-Demo/app.py
+++ /dev/null
@@ -1,95 +0,0 @@
-from io import BytesIO
-
-import cv2
-import gradio as gr
-import numpy as np
-import requests
-from PIL import Image
-
-
-from super_gradients.common.object_names import Models
-from super_gradients.training import models
-from super_gradients.training.utils.visualization.detection import draw_bbox
-from super_gradients.training.utils.visualization.pose_estimation import PoseVisualization
-
-# Initialize your pose estimation model
-yolo_nas_pose = models.get("yolo_nas_pose_l",
- num_classes=17,
- checkpoint_path="./yolo_nas_pose_l_coco_pose.pth")
-
-def process_and_predict(url=None,
- image=None,
- confidence=0.5,
- iou=0.5):
- # If a URL is provided, use it directly for prediction
- if url is not None and url.strip() != "":
- response = requests.get(url)
- image = Image.open(BytesIO(response.content))
- image = np.array(image)
- result = yolo_nas_pose.predict(image, conf=confidence, iou=iou)
- # If a file is uploaded, read it, convert it to a numpy array and use it for prediction
- elif image is not None:
- result = yolo_nas_pose.predict(image, conf=confidence, iou=iou)
- else:
- return None # If no input is provided, return None
-
- # Extract prediction data
- image_prediction = result._images_prediction_lst[0]
-
- pose_data = image_prediction.prediction
-
- # Visualize the prediction
- output_image = PoseVisualization.draw_poses(
- image=image_prediction.image,
- poses=pose_data.poses,
- boxes=pose_data.bboxes_xyxy,
- scores=pose_data.scores,
- is_crowd=None,
- edge_links=pose_data.edge_links,
- edge_colors=pose_data.edge_colors,
- keypoint_colors=pose_data.keypoint_colors,
- joint_thickness=2,
- box_thickness=2,
- keypoint_radius=5
- )
-
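- # Draw the same poses a second time onto an all-black canvas of the
- # same size, yielding the skeleton-only view returned alongside the
- # annotated image.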
- blank_image = np.zeros_like(image_prediction.image)
-
- skeleton_image = PoseVisualization.draw_poses(
- image=blank_image,
- poses=pose_data.poses,
- boxes=pose_data.bboxes_xyxy,
- scores=pose_data.scores,
- is_crowd=None,
- edge_links=pose_data.edge_links,
- edge_colors=pose_data.edge_colors,
- keypoint_colors=pose_data.keypoint_colors,
- joint_thickness=2,
- box_thickness=2,
- keypoint_radius=5
- )
-
- return output_image, skeleton_image
-
-# Define the Gradio interface
-iface = gr.Interface(
- fn=process_and_predict,
- inputs=[
- gr.Textbox(placeholder="Enter Image URL", label="Image URL"),
- gr.Image(label="Upload Image", type='numpy'),
- gr.Slider(minimum=0, maximum=1, step=0.01, value=0.5, label="Confidence Threshold"),
- gr.Slider(minimum=0, maximum=1, step=0.01, value=0.5, label="IoU Threshold")
- ],
- outputs=[
- gr.components.Image(label="Estimated Pose"),
- gr.components.Image(label="Skeleton Only")
- ],
- title="YOLO-NAS-Pose Demo",
- description="Upload an image, enter an image URL, or use your webcam to use a pretrained YOLO-NAS-Pose L for inference. Get more hands-on with the [starter notebook for inference](https://bit.ly/yn-pose-inference), and learn how to fine-tune your own model with the [fine-tuning notebook](https://bit.ly/yn-pose-fine-tuning). The official home of YOLO-NAS-Pose is SuperGradients, [gives us a ⭐️ on GitHub!](https://github.com/Deci-AI/super-gradients)",
- live=True,
- allow_flagging="never", # gradio expects a string here, not a bool
-
- )
-
-# Launch the interface
-iface.launch()
\ No newline at end of file
diff --git a/spaces/Deepak107/NSFW-Detection/app.py b/spaces/Deepak107/NSFW-Detection/app.py
deleted file mode 100644
index 8c150025b9d5756acce3735122b8b106474fd6b2..0000000000000000000000000000000000000000
--- a/spaces/Deepak107/NSFW-Detection/app.py
+++ /dev/null
@@ -1,18 +0,0 @@
-
-from tensorflow import keras
-import gradio as gr
-model = keras.models.load_model('NSFW2.h5')
-class_names = ['normal', 'porn']
-
-def predict_input_image(img):
- img_4d = img.reshape(-1, 224, 224, 3)
- prediction = model.predict(img_4d)[0]
- return {class_names[i]: float(prediction[i]) for i in range(len(class_names))}
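-
- # The Gradio Image input delivers a (224, 224, 3) numpy array; the reshape
- # above adds the batch dimension Keras expects, and the returned dict maps
- # each class name to its predicted probability.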
-
-
-image = gr.inputs.Image(shape=(224,224))
-label = gr.outputs.Label(num_top_classes=1)
-
- gr.Interface(fn=predict_input_image, inputs=image, outputs=label, interpretation='default').launch(debug=True)
-
-
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/training_scripts/sg3/training/networks_stylegan3.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/training_scripts/sg3/training/networks_stylegan3.py
deleted file mode 100644
index 31d3485accc72888a2cbb7d43bffeb8ae2f13c48..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/training_scripts/sg3/training/networks_stylegan3.py
+++ /dev/null
@@ -1,635 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Generator architecture from the paper
-"Alias-Free Generative Adversarial Networks"."""
-
-import numpy as np
-import scipy.signal
-import scipy.optimize
-import scipy.special # used by design_lowpass_filter's radial (jinc) filter
-import torch
-from torch_utils import misc
-from torch_utils import persistence
-from torch_utils.ops import conv2d_gradfix
-from torch_utils.ops import filtered_lrelu
-from torch_utils.ops import bias_act
-
-# ----------------------------------------------------------------------------
-
-
-@misc.profiled_function
-def modulated_conv2d(
- # Input tensor: [batch_size, in_channels, in_height, in_width]
- x,
- # Weight tensor: [out_channels, in_channels, kernel_height, kernel_width]
- w,
- s, # Style tensor: [batch_size, in_channels]
- demodulate=True, # Apply weight demodulation?
- padding=0, # Padding: int or [padH, padW]
- input_gain=None, # Optional scale factors for the input channels: [], [in_channels], or [batch_size, in_channels]
-):
- with misc.suppress_tracer_warnings(): # this value will be treated as a constant
- batch_size = int(x.shape[0])
- out_channels, in_channels, kh, kw = w.shape
- misc.assert_shape(w, [out_channels, in_channels, kh, kw]) # [OIkk]
- misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW]
- misc.assert_shape(s, [batch_size, in_channels]) # [NI]
-
- # Pre-normalize inputs.
- if demodulate:
- w = w * w.square().mean([1, 2, 3], keepdim=True).rsqrt()
- s = s * s.square().mean().rsqrt()
-
- # Modulate weights.
- w = w.unsqueeze(0) # [NOIkk]
- w = w * s.unsqueeze(1).unsqueeze(3).unsqueeze(4) # [NOIkk]
-
- # Demodulate weights.
- if demodulate:
- dcoefs = (w.square().sum(dim=[2, 3, 4]) + 1e-8).rsqrt() # [NO]
- w = w * dcoefs.unsqueeze(2).unsqueeze(3).unsqueeze(4) # [NOIkk]
-
- # Apply input scaling.
- if input_gain is not None:
- input_gain = input_gain.expand(batch_size, in_channels) # [NI]
- w = w * input_gain.unsqueeze(1).unsqueeze(3).unsqueeze(4) # [NOIkk]
-
- # Execute as one fused op using grouped convolution.
- x = x.reshape(1, -1, *x.shape[2:])
- w = w.reshape(-1, in_channels, kh, kw)
- x = conv2d_gradfix.conv2d(input=x, weight=w.to(
- x.dtype), padding=padding, groups=batch_size)
- x = x.reshape(batch_size, -1, *x.shape[2:])
- return x
-
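- # Shape walk-through (illustrative numbers): with x=[8, 512, 36, 36],
- # w=[512, 512, 3, 3] and s=[8, 512], the batch is folded into the channel
- # axis, the modulated weights become [8*512, 512, 3, 3], and the grouped
- # convolution (groups=8) applies a distinct kernel per sample before the
- # result is reshaped back to [8, 512, H', W'].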
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class FullyConnectedLayer(torch.nn.Module):
- def __init__(self,
- in_features, # Number of input features.
- out_features, # Number of output features.
- # Activation function: 'relu', 'lrelu', etc.
- activation='linear',
- bias=True, # Apply additive bias before the activation function?
- lr_multiplier=1, # Learning rate multiplier.
- # Initial standard deviation of the weight tensor.
- weight_init=1,
- bias_init=0, # Initial value of the additive bias.
- ):
- super().__init__()
- self.in_features = in_features
- self.out_features = out_features
- self.activation = activation
- self.weight = torch.nn.Parameter(torch.randn(
- [out_features, in_features]) * (weight_init / lr_multiplier))
- bias_init = np.broadcast_to(np.asarray(
- bias_init, dtype=np.float32), [out_features])
- self.bias = torch.nn.Parameter(torch.from_numpy(
- bias_init / lr_multiplier)) if bias else None
- self.weight_gain = lr_multiplier / np.sqrt(in_features)
- self.bias_gain = lr_multiplier
-
- def forward(self, x):
- w = self.weight.to(x.dtype) * self.weight_gain
- b = self.bias
- if b is not None:
- b = b.to(x.dtype)
- if self.bias_gain != 1:
- b = b * self.bias_gain
- if self.activation == 'linear' and b is not None:
- x = torch.addmm(b.unsqueeze(0), x, w.t())
- else:
- x = x.matmul(w.t())
- x = bias_act.bias_act(x, b, act=self.activation)
- return x
-
- def extra_repr(self):
- return f'in_features={self.in_features:d}, out_features={self.out_features:d}, activation={self.activation:s}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class MappingNetwork(torch.nn.Module):
- def __init__(self,
- z_dim, # Input latent (Z) dimensionality.
- # Conditioning label (C) dimensionality, 0 = no labels.
- c_dim,
- # Intermediate latent (W) dimensionality.
- w_dim,
- # Number of intermediate latents to output.
- num_ws,
- num_layers=2, # Number of mapping layers.
- # Learning rate multiplier for the mapping layers.
- lr_multiplier=0.01,
- # Decay for tracking the moving average of W during training.
- w_avg_beta=0.998,
- ):
- super().__init__()
- self.z_dim = z_dim
- self.c_dim = c_dim
- self.w_dim = w_dim
- self.num_ws = num_ws
- self.num_layers = num_layers
- self.w_avg_beta = w_avg_beta
-
- # Construct layers.
- self.embed = FullyConnectedLayer(
- self.c_dim, self.w_dim) if self.c_dim > 0 else None
- features = [self.z_dim + (self.w_dim if self.c_dim >
- 0 else 0)] + [self.w_dim] * self.num_layers
- for idx, in_features, out_features in zip(range(num_layers), features[:-1], features[1:]):
- layer = FullyConnectedLayer(
- in_features, out_features, activation='lrelu', lr_multiplier=lr_multiplier)
- setattr(self, f'fc{idx}', layer)
- self.register_buffer('w_avg', torch.zeros([w_dim]))
-
- def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False):
- misc.assert_shape(z, [None, self.z_dim])
- if truncation_cutoff is None:
- truncation_cutoff = self.num_ws
-
- # Embed, normalize, and concatenate inputs.
- x = z.to(torch.float32)
- x = x * (x.square().mean(1, keepdim=True) + 1e-8).rsqrt()
- if self.c_dim > 0:
- misc.assert_shape(c, [None, self.c_dim])
- y = self.embed(c.to(torch.float32))
- y = y * (y.square().mean(1, keepdim=True) + 1e-8).rsqrt()
- x = torch.cat([x, y], dim=1) if x is not None else y
-
- # Execute layers.
- for idx in range(self.num_layers):
- x = getattr(self, f'fc{idx}')(x)
-
- # Update moving average of W.
- if update_emas:
- self.w_avg.copy_(x.detach().mean(
- dim=0).lerp(self.w_avg, self.w_avg_beta))
-
- # Broadcast and apply truncation.
- x = x.unsqueeze(1).repeat([1, self.num_ws, 1])
- if truncation_psi != 1:
- x[:, :truncation_cutoff] = self.w_avg.lerp(
- x[:, :truncation_cutoff], truncation_psi)
- return x
-
- def extra_repr(self):
- return f'z_dim={self.z_dim:d}, c_dim={self.c_dim:d}, w_dim={self.w_dim:d}, num_ws={self.num_ws:d}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class SynthesisInput(torch.nn.Module):
- def __init__(self,
- w_dim, # Intermediate latent (W) dimensionality.
- channels, # Number of output channels.
- size, # Output spatial size: int or [width, height].
- sampling_rate, # Output sampling rate.
- bandwidth, # Output bandwidth.
- square, # Force square output instead of the rectangular (width = height/2) default?
- ):
- super().__init__()
- self.w_dim = w_dim
- self.channels = channels
- self.square = square
- if self.square:
- self.size = np.broadcast_to(np.asarray(size), [2])
- else:
- self.size = np.array([size // 2, size]) # [width, height]
- self.sampling_rate = sampling_rate
- self.bandwidth = bandwidth
-
- # Draw random frequencies from uniform 2D disc.
- freqs = torch.randn([self.channels, 2])
- radii = freqs.square().sum(dim=1, keepdim=True).sqrt()
- freqs /= radii * radii.square().exp().pow(0.25)
- freqs *= bandwidth
- phases = torch.rand([self.channels]) - 0.5
-
- # Setup parameters and buffers.
- self.weight = torch.nn.Parameter(
- torch.randn([self.channels, self.channels]))
- self.affine = FullyConnectedLayer(
- w_dim, 4, weight_init=0, bias_init=[1, 0, 0, 0])
- # User-specified inverse transform wrt. resulting image.
- self.register_buffer('transform', torch.eye(3, 3))
- self.register_buffer('freqs', freqs)
- self.register_buffer('phases', phases)
-
- def forward(self, w):
- # Introduce batch dimension.
- transforms = self.transform.unsqueeze(0) # [batch, row, col]
- freqs = self.freqs.unsqueeze(0) # [batch, channel, xy]
- phases = self.phases.unsqueeze(0) # [batch, channel]
-
- # Apply learned transformation.
- t = self.affine(w) # t = (r_c, r_s, t_x, t_y)
- # t' = (r'_c, r'_s, t'_x, t'_y)
- t = t / t[:, :2].norm(dim=1, keepdim=True)
- # Inverse rotation wrt. resulting image.
- m_r = torch.eye(3, device=w.device).unsqueeze(
- 0).repeat([w.shape[0], 1, 1])
- m_r[:, 0, 0] = t[:, 0] # r'_c
- m_r[:, 0, 1] = -t[:, 1] # r'_s
- m_r[:, 1, 0] = t[:, 1] # r'_s
- m_r[:, 1, 1] = t[:, 0] # r'_c
- # Inverse translation wrt. resulting image.
- m_t = torch.eye(3, device=w.device).unsqueeze(
- 0).repeat([w.shape[0], 1, 1])
- m_t[:, 0, 2] = -t[:, 2] # t'_x
- m_t[:, 1, 2] = -t[:, 3] # t'_y
- # First rotate resulting image, then translate, and finally apply user-specified transform.
- transforms = m_r @ m_t @ transforms
-
- # Transform frequencies.
- phases = phases + (freqs @ transforms[:, :2, 2:]).squeeze(2)
- freqs = freqs @ transforms[:, :2, :2]
-
- # Dampen out-of-band frequencies that may occur due to the user-specified transform.
- amplitudes = (1 - (freqs.norm(dim=2) - self.bandwidth) /
- (self.sampling_rate / 2 - self.bandwidth)).clamp(0, 1)
-
- # Construct sampling grid.
- theta = torch.eye(2, 3, device=w.device)
- theta[0, 0] = 0.5 * self.size[0] / self.sampling_rate
- theta[1, 1] = 0.5 * self.size[1] / self.sampling_rate
- grids = torch.nn.functional.affine_grid(theta.unsqueeze(
- 0), [1, 1, self.size[1], self.size[0]], align_corners=False)
-
- # Compute Fourier features.
- x = (grids.unsqueeze(3) @ freqs.permute(0, 2, 1).unsqueeze(1).unsqueeze(2)
- ).squeeze(3) # [batch, height, width, channel]
- x = x + phases.unsqueeze(1).unsqueeze(2)
- x = torch.sin(x * (np.pi * 2))
- x = x * amplitudes.unsqueeze(1).unsqueeze(2)
-
- # Apply trainable mapping.
- weight = self.weight / np.sqrt(self.channels)
- x = x @ weight.t()
-
- # Ensure correct shape.
- x = x.permute(0, 3, 1, 2) # [batch, channel, height, width]
- misc.assert_shape(x, [w.shape[0], self.channels,
- int(self.size[1]), int(self.size[0])])
- return x
-
- def extra_repr(self):
- return '\n'.join([
- f'w_dim={self.w_dim:d}, channels={self.channels:d}, size={list(self.size)},',
- f'sampling_rate={self.sampling_rate:g}, bandwidth={self.bandwidth:g}'])
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class SynthesisLayer(torch.nn.Module):
- def __init__(self,
- # Intermediate latent (W) dimensionality.
- w_dim,
- is_torgb, # Is this the final ToRGB layer?
- is_critically_sampled, # Does this layer use critical sampling?
- use_fp16, # Does this layer use FP16?
-
- # Input & output specifications.
- in_channels, # Number of input channels.
- out_channels, # Number of output channels.
- # Input spatial size: int or [width, height].
- in_size,
- # Output spatial size: int or [width, height].
- out_size,
- in_sampling_rate, # Input sampling rate (s).
- out_sampling_rate, # Output sampling rate (s).
- # Input cutoff frequency (f_c).
- in_cutoff,
- # Output cutoff frequency (f_c).
- out_cutoff,
- # Input transition band half-width (f_h).
- in_half_width,
- # Output transition band half-width (f_h).
- out_half_width,
-
- # Hyperparameters.
- # Convolution kernel size. Ignored for the final ToRGB layer.
- conv_kernel=3,
- # Low-pass filter size relative to the lower resolution when up/downsampling.
- filter_size=6,
- # Relative sampling rate for leaky ReLU. Ignored for the final ToRGB layer.
- lrelu_upsampling=2,
- # Use radially symmetric downsampling filter? Ignored for critically sampled layers.
- use_radial_filters=False,
- # Clamp the output to [-X, +X], None = disable clamping.
- conv_clamp=256,
- # Decay rate for the moving average of input magnitudes.
- magnitude_ema_beta=0.999,
- square=False, # default is for rectangular images
- ):
- super().__init__()
- self.w_dim = w_dim
- self.is_torgb = is_torgb
- self.is_critically_sampled = is_critically_sampled
- self.use_fp16 = use_fp16
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.square = square
- if self.square:
- self.in_size = np.broadcast_to(np.asarray(in_size), [2])
- self.out_size = np.broadcast_to(np.asarray(out_size), [2])
- else:
- self.in_size = np.array([in_size // 2, in_size]) # [width, height]
- self.out_size = np.array([out_size // 2, out_size]) # [width, height]
- self.in_sampling_rate = in_sampling_rate
- self.out_sampling_rate = out_sampling_rate
- self.tmp_sampling_rate = max(
- in_sampling_rate, out_sampling_rate) * (1 if is_torgb else lrelu_upsampling)
- self.in_cutoff = in_cutoff
- self.out_cutoff = out_cutoff
- self.in_half_width = in_half_width
- self.out_half_width = out_half_width
- self.conv_kernel = 1 if is_torgb else conv_kernel
- self.conv_clamp = conv_clamp
- self.magnitude_ema_beta = magnitude_ema_beta
-
- # Setup parameters and buffers.
- self.affine = FullyConnectedLayer(
- self.w_dim, self.in_channels, bias_init=1)
- self.weight = torch.nn.Parameter(torch.randn(
- [self.out_channels, self.in_channels, self.conv_kernel, self.conv_kernel]))
- self.bias = torch.nn.Parameter(torch.zeros([self.out_channels]))
- self.register_buffer('magnitude_ema', torch.ones([]))
-
- # Design upsampling filter.
- self.up_factor = int(
- np.rint(self.tmp_sampling_rate / self.in_sampling_rate))
- assert self.in_sampling_rate * self.up_factor == self.tmp_sampling_rate
- self.up_taps = filter_size * \
- self.up_factor if self.up_factor > 1 and not self.is_torgb else 1
- self.register_buffer('up_filter', self.design_lowpass_filter(
- numtaps=self.up_taps, cutoff=self.in_cutoff, width=self.in_half_width*2, fs=self.tmp_sampling_rate))
-
- # Design downsampling filter.
- self.down_factor = int(
- np.rint(self.tmp_sampling_rate / self.out_sampling_rate))
- assert self.out_sampling_rate * self.down_factor == self.tmp_sampling_rate
- self.down_taps = filter_size * \
- self.down_factor if self.down_factor > 1 and not self.is_torgb else 1
- self.down_radial = use_radial_filters and not self.is_critically_sampled
- self.register_buffer('down_filter', self.design_lowpass_filter(
- numtaps=self.down_taps, cutoff=self.out_cutoff, width=self.out_half_width*2, fs=self.tmp_sampling_rate, radial=self.down_radial))
-
- # Compute padding.
- # Desired output size before downsampling.
- pad_total = (self.out_size - 1) * self.down_factor + 1
- # Input size after upsampling.
- pad_total -= (self.in_size + self.conv_kernel - 1) * self.up_factor
- # Size reduction caused by the filters.
- pad_total += self.up_taps + self.down_taps - 2
- # Shift sample locations according to the symmetric interpretation (Appendix C.3).
- pad_lo = (pad_total + self.up_factor) // 2
- pad_hi = pad_total - pad_lo
- self.padding = [int(pad_lo[0]), int(pad_hi[0]),
- int(pad_lo[1]), int(pad_hi[1])]
-
- def forward(self, x, w, noise_mode='random', force_fp32=False, update_emas=False):
- assert noise_mode in ['random', 'const', 'none'] # unused
- misc.assert_shape(x, [None, self.in_channels, int(
- self.in_size[1]), int(self.in_size[0])])
- misc.assert_shape(w, [x.shape[0], self.w_dim])
-
- # Track input magnitude.
- if update_emas:
- with torch.autograd.profiler.record_function('update_magnitude_ema'):
- magnitude_cur = x.detach().to(torch.float32).square().mean()
- self.magnitude_ema.copy_(magnitude_cur.lerp(
- self.magnitude_ema, self.magnitude_ema_beta))
- input_gain = self.magnitude_ema.rsqrt()
-
- # Execute affine layer.
- styles = self.affine(w)
- if self.is_torgb:
- weight_gain = 1 / np.sqrt(self.in_channels * (self.conv_kernel ** 2))
- styles = styles * weight_gain
-
- # Execute modulated conv2d.
- dtype = torch.float16 if (
- self.use_fp16 and not force_fp32 and x.device.type == 'cuda') else torch.float32
- x = modulated_conv2d(x=x.to(dtype), w=self.weight, s=styles,
- padding=self.conv_kernel-1, demodulate=(not self.is_torgb), input_gain=input_gain)
-
- # Execute bias, filtered leaky ReLU, and clamping.
- gain = 1 if self.is_torgb else np.sqrt(2)
- slope = 1 if self.is_torgb else 0.2
- x = filtered_lrelu.filtered_lrelu(x=x, fu=self.up_filter, fd=self.down_filter, b=self.bias.to(x.dtype),
- up=self.up_factor, down=self.down_factor, padding=self.padding, gain=gain, slope=slope, clamp=self.conv_clamp)
-
- # Ensure correct shape and dtype.
- misc.assert_shape(x, [None, self.out_channels, int(
- self.out_size[1]), int(self.out_size[0])])
- assert x.dtype == dtype
- return x
-
- @staticmethod
- def design_lowpass_filter(numtaps, cutoff, width, fs, radial=False):
- assert numtaps >= 1
-
- # Identity filter.
- if numtaps == 1:
- return None
-
- # Separable Kaiser low-pass filter.
- if not radial:
- f = scipy.signal.firwin(
- numtaps=numtaps, cutoff=cutoff, width=width, fs=fs)
- return torch.as_tensor(f, dtype=torch.float32)
-
- # Radially symmetric jinc-based filter.
- x = (np.arange(numtaps) - (numtaps - 1) / 2) / fs
- r = np.hypot(*np.meshgrid(x, x))
- f = scipy.special.j1(2 * cutoff * (np.pi * r)) / (np.pi * r)
- beta = scipy.signal.kaiser_beta(
- scipy.signal.kaiser_atten(numtaps, width / (fs / 2)))
- w = np.kaiser(numtaps, beta)
- f *= np.outer(w, w)
- f /= np.sum(f)
- return torch.as_tensor(f, dtype=torch.float32)
-
- def extra_repr(self):
- return '\n'.join([
- f'w_dim={self.w_dim:d}, is_torgb={self.is_torgb},',
- f'is_critically_sampled={self.is_critically_sampled}, use_fp16={self.use_fp16},',
- f'in_sampling_rate={self.in_sampling_rate:g}, out_sampling_rate={self.out_sampling_rate:g},',
- f'in_cutoff={self.in_cutoff:g}, out_cutoff={self.out_cutoff:g},',
- f'in_half_width={self.in_half_width:g}, out_half_width={self.out_half_width:g},',
- f'in_size={list(self.in_size)}, out_size={list(self.out_size)},',
- f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}'])
-
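As a sanity check on the two trickiest parts of SynthesisLayer above, here is a minimal standalone sketch with hypothetical layer numbers (2x internal resampling, 3x3 conv; none of these values come from a real config). It reproduces the separable Kaiser design used by design_lowpass_filter and the padding arithmetic at the end of __init__.

import numpy as np
import scipy.signal

# Hypothetical per-layer numbers (not from a real config).
in_size = out_size = np.array([10, 20])   # [width, height]
up_factor = down_factor = 2
conv_kernel, filter_size = 3, 6
up_taps = filter_size * up_factor         # 12 taps
down_taps = filter_size * down_factor     # 12 taps

# Separable Kaiser low-pass, as in the non-radial branch of design_lowpass_filter.
fir = scipy.signal.firwin(numtaps=up_taps, cutoff=6.0, width=4.0, fs=32.0)

# Padding arithmetic from __init__, evaluated per axis.
pad_total = (out_size - 1) * down_factor + 1           # size wanted before downsampling
pad_total -= (in_size + conv_kernel - 1) * up_factor   # size after conv + upsampling
pad_total += up_taps + down_taps - 2                   # shrinkage from the two FIR filters
pad_lo = (pad_total + up_factor) // 2                  # symmetric-interpretation shift
pad_hi = pad_total - pad_lo
print(pad_lo, pad_hi)                                  # [9 9] [8 8]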
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class SynthesisNetwork(torch.nn.Module):
- def __init__(self,
- # Intermediate latent (W) dimensionality.
- w_dim,
- img_resolution, # Output image resolution.
- img_channels, # Number of color channels.
- square, # False = rectangular output with width = img_resolution // 2.
- # Overall multiplier for the number of channels.
- channel_base=32768,
- # Maximum number of channels in any layer.
- channel_max=512,
- # Total number of layers, excluding Fourier features and ToRGB.
- num_layers=14,
- # Number of critically sampled layers at the end.
- num_critical=2,
- # Cutoff frequency of the first layer (f_{c,0}).
- first_cutoff=2,
- # Minimum stopband of the first layer (f_{t,0}).
- first_stopband=2**2.1,
- # Minimum stopband of the last layer, expressed relative to the cutoff.
- last_stopband_rel=2**0.3,
- # Number of additional pixels outside the image.
- margin_size=10,
- output_scale=0.25, # Scale factor for the output image.
- # Use FP16 for the N highest resolutions.
- num_fp16_res=4,
- # Arguments for SynthesisLayer.
- **layer_kwargs,
-
- ):
- super().__init__()
- self.w_dim = w_dim
- self.num_ws = num_layers + 2
- self.img_resolution = img_resolution
- self.img_channels = img_channels
- self.num_layers = num_layers
- self.num_critical = num_critical
- self.margin_size = margin_size
- self.output_scale = output_scale
- self.num_fp16_res = num_fp16_res
- self.square = square
-
- # Geometric progression of layer cutoffs and min. stopbands.
- last_cutoff = self.img_resolution / 2 # f_{c,N}
- last_stopband = last_cutoff * last_stopband_rel # f_{t,N}
- exponents = np.minimum(
- np.arange(self.num_layers + 1) / (self.num_layers - self.num_critical), 1)
- cutoffs = first_cutoff * (last_cutoff / first_cutoff) ** exponents # f_c[i]
- stopbands = first_stopband * (last_stopband / first_stopband) ** exponents # f_t[i]
-
- # Compute remaining layer parameters.
- sampling_rates = np.exp2(
- np.ceil(np.log2(np.minimum(stopbands * 2, self.img_resolution)))) # s[i]
- half_widths = np.maximum(
- stopbands, sampling_rates / 2) - cutoffs # f_h[i]
- sizes = sampling_rates + self.margin_size * 2
- sizes[-2:] = self.img_resolution
- channels = np.rint(np.minimum(
- (channel_base / 2) / cutoffs, channel_max))
- channels[-1] = self.img_channels
-
- # Construct layers.
- self.input = SynthesisInput(
- w_dim=self.w_dim, channels=int(channels[0]), size=int(sizes[0]),
- sampling_rate=sampling_rates[0], bandwidth=cutoffs[0], square=self.square)
- self.layer_names = []
- for idx in range(self.num_layers + 1):
- prev = max(idx - 1, 0)
- is_torgb = (idx == self.num_layers)
- is_critically_sampled = (
- idx >= self.num_layers - self.num_critical)
- use_fp16 = (sampling_rates[idx] * (2 ** self.num_fp16_res) > self.img_resolution)
- layer = SynthesisLayer(
- w_dim=self.w_dim, is_torgb=is_torgb, is_critically_sampled=is_critically_sampled, use_fp16=use_fp16,
- in_channels=int(channels[prev]), out_channels=int(channels[idx]),
- in_size=int(sizes[prev]), out_size=int(sizes[idx]),
- in_sampling_rate=int(sampling_rates[prev]), out_sampling_rate=int(sampling_rates[idx]),
- in_cutoff=cutoffs[prev], out_cutoff=cutoffs[idx],
- in_half_width=half_widths[prev], out_half_width=half_widths[idx],
- square=self.square,
- **layer_kwargs)
- name = f'L{idx}_{layer.out_size[0]}_{layer.out_channels}'
- setattr(self, name, layer)
- self.layer_names.append(name)
-
- def forward(self, ws, **layer_kwargs):
- misc.assert_shape(ws, [None, self.num_ws, self.w_dim])
- ws = ws.to(torch.float32).unbind(dim=1)
-
- # Execute layers.
- x = self.input(ws[0])
- for name, w in zip(self.layer_names, ws[1:]):
- x = getattr(self, name)(x, w, **layer_kwargs)
- if self.output_scale != 1:
- x = x * self.output_scale
-
- # Ensure correct shape and dtype.
- if self.square:
- misc.assert_shape(
- x, [None, self.img_channels, self.img_resolution, self.img_resolution])
- else:
- misc.assert_shape(
- x, [None, self.img_channels, self.img_resolution, self.img_resolution // 2])
- x = x.to(torch.float32)
- return x
-
- def extra_repr(self):
- return '\n'.join([
- f'w_dim={self.w_dim:d}, num_ws={self.num_ws:d},',
- f'img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d},',
- f'num_layers={self.num_layers:d}, num_critical={self.num_critical:d},',
- f'margin_size={self.margin_size:d}, num_fp16_res={self.num_fp16_res:d}'])
-
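The frequency schedule computed in SynthesisNetwork.__init__ can be inspected on its own; the sketch below reruns it with the default hyperparameters and an assumed img_resolution of 512 (an example value, not a claim about any trained model).

import numpy as np

num_layers, num_critical = 14, 2
img_resolution = 512                       # assumed example value
first_cutoff, first_stopband = 2, 2 ** 2.1
last_cutoff = img_resolution / 2           # f_{c,N}
last_stopband = last_cutoff * 2 ** 0.3     # f_{t,N}

exponents = np.minimum(np.arange(num_layers + 1) / (num_layers - num_critical), 1)
cutoffs = first_cutoff * (last_cutoff / first_cutoff) ** exponents
stopbands = first_stopband * (last_stopband / first_stopband) ** exponents
sampling_rates = np.exp2(np.ceil(np.log2(np.minimum(stopbands * 2, img_resolution))))
half_widths = np.maximum(stopbands, sampling_rates / 2) - cutoffs

# The last num_critical layers all reach exponent 1, i.e. cutoff == img_resolution / 2,
# which is what makes them "critically sampled".
print(cutoffs.round(1))
print(sampling_rates.astype(int))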
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class Generator(torch.nn.Module):
- def __init__(self,
- z_dim, # Input latent (Z) dimensionality.
- # Conditioning label (C) dimensionality.
- c_dim,
- # Intermediate latent (W) dimensionality.
- w_dim,
- img_resolution, # Output resolution.
- square, # False = rectangular output with width = img_resolution // 2.
- img_channels, # Number of output color channels.
- mapping_kwargs={}, # Arguments for MappingNetwork.
- **synthesis_kwargs, # Arguments for SynthesisNetwork.
- ):
- super().__init__()
- self.z_dim = z_dim
- self.c_dim = c_dim
- self.w_dim = w_dim
- self.img_resolution = img_resolution
- self.img_channels = img_channels
- self.square = square
- self.synthesis = SynthesisNetwork(w_dim=w_dim, img_resolution=img_resolution,
- img_channels=img_channels, square=self.square, **synthesis_kwargs)
- self.num_ws = self.synthesis.num_ws
- self.mapping = MappingNetwork(
- z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs)
-
- def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False, **synthesis_kwargs):
- ws = self.mapping(z, c, truncation_psi=truncation_psi,
- truncation_cutoff=truncation_cutoff, update_emas=update_emas)
- img = self.synthesis(ws, update_emas=update_emas, **synthesis_kwargs)
- return img
-
-# ----------------------------------------------------------------------------
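Putting the three deleted classes together, a rough usage sketch (all sizes are illustrative; square=False gives the 2:1 rectangular output this fork was modified for; whether c may be None depends on the MappingNetwork implementation, which is not shown in this diff):

import torch

G = Generator(z_dim=512, c_dim=0, w_dim=512, img_resolution=512,
              square=False, img_channels=3)
z = torch.randn(4, G.z_dim)
c = None                                   # no class conditioning when c_dim == 0
img = G(z, c, truncation_psi=0.7)          # -> [4, 3, 512, 256]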
diff --git a/spaces/EDGAhab/VITS-Aatrox-AI/attentions.py b/spaces/EDGAhab/VITS-Aatrox-AI/attentions.py
deleted file mode 100644
index e7bb6bdcedcca24fbf3e1f026ad9a3e37bb7f966..0000000000000000000000000000000000000000
--- a/spaces/EDGAhab/VITS-Aatrox-AI/attentions.py
+++ /dev/null
@@ -1,303 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
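A quick shape check for the Encoder above. The sizes are typical VITS-style values but are assumptions here; x_mask marks the valid frames of each batch item.

import torch

enc = Encoder(hidden_channels=192, filter_channels=768, n_heads=2,
              n_layers=6, kernel_size=3, p_dropout=0.1)
x = torch.randn(8, 192, 100)               # [batch, channels, time]
lengths = torch.randint(50, 101, (8,))
x_mask = (torch.arange(100)[None, :] < lengths[:, None]).unsqueeze(1).float()
y = enc(x, x_mask)                         # [8, 192, 100]; padded frames are zeroed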
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query / math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so that it adds up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # pad along the last dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length - 1)])
- # add zeros at the beginning that will skew the elements after the reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
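The pad-and-reshape trick in _relative_position_to_absolute_position is easier to see on a toy tensor. The snippet below is a standalone check (l = 3, and the entries are the relative-offset indices themselves) showing that row i ends up holding the offsets j - i aligned at absolute column j:

import torch
import torch.nn.functional as F

l = 3
rel = torch.arange(2 * l - 1).repeat(l, 1)[None, None].float()  # [1, 1, 3, 5]
x = F.pad(rel, [0, 1])                      # append one zero column -> [1, 1, 3, 6]
x_flat = F.pad(x.view(1, 1, l * 2 * l), [0, l - 1])
abs_pos = x_flat.view(1, 1, l + 1, 2 * l - 1)[:, :, :l, l - 1:]
print(abs_pos[0, 0])
# tensor([[2., 3., 4.],
#         [1., 2., 3.],
#         [0., 1., 2.]])   # entry [i, j] == (j - i) + (l - 1)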
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/EDGAhab/VITS-Aatrox-AI/monotonic_align/core.c b/spaces/EDGAhab/VITS-Aatrox-AI/monotonic_align/core.c
deleted file mode 100644
index 5631d20a9a00db29e143a6e8e4e5c378d6bb850a..0000000000000000000000000000000000000000
--- a/spaces/EDGAhab/VITS-Aatrox-AI/monotonic_align/core.c
+++ /dev/null
@@ -1,21299 +0,0 @@
-/* Generated by Cython 0.29.21 */
-
-/* BEGIN: Cython Metadata
-{
- "distutils": {
- "name": "monotonic_align.core",
- "sources": [
- "core.pyx"
- ]
- },
- "module_name": "monotonic_align.core"
-}
-END: Cython Metadata */
-
-#define PY_SSIZE_T_CLEAN
-#include "Python.h"
-#ifndef Py_PYTHON_H
- #error Python headers needed to compile C extensions, please install development version of Python.
-#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000)
- #error Cython requires Python 2.6+ or Python 3.3+.
-#else
-#define CYTHON_ABI "0_29_21"
-#define CYTHON_HEX_VERSION 0x001D15F0
-#define CYTHON_FUTURE_DIVISION 0
-#include <stddef.h>
-#ifndef offsetof
- #define offsetof(type, member) ( (size_t) & ((type*)0) -> member )
-#endif
-#if !defined(WIN32) && !defined(MS_WINDOWS)
- #ifndef __stdcall
- #define __stdcall
- #endif
- #ifndef __cdecl
- #define __cdecl
- #endif
- #ifndef __fastcall
- #define __fastcall
- #endif
-#endif
-#ifndef DL_IMPORT
- #define DL_IMPORT(t) t
-#endif
-#ifndef DL_EXPORT
- #define DL_EXPORT(t) t
-#endif
-#define __PYX_COMMA ,
-#ifndef HAVE_LONG_LONG
- #if PY_VERSION_HEX >= 0x02070000
- #define HAVE_LONG_LONG
- #endif
-#endif
-#ifndef PY_LONG_LONG
- #define PY_LONG_LONG LONG_LONG
-#endif
-#ifndef Py_HUGE_VAL
- #define Py_HUGE_VAL HUGE_VAL
-#endif
-#ifdef PYPY_VERSION
- #define CYTHON_COMPILING_IN_PYPY 1
- #define CYTHON_COMPILING_IN_PYSTON 0
- #define CYTHON_COMPILING_IN_CPYTHON 0
- #undef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 0
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #if PY_VERSION_HEX < 0x03050000
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #elif !defined(CYTHON_USE_ASYNC_SLOTS)
- #define CYTHON_USE_ASYNC_SLOTS 1
- #endif
- #undef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 0
- #undef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 0
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #undef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 1
- #undef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 0
- #undef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 0
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #undef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL 0
- #undef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT 0
- #undef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE 0
- #undef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS 0
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
-#elif defined(PYSTON_VERSION)
- #define CYTHON_COMPILING_IN_PYPY 0
- #define CYTHON_COMPILING_IN_PYSTON 1
- #define CYTHON_COMPILING_IN_CPYTHON 0
- #ifndef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 1
- #endif
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #undef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 0
- #ifndef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 1
- #endif
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #ifndef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 0
- #endif
- #ifndef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 1
- #endif
- #ifndef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 1
- #endif
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #undef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL 0
- #undef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT 0
- #undef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE 0
- #undef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS 0
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
-#else
- #define CYTHON_COMPILING_IN_PYPY 0
- #define CYTHON_COMPILING_IN_PYSTON 0
- #define CYTHON_COMPILING_IN_CPYTHON 1
- #ifndef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 1
- #endif
- #if PY_VERSION_HEX < 0x02070000
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #elif !defined(CYTHON_USE_PYTYPE_LOOKUP)
- #define CYTHON_USE_PYTYPE_LOOKUP 1
- #endif
- #if PY_MAJOR_VERSION < 3
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #elif !defined(CYTHON_USE_ASYNC_SLOTS)
- #define CYTHON_USE_ASYNC_SLOTS 1
- #endif
- #if PY_VERSION_HEX < 0x02070000
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #elif !defined(CYTHON_USE_PYLONG_INTERNALS)
- #define CYTHON_USE_PYLONG_INTERNALS 1
- #endif
- #ifndef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 1
- #endif
- #ifndef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 1
- #endif
- #if PY_VERSION_HEX < 0x030300F0
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #elif !defined(CYTHON_USE_UNICODE_WRITER)
- #define CYTHON_USE_UNICODE_WRITER 1
- #endif
- #ifndef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 0
- #endif
- #ifndef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 1
- #endif
- #ifndef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 1
- #endif
- #ifndef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 1
- #endif
- #ifndef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL 1
- #endif
- #ifndef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000)
- #endif
- #ifndef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1)
- #endif
- #ifndef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1)
- #endif
- #ifndef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3)
- #endif
-#endif
-#if !defined(CYTHON_FAST_PYCCALL)
-#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1)
-#endif
-#if CYTHON_USE_PYLONG_INTERNALS
- #include "longintrepr.h"
- #undef SHIFT
- #undef BASE
- #undef MASK
- #ifdef SIZEOF_VOID_P
- enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) };
- #endif
-#endif
-#ifndef __has_attribute
- #define __has_attribute(x) 0
-#endif
-#ifndef __has_cpp_attribute
- #define __has_cpp_attribute(x) 0
-#endif
-#ifndef CYTHON_RESTRICT
- #if defined(__GNUC__)
- #define CYTHON_RESTRICT __restrict__
- #elif defined(_MSC_VER) && _MSC_VER >= 1400
- #define CYTHON_RESTRICT __restrict
- #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define CYTHON_RESTRICT restrict
- #else
- #define CYTHON_RESTRICT
- #endif
-#endif
-#ifndef CYTHON_UNUSED
-# if defined(__GNUC__)
-# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))
-# define CYTHON_UNUSED __attribute__ ((__unused__))
-# else
-# define CYTHON_UNUSED
-# endif
-# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))
-# define CYTHON_UNUSED __attribute__ ((__unused__))
-# else
-# define CYTHON_UNUSED
-# endif
-#endif
-#ifndef CYTHON_MAYBE_UNUSED_VAR
-# if defined(__cplusplus)
- template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }
-# else
-# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)
-# endif
-#endif
-#ifndef CYTHON_NCP_UNUSED
-# if CYTHON_COMPILING_IN_CPYTHON
-# define CYTHON_NCP_UNUSED
-# else
-# define CYTHON_NCP_UNUSED CYTHON_UNUSED
-# endif
-#endif
-#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)
-#ifdef _MSC_VER
- #ifndef _MSC_STDINT_H_
- #if _MSC_VER < 1300
- typedef unsigned char uint8_t;
- typedef unsigned int uint32_t;
- #else
- typedef unsigned __int8 uint8_t;
- typedef unsigned __int32 uint32_t;
- #endif
- #endif
-#else
- #include <stdint.h>
-#endif
-#ifndef CYTHON_FALLTHROUGH
- #if defined(__cplusplus) && __cplusplus >= 201103L
- #if __has_cpp_attribute(fallthrough)
- #define CYTHON_FALLTHROUGH [[fallthrough]]
- #elif __has_cpp_attribute(clang::fallthrough)
- #define CYTHON_FALLTHROUGH [[clang::fallthrough]]
- #elif __has_cpp_attribute(gnu::fallthrough)
- #define CYTHON_FALLTHROUGH [[gnu::fallthrough]]
- #endif
- #endif
- #ifndef CYTHON_FALLTHROUGH
- #if __has_attribute(fallthrough)
- #define CYTHON_FALLTHROUGH __attribute__((fallthrough))
- #else
- #define CYTHON_FALLTHROUGH
- #endif
- #endif
- #if defined(__clang__ ) && defined(__apple_build_version__)
- #if __apple_build_version__ < 7000000
- #undef CYTHON_FALLTHROUGH
- #define CYTHON_FALLTHROUGH
- #endif
- #endif
-#endif
-
-#ifndef CYTHON_INLINE
- #if defined(__clang__)
- #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))
- #elif defined(__GNUC__)
- #define CYTHON_INLINE __inline__
- #elif defined(_MSC_VER)
- #define CYTHON_INLINE __inline
- #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define CYTHON_INLINE inline
- #else
- #define CYTHON_INLINE
- #endif
-#endif
-
-#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag)
- #define Py_OptimizeFlag 0
-#endif
-#define __PYX_BUILD_PY_SSIZE_T "n"
-#define CYTHON_FORMAT_SSIZE_T "z"
-#if PY_MAJOR_VERSION < 3
- #define __Pyx_BUILTIN_MODULE_NAME "__builtin__"
- #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
- PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
- #define __Pyx_DefaultClassType PyClass_Type
-#else
- #define __Pyx_BUILTIN_MODULE_NAME "builtins"
-#if PY_VERSION_HEX >= 0x030800A4 && PY_VERSION_HEX < 0x030800B2
- #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
- PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
-#else
- #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
- PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
-#endif
- #define __Pyx_DefaultClassType PyType_Type
-#endif
-#ifndef Py_TPFLAGS_CHECKTYPES
- #define Py_TPFLAGS_CHECKTYPES 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_INDEX
- #define Py_TPFLAGS_HAVE_INDEX 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_NEWBUFFER
- #define Py_TPFLAGS_HAVE_NEWBUFFER 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_FINALIZE
- #define Py_TPFLAGS_HAVE_FINALIZE 0
-#endif
-#ifndef METH_STACKLESS
- #define METH_STACKLESS 0
-#endif
-#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL)
- #ifndef METH_FASTCALL
- #define METH_FASTCALL 0x80
- #endif
- typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs);
- typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args,
- Py_ssize_t nargs, PyObject *kwnames);
-#else
- #define __Pyx_PyCFunctionFast _PyCFunctionFast
- #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords
-#endif
-#if CYTHON_FAST_PYCCALL
-#define __Pyx_PyFastCFunction_Check(func)\
- ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS)))))
-#else
-#define __Pyx_PyFastCFunction_Check(func) 0
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)
- #define PyObject_Malloc(s) PyMem_Malloc(s)
- #define PyObject_Free(p) PyMem_Free(p)
- #define PyObject_Realloc(p) PyMem_Realloc(p)
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1
- #define PyMem_RawMalloc(n) PyMem_Malloc(n)
- #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n)
- #define PyMem_RawFree(p) PyMem_Free(p)
-#endif
-#if CYTHON_COMPILING_IN_PYSTON
- #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co)
- #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)
-#else
- #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0)
- #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
-#endif
-#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000
- #define __Pyx_PyThreadState_Current PyThreadState_GET()
-#elif PY_VERSION_HEX >= 0x03060000
- #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet()
-#elif PY_VERSION_HEX >= 0x03000000
- #define __Pyx_PyThreadState_Current PyThreadState_GET()
-#else
- #define __Pyx_PyThreadState_Current _PyThreadState_Current
-#endif
-#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT)
-#include "pythread.h"
-#define Py_tss_NEEDS_INIT 0
-typedef int Py_tss_t;
-static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) {
- *key = PyThread_create_key();
- return 0;
-}
-static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) {
- Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t));
- *key = Py_tss_NEEDS_INIT;
- return key;
-}
-static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) {
- PyObject_Free(key);
-}
-static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) {
- return *key != Py_tss_NEEDS_INIT;
-}
-static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) {
- PyThread_delete_key(*key);
- *key = Py_tss_NEEDS_INIT;
-}
-static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) {
- return PyThread_set_key_value(*key, value);
-}
-static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) {
- return PyThread_get_key_value(*key);
-}
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized)
-#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n))
-#else
-#define __Pyx_PyDict_NewPresized(n) PyDict_New()
-#endif
-#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION
- #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y)
- #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y)
-#else
- #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y)
- #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y)
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS
-#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash)
-#else
-#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name)
-#endif
-#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND)
- #define CYTHON_PEP393_ENABLED 1
- #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\
- 0 : _PyUnicode_Ready((PyObject *)(op)))
- #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u)
- #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i)
- #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u)
- #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u)
- #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u)
- #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i)
- #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch)
- #if defined(PyUnicode_IS_READY) && defined(PyUnicode_GET_SIZE)
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u)))
- #else
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u))
- #endif
-#else
- #define CYTHON_PEP393_ENABLED 0
- #define PyUnicode_1BYTE_KIND 1
- #define PyUnicode_2BYTE_KIND 2
- #define PyUnicode_4BYTE_KIND 4
- #define __Pyx_PyUnicode_READY(op) (0)
- #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u)
- #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i]))
- #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111)
- #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE))
- #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u))
- #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i]))
- #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch)
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u))
-#endif
-#if CYTHON_COMPILING_IN_PYPY
- #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b)
- #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b)
-#else
- #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b)
- #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\
- PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b))
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains)
- #define PyUnicode_Contains(u, s) PySequence_Contains(u, s)
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check)
- #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type)
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format)
- #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt)
-#endif
-#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))
-#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b)
-#else
- #define __Pyx_PyString_Format(a, b) PyString_Format(a, b)
-#endif
-#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII)
- #define PyObject_ASCII(o) PyObject_Repr(o)
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define PyBaseString_Type PyUnicode_Type
- #define PyStringObject PyUnicodeObject
- #define PyString_Type PyUnicode_Type
- #define PyString_Check PyUnicode_Check
- #define PyString_CheckExact PyUnicode_CheckExact
-#ifndef PyObject_Unicode
- #define PyObject_Unicode PyObject_Str
-#endif
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj)
- #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj)
-#else
- #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj))
- #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj))
-#endif
-#ifndef PySet_CheckExact
- #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type)
-#endif
-#if PY_VERSION_HEX >= 0x030900A4
- #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt)
- #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size)
-#else
- #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt)
- #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size)
-#endif
-#if CYTHON_ASSUME_SAFE_MACROS
- #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq)
-#else
- #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq)
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define PyIntObject PyLongObject
- #define PyInt_Type PyLong_Type
- #define PyInt_Check(op) PyLong_Check(op)
- #define PyInt_CheckExact(op) PyLong_CheckExact(op)
- #define PyInt_FromString PyLong_FromString
- #define PyInt_FromUnicode PyLong_FromUnicode
- #define PyInt_FromLong PyLong_FromLong
- #define PyInt_FromSize_t PyLong_FromSize_t
- #define PyInt_FromSsize_t PyLong_FromSsize_t
- #define PyInt_AsLong PyLong_AsLong
- #define PyInt_AS_LONG PyLong_AS_LONG
- #define PyInt_AsSsize_t PyLong_AsSsize_t
- #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask
- #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask
- #define PyNumber_Int PyNumber_Long
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define PyBoolObject PyLongObject
-#endif
-#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY
- #ifndef PyUnicode_InternFromString
- #define PyUnicode_InternFromString(s) PyUnicode_FromString(s)
- #endif
-#endif
-#if PY_VERSION_HEX < 0x030200A4
- typedef long Py_hash_t;
- #define __Pyx_PyInt_FromHash_t PyInt_FromLong
- #define __Pyx_PyInt_AsHash_t PyInt_AsLong
-#else
- #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t
- #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyMethod_New(func, self, klass) ((self) ? ((void)(klass), PyMethod_New(func, self)) : __Pyx_NewRef(func))
-#else
- #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)
-#endif
-#if CYTHON_USE_ASYNC_SLOTS
- #if PY_VERSION_HEX >= 0x030500B1
- #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods
- #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)
- #else
- #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))
- #endif
-#else
- #define __Pyx_PyType_AsAsync(obj) NULL
-#endif
-#ifndef __Pyx_PyAsyncMethodsStruct
- typedef struct {
- unaryfunc am_await;
- unaryfunc am_aiter;
- unaryfunc am_anext;
- } __Pyx_PyAsyncMethodsStruct;
-#endif
-
-#if defined(WIN32) || defined(MS_WINDOWS)
- #define _USE_MATH_DEFINES
-#endif
-#include <math.h>
-#ifdef NAN
-#define __PYX_NAN() ((float) NAN)
-#else
-static CYTHON_INLINE float __PYX_NAN() {
- float value;
- memset(&value, 0xFF, sizeof(value));
- return value;
-}
-#endif
-#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)
-#define __Pyx_truncl trunc
-#else
-#define __Pyx_truncl truncl
-#endif
-
-#define __PYX_MARK_ERR_POS(f_index, lineno) \
- { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; }
-#define __PYX_ERR(f_index, lineno, Ln_error) \
- { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; }
-
-#ifndef __PYX_EXTERN_C
- #ifdef __cplusplus
- #define __PYX_EXTERN_C extern "C"
- #else
- #define __PYX_EXTERN_C extern
- #endif
-#endif
-
-#define __PYX_HAVE__monotonic_align__core
-#define __PYX_HAVE_API__monotonic_align__core
-/* Early includes */
-#include "pythread.h"
-#include <string.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include "pystate.h"
-#ifdef _OPENMP
-#include <omp.h>
-#endif /* _OPENMP */
-
-#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS)
-#define CYTHON_WITHOUT_ASSERTIONS
-#endif
-
-typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding;
- const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry;
-
-#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0
-#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0
-#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8)
-#define __PYX_DEFAULT_STRING_ENCODING ""
-#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString
-#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
-#define __Pyx_uchar_cast(c) ((unsigned char)c)
-#define __Pyx_long_cast(x) ((long)x)
-#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\
- (sizeof(type) < sizeof(Py_ssize_t)) ||\
- (sizeof(type) > sizeof(Py_ssize_t) &&\
- likely(v < (type)PY_SSIZE_T_MAX ||\
- v == (type)PY_SSIZE_T_MAX) &&\
- (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\
- v == (type)PY_SSIZE_T_MIN))) ||\
- (sizeof(type) == sizeof(Py_ssize_t) &&\
- (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\
- v == (type)PY_SSIZE_T_MAX))) )
-static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) {
- return (size_t) i < (size_t) limit;
-}
-#if defined (__cplusplus) && __cplusplus >= 201103L
- #include <cstdlib>
- #define __Pyx_sst_abs(value) std::abs(value)
-#elif SIZEOF_INT >= SIZEOF_SIZE_T
- #define __Pyx_sst_abs(value) abs(value)
-#elif SIZEOF_LONG >= SIZEOF_SIZE_T
- #define __Pyx_sst_abs(value) labs(value)
-#elif defined (_MSC_VER)
- #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value))
-#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define __Pyx_sst_abs(value) llabs(value)
-#elif defined (__GNUC__)
- #define __Pyx_sst_abs(value) __builtin_llabs(value)
-#else
- #define __Pyx_sst_abs(value) ((value<0) ? -value : value)
-#endif
-static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*);
-static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length);
-#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s))
-#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l)
-#define __Pyx_PyBytes_FromString PyBytes_FromString
-#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize
-static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*);
-#if PY_MAJOR_VERSION < 3
- #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString
- #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
-#else
- #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString
- #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize
-#endif
-#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s)
-#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s)
-#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s)
-#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s)
-#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s)
-static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {
- const Py_UNICODE *u_end = u;
- while (*u_end++) ;
- return (size_t)(u_end - u - 1);
-}
-#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u))
-#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode
-#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode
-#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj)
-#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None)
-static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b);
-static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*);
-static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*);
-static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x);
-#define __Pyx_PySequence_Tuple(obj)\
- (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj))
-static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*);
-static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t);
-#if CYTHON_ASSUME_SAFE_MACROS
-#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x))
-#else
-#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x)
-#endif
-#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x))
-#if PY_MAJOR_VERSION >= 3
-#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x))
-#else
-#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x))
-#endif
-#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x))
-#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
-static int __Pyx_sys_getdefaultencoding_not_ascii;
-static int __Pyx_init_sys_getdefaultencoding_params(void) {
- PyObject* sys;
- PyObject* default_encoding = NULL;
- PyObject* ascii_chars_u = NULL;
- PyObject* ascii_chars_b = NULL;
- const char* default_encoding_c;
- sys = PyImport_ImportModule("sys");
- if (!sys) goto bad;
- default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL);
- Py_DECREF(sys);
- if (!default_encoding) goto bad;
- default_encoding_c = PyBytes_AsString(default_encoding);
- if (!default_encoding_c) goto bad;
- if (strcmp(default_encoding_c, "ascii") == 0) {
- __Pyx_sys_getdefaultencoding_not_ascii = 0;
- } else {
- char ascii_chars[128];
- int c;
- for (c = 0; c < 128; c++) {
- ascii_chars[c] = c;
- }
- __Pyx_sys_getdefaultencoding_not_ascii = 1;
- ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL);
- if (!ascii_chars_u) goto bad;
- ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL);
- if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) {
- PyErr_Format(
- PyExc_ValueError,
- "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.",
- default_encoding_c);
- goto bad;
- }
- Py_DECREF(ascii_chars_u);
- Py_DECREF(ascii_chars_b);
- }
- Py_DECREF(default_encoding);
- return 0;
-bad:
- Py_XDECREF(default_encoding);
- Py_XDECREF(ascii_chars_u);
- Py_XDECREF(ascii_chars_b);
- return -1;
-}
-#endif
-#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3
-#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL)
-#else
-#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL)
-#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT
-static char* __PYX_DEFAULT_STRING_ENCODING;
-static int __Pyx_init_sys_getdefaultencoding_params(void) {
- PyObject* sys;
- PyObject* default_encoding = NULL;
- char* default_encoding_c;
- sys = PyImport_ImportModule("sys");
- if (!sys) goto bad;
- default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL);
- Py_DECREF(sys);
- if (!default_encoding) goto bad;
- default_encoding_c = PyBytes_AsString(default_encoding);
- if (!default_encoding_c) goto bad;
- __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1);
- if (!__PYX_DEFAULT_STRING_ENCODING) goto bad;
- strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c);
- Py_DECREF(default_encoding);
- return 0;
-bad:
- Py_XDECREF(default_encoding);
- return -1;
-}
-#endif
-#endif
-
-
-/* Test for GCC > 2.95 */
-#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))
- #define likely(x) __builtin_expect(!!(x), 1)
- #define unlikely(x) __builtin_expect(!!(x), 0)
-#else /* !__GNUC__ or GCC < 2.95 */
- #define likely(x) (x)
- #define unlikely(x) (x)
-#endif /* __GNUC__ */
-static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; }
-
-static PyObject *__pyx_m = NULL;
-static PyObject *__pyx_d;
-static PyObject *__pyx_b;
-static PyObject *__pyx_cython_runtime = NULL;
-static PyObject *__pyx_empty_tuple;
-static PyObject *__pyx_empty_bytes;
-static PyObject *__pyx_empty_unicode;
-static int __pyx_lineno;
-static int __pyx_clineno = 0;
-static const char * __pyx_cfilenm= __FILE__;
-static const char *__pyx_filename;
-
-
-static const char *__pyx_f[] = {
- "core.pyx",
- "stringsource",
-};
-/* NoFastGil.proto */
-#define __Pyx_PyGILState_Ensure PyGILState_Ensure
-#define __Pyx_PyGILState_Release PyGILState_Release
-#define __Pyx_FastGIL_Remember()
-#define __Pyx_FastGIL_Forget()
-#define __Pyx_FastGilFuncInit()
-
-/* MemviewSliceStruct.proto */
-struct __pyx_memoryview_obj;
-typedef struct {
- struct __pyx_memoryview_obj *memview;
- char *data;
- Py_ssize_t shape[8];
- Py_ssize_t strides[8];
- Py_ssize_t suboffsets[8];
-} __Pyx_memviewslice;
-#define __Pyx_MemoryView_Len(m) (m.shape[0])
-
-/* Atomics.proto */
-#include <pythread.h>
-#ifndef CYTHON_ATOMICS
- #define CYTHON_ATOMICS 1
-#endif
-#define __pyx_atomic_int_type int
-#if CYTHON_ATOMICS && __GNUC__ >= 4 && (__GNUC_MINOR__ > 1 ||\
- (__GNUC_MINOR__ == 1 && __GNUC_PATCHLEVEL >= 2)) &&\
- !defined(__i386__)
- #define __pyx_atomic_incr_aligned(value, lock) __sync_fetch_and_add(value, 1)
- #define __pyx_atomic_decr_aligned(value, lock) __sync_fetch_and_sub(value, 1)
- #ifdef __PYX_DEBUG_ATOMICS
- #warning "Using GNU atomics"
- #endif
-#elif CYTHON_ATOMICS && defined(_MSC_VER) && 0
- #include <intrin.h>
- #undef __pyx_atomic_int_type
- #define __pyx_atomic_int_type LONG
- #define __pyx_atomic_incr_aligned(value, lock) InterlockedIncrement(value)
- #define __pyx_atomic_decr_aligned(value, lock) InterlockedDecrement(value)
- #ifdef __PYX_DEBUG_ATOMICS
- #pragma message ("Using MSVC atomics")
- #endif
-#elif CYTHON_ATOMICS && (defined(__ICC) || defined(__INTEL_COMPILER)) && 0
- #define __pyx_atomic_incr_aligned(value, lock) _InterlockedIncrement(value)
- #define __pyx_atomic_decr_aligned(value, lock) _InterlockedDecrement(value)
- #ifdef __PYX_DEBUG_ATOMICS
- #warning "Using Intel atomics"
- #endif
-#else
- #undef CYTHON_ATOMICS
- #define CYTHON_ATOMICS 0
- #ifdef __PYX_DEBUG_ATOMICS
- #warning "Not using atomics"
- #endif
-#endif
-typedef volatile __pyx_atomic_int_type __pyx_atomic_int;
-#if CYTHON_ATOMICS
- #define __pyx_add_acquisition_count(memview)\
- __pyx_atomic_incr_aligned(__pyx_get_slice_count_pointer(memview), memview->lock)
- #define __pyx_sub_acquisition_count(memview)\
- __pyx_atomic_decr_aligned(__pyx_get_slice_count_pointer(memview), memview->lock)
-#else
- #define __pyx_add_acquisition_count(memview)\
- __pyx_add_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock)
- #define __pyx_sub_acquisition_count(memview)\
- __pyx_sub_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock)
-#endif
-
-/* ForceInitThreads.proto */
-#ifndef __PYX_FORCE_INIT_THREADS
- #define __PYX_FORCE_INIT_THREADS 0
-#endif
-
-/* BufferFormatStructs.proto */
-#define IS_UNSIGNED(type) (((type) -1) > 0)
-struct __Pyx_StructField_;
-#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0)
-typedef struct {
- const char* name;
- struct __Pyx_StructField_* fields;
- size_t size;
- size_t arraysize[8];
- int ndim;
- char typegroup;
- char is_unsigned;
- int flags;
-} __Pyx_TypeInfo;
-typedef struct __Pyx_StructField_ {
- __Pyx_TypeInfo* type;
- const char* name;
- size_t offset;
-} __Pyx_StructField;
-typedef struct {
- __Pyx_StructField* field;
- size_t parent_offset;
-} __Pyx_BufFmt_StackElem;
-typedef struct {
- __Pyx_StructField root;
- __Pyx_BufFmt_StackElem* head;
- size_t fmt_offset;
- size_t new_count, enc_count;
- size_t struct_alignment;
- int is_complex;
- char enc_type;
- char new_packmode;
- char enc_packmode;
- char is_valid_array;
-} __Pyx_BufFmt_Context;
-
-
-/*--- Type declarations ---*/
-struct __pyx_array_obj;
-struct __pyx_MemviewEnum_obj;
-struct __pyx_memoryview_obj;
-struct __pyx_memoryviewslice_obj;
-struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each;
-
-/* "monotonic_align/core.pyx":7
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<<
- * cdef int x
- * cdef int y
- */
-struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each {
- int __pyx_n;
- float max_neg_val;
-};
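For readers who do not want to wade through the generated C: the banner comments embed the original core.pyx source, and the kernel whose argument struct appears above computes a best monotonic alignment path by dynamic programming. The NumPy rendering below follows the reference VITS core.pyx logic; it is a readable reconstruction, not the compiled code (value is modified in place, exactly like the memoryview version).

import numpy as np

def maximum_path_each(path, value, t_y, t_x, max_neg_val=-1e9):
    # Forward pass: value[y, x] becomes the best cumulative score of any
    # monotonic path from (0, 0) to (y, x).
    for y in range(t_y):
        for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
            v_cur = max_neg_val if x == y else value[y - 1, x]
            if x == 0:
                v_prev = 0.0 if y == 0 else max_neg_val
            else:
                v_prev = value[y - 1, x - 1]
            value[y, x] += max(v_prev, v_cur)
    # Backward pass: trace the argmax path into the 0/1 matrix `path`.
    index = t_x - 1
    for y in range(t_y - 1, -1, -1):
        path[y, index] = 1
        if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]):
            index -= 1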
-
-/* "View.MemoryView":105
- *
- * @cname("__pyx_array")
- * cdef class array: # <<<<<<<<<<<<<<
- *
- * cdef:
- */
-struct __pyx_array_obj {
- PyObject_HEAD
- struct __pyx_vtabstruct_array *__pyx_vtab;
- char *data;
- Py_ssize_t len;
- char *format;
- int ndim;
- Py_ssize_t *_shape;
- Py_ssize_t *_strides;
- Py_ssize_t itemsize;
- PyObject *mode;
- PyObject *_format;
- void (*callback_free_data)(void *);
- int free_data;
- int dtype_is_object;
-};
-
-
-/* "View.MemoryView":279
- *
- * @cname('__pyx_MemviewEnum')
- * cdef class Enum(object): # <<<<<<<<<<<<<<
- * cdef object name
- * def __init__(self, name):
- */
-struct __pyx_MemviewEnum_obj {
- PyObject_HEAD
- PyObject *name;
-};
-
-
-/* "View.MemoryView":330
- *
- * @cname('__pyx_memoryview')
- * cdef class memoryview(object): # <<<<<<<<<<<<<<
- *
- * cdef object obj
- */
-struct __pyx_memoryview_obj {
- PyObject_HEAD
- struct __pyx_vtabstruct_memoryview *__pyx_vtab;
- PyObject *obj;
- PyObject *_size;
- PyObject *_array_interface;
- PyThread_type_lock lock;
- __pyx_atomic_int acquisition_count[2];
- __pyx_atomic_int *acquisition_count_aligned_p;
- Py_buffer view;
- int flags;
- int dtype_is_object;
- __Pyx_TypeInfo *typeinfo;
-};
-
-
-/* "View.MemoryView":965
- *
- * @cname('__pyx_memoryviewslice')
- * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<<
- * "Internal class for passing memoryview slices to Python"
- *
- */
-struct __pyx_memoryviewslice_obj {
- struct __pyx_memoryview_obj __pyx_base;
- __Pyx_memviewslice from_slice;
- PyObject *from_object;
- PyObject *(*to_object_func)(char *);
- int (*to_dtype_func)(char *, PyObject *);
-};
-
-
-
-/* "View.MemoryView":105
- *
- * @cname("__pyx_array")
- * cdef class array: # <<<<<<<<<<<<<<
- *
- * cdef:
- */
-
-struct __pyx_vtabstruct_array {
- PyObject *(*get_memview)(struct __pyx_array_obj *);
-};
-static struct __pyx_vtabstruct_array *__pyx_vtabptr_array;
-
-
-/* "View.MemoryView":330
- *
- * @cname('__pyx_memoryview')
- * cdef class memoryview(object): # <<<<<<<<<<<<<<
- *
- * cdef object obj
- */
-
-struct __pyx_vtabstruct_memoryview {
- char *(*get_item_pointer)(struct __pyx_memoryview_obj *, PyObject *);
- PyObject *(*is_slice)(struct __pyx_memoryview_obj *, PyObject *);
- PyObject *(*setitem_slice_assignment)(struct __pyx_memoryview_obj *, PyObject *, PyObject *);
- PyObject *(*setitem_slice_assign_scalar)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *);
- PyObject *(*setitem_indexed)(struct __pyx_memoryview_obj *, PyObject *, PyObject *);
- PyObject *(*convert_item_to_object)(struct __pyx_memoryview_obj *, char *);
- PyObject *(*assign_item_from_object)(struct __pyx_memoryview_obj *, char *, PyObject *);
-};
-static struct __pyx_vtabstruct_memoryview *__pyx_vtabptr_memoryview;
-
-
-/* "View.MemoryView":965
- *
- * @cname('__pyx_memoryviewslice')
- * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<<
- * "Internal class for passing memoryview slices to Python"
- *
- */
-
-struct __pyx_vtabstruct__memoryviewslice {
- struct __pyx_vtabstruct_memoryview __pyx_base;
-};
-static struct __pyx_vtabstruct__memoryviewslice *__pyx_vtabptr__memoryviewslice;
-
-/* --- Runtime support code (head) --- */
-/* Refnanny.proto */
-#ifndef CYTHON_REFNANNY
- #define CYTHON_REFNANNY 0
-#endif
-#if CYTHON_REFNANNY
- typedef struct {
- void (*INCREF)(void*, PyObject*, int);
- void (*DECREF)(void*, PyObject*, int);
- void (*GOTREF)(void*, PyObject*, int);
- void (*GIVEREF)(void*, PyObject*, int);
- void* (*SetupContext)(const char*, int, const char*);
- void (*FinishContext)(void**);
- } __Pyx_RefNannyAPIStruct;
- static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL;
- static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname);
- #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL;
-#ifdef WITH_THREAD
- #define __Pyx_RefNannySetupContext(name, acquire_gil)\
- if (acquire_gil) {\
- PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\
- PyGILState_Release(__pyx_gilstate_save);\
- } else {\
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\
- }
-#else
- #define __Pyx_RefNannySetupContext(name, acquire_gil)\
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__)
-#endif
- #define __Pyx_RefNannyFinishContext()\
- __Pyx_RefNanny->FinishContext(&__pyx_refnanny)
- #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0)
- #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0)
- #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0)
- #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0)
-#else
- #define __Pyx_RefNannyDeclarations
- #define __Pyx_RefNannySetupContext(name, acquire_gil)
- #define __Pyx_RefNannyFinishContext()
- #define __Pyx_INCREF(r) Py_INCREF(r)
- #define __Pyx_DECREF(r) Py_DECREF(r)
- #define __Pyx_GOTREF(r)
- #define __Pyx_GIVEREF(r)
- #define __Pyx_XINCREF(r) Py_XINCREF(r)
- #define __Pyx_XDECREF(r) Py_XDECREF(r)
- #define __Pyx_XGOTREF(r)
- #define __Pyx_XGIVEREF(r)
-#endif
-#define __Pyx_XDECREF_SET(r, v) do {\
- PyObject *tmp = (PyObject *) r;\
- r = v; __Pyx_XDECREF(tmp);\
- } while (0)
-#define __Pyx_DECREF_SET(r, v) do {\
- PyObject *tmp = (PyObject *) r;\
- r = v; __Pyx_DECREF(tmp);\
- } while (0)
-#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0)
-#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0)
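-/* RefNanny is Cython's opt-in reference-count tracer: when built with
- * CYTHON_REFNANNY, every INCREF/DECREF emitted by the generated code is
- * routed through the API struct above so reference leaks can be reported
- * per function; in normal builds the macros collapse to plain
- * Py_INCREF/Py_DECREF or to no-ops. */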
-
-/* PyObjectGetAttrStr.proto */
-#if CYTHON_USE_TYPE_SLOTS
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name);
-#else
-#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n)
-#endif
-
-/* GetBuiltinName.proto */
-static PyObject *__Pyx_GetBuiltinName(PyObject *name);
-
-/* MemviewSliceInit.proto */
-#define __Pyx_BUF_MAX_NDIMS %(BUF_MAX_NDIMS)d
-#define __Pyx_MEMVIEW_DIRECT 1
-#define __Pyx_MEMVIEW_PTR 2
-#define __Pyx_MEMVIEW_FULL 4
-#define __Pyx_MEMVIEW_CONTIG 8
-#define __Pyx_MEMVIEW_STRIDED 16
-#define __Pyx_MEMVIEW_FOLLOW 32
-#define __Pyx_IS_C_CONTIG 1
-#define __Pyx_IS_F_CONTIG 2
-static int __Pyx_init_memviewslice(
- struct __pyx_memoryview_obj *memview,
- int ndim,
- __Pyx_memviewslice *memviewslice,
- int memview_is_new_reference);
-static CYTHON_INLINE int __pyx_add_acquisition_count_locked(
- __pyx_atomic_int *acquisition_count, PyThread_type_lock lock);
-static CYTHON_INLINE int __pyx_sub_acquisition_count_locked(
- __pyx_atomic_int *acquisition_count, PyThread_type_lock lock);
-#define __pyx_get_slice_count_pointer(memview) (memview->acquisition_count_aligned_p)
-#define __pyx_get_slice_count(memview) (*__pyx_get_slice_count_pointer(memview))
-#define __PYX_INC_MEMVIEW(slice, have_gil) __Pyx_INC_MEMVIEW(slice, have_gil, __LINE__)
-#define __PYX_XDEC_MEMVIEW(slice, have_gil) __Pyx_XDEC_MEMVIEW(slice, have_gil, __LINE__)
-static CYTHON_INLINE void __Pyx_INC_MEMVIEW(__Pyx_memviewslice *, int, int);
-static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *, int, int);
-
-/* RaiseArgTupleInvalid.proto */
-static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact,
- Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found);
-
-/* RaiseDoubleKeywords.proto */
-static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name);
-
-/* ParseKeywords.proto */
-static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\
- PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\
- const char* function_name);
-
-/* None.proto */
-static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname);
-
-/* ArgTypeTest.proto */
-#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\
- ((likely((Py_TYPE(obj) == type) | (none_allowed && (obj == Py_None)))) ? 1 :\
- __Pyx__ArgTypeTest(obj, type, name, exact))
-static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact);
-
-/* PyObjectCall.proto */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw);
-#else
-#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw)
-#endif
-
-/* PyThreadStateGet.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate;
-#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current;
-#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type
-#else
-#define __Pyx_PyThreadState_declare
-#define __Pyx_PyThreadState_assign
-#define __Pyx_PyErr_Occurred() PyErr_Occurred()
-#endif
-
-/* PyErrFetchRestore.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL)
-#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb)
-#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb)
-#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb)
-#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
-static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#if CYTHON_COMPILING_IN_CPYTHON
-#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL))
-#else
-#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
-#endif
-#else
-#define __Pyx_PyErr_Clear() PyErr_Clear()
-#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
-#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb)
-#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb)
-#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb)
-#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb)
-#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb)
-#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb)
-#endif
-
-/* RaiseException.proto */
-static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause);
-
-/* PyCFunctionFastCall.proto */
-#if CYTHON_FAST_PYCCALL
-static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs);
-#else
-#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL)
-#endif
-
-/* PyFunctionFastCall.proto */
-#if CYTHON_FAST_PYCALL
-#define __Pyx_PyFunction_FastCall(func, args, nargs)\
- __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL)
-#if 1 || PY_VERSION_HEX < 0x030600B1
-static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs);
-#else
-#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs)
-#endif
-#define __Pyx_BUILD_ASSERT_EXPR(cond)\
- (sizeof(char [1 - 2*!(cond)]) - 1)
-#ifndef Py_MEMBER_SIZE
-#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member)
-#endif
- static size_t __pyx_pyframe_localsplus_offset = 0;
- #include "frameobject.h"
- #define __Pxy_PyFrame_Initialize_Offsets()\
- ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\
- (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus)))
- #define __Pyx_PyFrame_GetLocalsplus(frame)\
- (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset))
-#endif
-
-/* PyObjectCall2Args.proto */
-static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2);
-
-/* PyObjectCallMethO.proto */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg);
-#endif
-
-/* PyObjectCallOneArg.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg);
-
-/* IncludeStringH.proto */
-#include <string.h>
-
-/* BytesEquals.proto */
-static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals);
-
-/* UnicodeEquals.proto */
-static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals);
-
-/* StrEquals.proto */
-#if PY_MAJOR_VERSION >= 3
-#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals
-#else
-#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals
-#endif
-
-/* None.proto */
-static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t);
-
-/* UnaryNegOverflows.proto */
-#define UNARY_NEG_WOULD_OVERFLOW(x)\
- (((x) < 0) & ((unsigned long)(x) == 0-(unsigned long)(x)))
-
-static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
-static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *); /*proto*/
-/* GetAttr.proto */
-static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *);
-
-/* GetItemInt.proto */
-#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
- (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
- __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\
- (is_list ? (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\
- __Pyx_GetItemInt_Generic(o, to_py_func(i))))
-#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
- (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
- __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\
- (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL))
-static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i,
- int wraparound, int boundscheck);
-#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
- (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
- __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\
- (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL))
-static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i,
- int wraparound, int boundscheck);
-static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j);
-static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i,
- int is_list, int wraparound, int boundscheck);
-
-/* ObjectGetItem.proto */
-#if CYTHON_USE_TYPE_SLOTS
-static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key);
-#else
-#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key)
-#endif
-
-/* decode_c_string_utf16.proto */
-static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s, Py_ssize_t size, const char *errors) {
- int byteorder = 0;
- return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
-}
-static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char *s, Py_ssize_t size, const char *errors) {
- int byteorder = -1;
- return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
-}
-static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char *s, Py_ssize_t size, const char *errors) {
- int byteorder = 1;
- return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
-}
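-/* The byteorder argument follows CPython's PyUnicode_DecodeUTF16
- * convention: 0 honours a BOM (defaulting to native order), -1 forces
- * little-endian, and 1 forces big-endian. */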
-
-/* decode_c_string.proto */
-static CYTHON_INLINE PyObject* __Pyx_decode_c_string(
- const char* cstring, Py_ssize_t start, Py_ssize_t stop,
- const char* encoding, const char* errors,
- PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors));
-
-/* PyErrExceptionMatches.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err)
-static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err);
-#else
-#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err)
-#endif
-
-/* GetAttr3.proto */
-static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *, PyObject *, PyObject *);
-
-/* PyDictVersioning.proto */
-#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS
-#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1)
-#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
-#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\
- (version_var) = __PYX_GET_DICT_VERSION(dict);\
- (cache_var) = (value);
-#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\
- static PY_UINT64_T __pyx_dict_version = 0;\
- static PyObject *__pyx_dict_cached_value = NULL;\
- if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\
- (VAR) = __pyx_dict_cached_value;\
- } else {\
- (VAR) = __pyx_dict_cached_value = (LOOKUP);\
- __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\
- }\
-}
-static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj);
-static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj);
-static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version);
-#else
-#define __PYX_GET_DICT_VERSION(dict) (0)
-#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)
-#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP);
-#endif
-
-/* GetModuleGlobalName.proto */
-#if CYTHON_USE_DICT_VERSIONS
-#define __Pyx_GetModuleGlobalName(var, name) {\
- static PY_UINT64_T __pyx_dict_version = 0;\
- static PyObject *__pyx_dict_cached_value = NULL;\
- (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\
- (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\
- __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\
-}
-#define __Pyx_GetModuleGlobalNameUncached(var, name) {\
- PY_UINT64_T __pyx_dict_version;\
- PyObject *__pyx_dict_cached_value;\
- (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\
-}
-static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value);
-#else
-#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name)
-#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name)
-static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name);
-#endif
-
-/* RaiseTooManyValuesToUnpack.proto */
-static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected);
-
-/* RaiseNeedMoreValuesToUnpack.proto */
-static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index);
-
-/* RaiseNoneIterError.proto */
-static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void);
-
-/* ExtTypeTest.proto */
-static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type);
-
-/* GetTopmostException.proto */
-#if CYTHON_USE_EXC_INFO_STACK
-static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate);
-#endif
-
-/* SaveResetException.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
-#else
-#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb)
-#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb)
-#endif
-
-/* GetException.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb)
-static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#else
-static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb);
-#endif
-
-/* SwapException.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#else
-static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb);
-#endif
-
-/* Import.proto */
-static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level);
-
-/* FastTypeChecks.proto */
-#if CYTHON_COMPILING_IN_CPYTHON
-#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type)
-static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b);
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type);
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2);
-#else
-#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)
-#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type)
-#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2))
-#endif
-#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)
-
-static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
-/* ListCompAppend.proto */
-#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS
-static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) {
- PyListObject* L = (PyListObject*) list;
- Py_ssize_t len = Py_SIZE(list);
- if (likely(L->allocated > len)) {
- Py_INCREF(x);
- PyList_SET_ITEM(list, len, x);
- __Pyx_SET_SIZE(list, len + 1);
- return 0;
- }
- return PyList_Append(list, x);
-}
-#else
-#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x)
-#endif
-
-/* PyIntBinop.proto */
-#if !CYTHON_COMPILING_IN_PYPY
-static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check);
-#else
-#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\
- (inplace ? PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2))
-#endif
-
-/* ListExtend.proto */
-static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) {
-#if CYTHON_COMPILING_IN_CPYTHON
- PyObject* none = _PyList_Extend((PyListObject*)L, v);
- if (unlikely(!none))
- return -1;
- Py_DECREF(none);
- return 0;
-#else
- return PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v);
-#endif
-}
-
-/* ListAppend.proto */
-#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS
-static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) {
- PyListObject* L = (PyListObject*) list;
- Py_ssize_t len = Py_SIZE(list);
- if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) {
- Py_INCREF(x);
- PyList_SET_ITEM(list, len, x);
- __Pyx_SET_SIZE(list, len + 1);
- return 0;
- }
- return PyList_Append(list, x);
-}
-#else
-#define __Pyx_PyList_Append(L,x) PyList_Append(L,x)
-#endif
-
-/* None.proto */
-static CYTHON_INLINE long __Pyx_div_long(long, long);
-
-/* ImportFrom.proto */
-static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name);
-
-/* HasAttr.proto */
-static CYTHON_INLINE int __Pyx_HasAttr(PyObject *, PyObject *);
-
-/* PyObject_GenericGetAttrNoDict.proto */
-#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name);
-#else
-#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr
-#endif
-
-/* PyObject_GenericGetAttr.proto */
-#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
-static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name);
-#else
-#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr
-#endif
-
-/* SetVTable.proto */
-static int __Pyx_SetVtable(PyObject *dict, void *vtable);
-
-/* PyObjectGetAttrStrNoError.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name);
-
-/* SetupReduce.proto */
-static int __Pyx_setup_reduce(PyObject* type_obj);
-
-/* CLineInTraceback.proto */
-#ifdef CYTHON_CLINE_IN_TRACEBACK
-#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0)
-#else
-static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line);
-#endif
-
-/* CodeObjectCache.proto */
-typedef struct {
- PyCodeObject* code_object;
- int code_line;
-} __Pyx_CodeObjectCacheEntry;
-struct __Pyx_CodeObjectCache {
- int count;
- int max_count;
- __Pyx_CodeObjectCacheEntry* entries;
-};
-static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL};
-static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line);
-static PyCodeObject *__pyx_find_code_object(int code_line);
-static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object);
-
-/* AddTraceback.proto */
-static void __Pyx_AddTraceback(const char *funcname, int c_line,
- int py_line, const char *filename);
-
-#if PY_MAJOR_VERSION < 3
- static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags);
- static void __Pyx_ReleaseBuffer(Py_buffer *view);
-#else
- #define __Pyx_GetBuffer PyObject_GetBuffer
- #define __Pyx_ReleaseBuffer PyBuffer_Release
-#endif
-
-
-/* BufferStructDeclare.proto */
-typedef struct {
- Py_ssize_t shape, strides, suboffsets;
-} __Pyx_Buf_DimInfo;
-typedef struct {
- size_t refcount;
- Py_buffer pybuffer;
-} __Pyx_Buffer;
-typedef struct {
- __Pyx_Buffer *rcbuffer;
- char *data;
- __Pyx_Buf_DimInfo diminfo[8];
-} __Pyx_LocalBuf_ND;
-
-/* MemviewSliceIsContig.proto */
-static int __pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim);
-
-/* OverlappingSlices.proto */
-static int __pyx_slices_overlap(__Pyx_memviewslice *slice1,
- __Pyx_memviewslice *slice2,
- int ndim, size_t itemsize);
-
-/* Capsule.proto */
-static CYTHON_INLINE PyObject *__pyx_capsule_create(void *p, const char *sig);
-
-/* IsLittleEndian.proto */
-static CYTHON_INLINE int __Pyx_Is_Little_Endian(void);
-
-/* BufferFormatCheck.proto */
-static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts);
-static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,
- __Pyx_BufFmt_StackElem* stack,
- __Pyx_TypeInfo* type);
-
-/* TypeInfoCompare.proto */
-static int __pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b);
-
-/* MemviewSliceValidateAndInit.proto */
-static int __Pyx_ValidateAndInit_memviewslice(
- int *axes_specs,
- int c_or_f_flag,
- int buf_flags,
- int ndim,
- __Pyx_TypeInfo *dtype,
- __Pyx_BufFmt_StackElem stack[],
- __Pyx_memviewslice *memviewslice,
- PyObject *original_obj);
-
-/* ObjectToMemviewSlice.proto */
-static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *, int writable_flag);
-
-/* ObjectToMemviewSlice.proto */
-static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *, int writable_flag);
-
-/* ObjectToMemviewSlice.proto */
-static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *, int writable_flag);
-
-/* CIntToPy.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value);
-
-/* CIntToPy.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value);
-
-/* MemviewSliceCopyTemplate.proto */
-static __Pyx_memviewslice
-__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs,
- const char *mode, int ndim,
- size_t sizeof_dtype, int contig_flag,
- int dtype_is_object);
-
-/* CIntFromPy.proto */
-static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *);
-
-/* CIntFromPy.proto */
-static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *);
-
-/* CIntFromPy.proto */
-static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *);
-
-/* CheckBinaryVersion.proto */
-static int __Pyx_check_binary_version(void);
-
-/* InitStrings.proto */
-static int __Pyx_InitStrings(__Pyx_StringTabEntry *t);
-
-static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self); /* proto*/
-static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto*/
-static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj); /* proto*/
-static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src); /* proto*/
-static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value); /* proto*/
-static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto*/
-static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/
-static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/
-static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/
-static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/
-
-/* Module declarations from 'cython.view' */
-
-/* Module declarations from 'cython' */
-
-/* Module declarations from 'monotonic_align.core' */
-static PyTypeObject *__pyx_array_type = 0;
-static PyTypeObject *__pyx_MemviewEnum_type = 0;
-static PyTypeObject *__pyx_memoryview_type = 0;
-static PyTypeObject *__pyx_memoryviewslice_type = 0;
-static PyObject *generic = 0;
-static PyObject *strided = 0;
-static PyObject *indirect = 0;
-static PyObject *contiguous = 0;
-static PyObject *indirect_contiguous = 0;
-static int __pyx_memoryview_thread_locks_used;
-static PyThread_type_lock __pyx_memoryview_thread_locks[8];
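-/* Slice acquisition counting uses the atomics configured at the top of
- * this file when available; otherwise it falls back to the small pool of
- * pre-allocated PyThread locks declared above. */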
-static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice, __Pyx_memviewslice, int, int, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args); /*proto*/
-static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, int __pyx_skip_dispatch); /*proto*/
-static struct __pyx_array_obj *__pyx_array_new(PyObject *, Py_ssize_t, char *, char *, char *); /*proto*/
-static void *__pyx_align_pointer(void *, size_t); /*proto*/
-static PyObject *__pyx_memoryview_new(PyObject *, int, int, __Pyx_TypeInfo *); /*proto*/
-static CYTHON_INLINE int __pyx_memoryview_check(PyObject *); /*proto*/
-static PyObject *_unellipsify(PyObject *, int); /*proto*/
-static PyObject *assert_direct_dimensions(Py_ssize_t *, int); /*proto*/
-static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *, PyObject *); /*proto*/
-static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int, int); /*proto*/
-static char *__pyx_pybuffer_index(Py_buffer *, char *, Py_ssize_t, Py_ssize_t); /*proto*/
-static int __pyx_memslice_transpose(__Pyx_memviewslice *); /*proto*/
-static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice, int, PyObject *(*)(char *), int (*)(char *, PyObject *), int); /*proto*/
-static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/
-static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/
-static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *); /*proto*/
-static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/
-static Py_ssize_t abs_py_ssize_t(Py_ssize_t); /*proto*/
-static char __pyx_get_best_slice_order(__Pyx_memviewslice *, int); /*proto*/
-static void _copy_strided_to_strided(char *, Py_ssize_t *, char *, Py_ssize_t *, Py_ssize_t *, Py_ssize_t *, int, size_t); /*proto*/
-static void copy_strided_to_strided(__Pyx_memviewslice *, __Pyx_memviewslice *, int, size_t); /*proto*/
-static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *, int); /*proto*/
-static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *, Py_ssize_t *, Py_ssize_t, int, char); /*proto*/
-static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *, __Pyx_memviewslice *, char, int); /*proto*/
-static int __pyx_memoryview_err_extents(int, Py_ssize_t, Py_ssize_t); /*proto*/
-static int __pyx_memoryview_err_dim(PyObject *, char *, int); /*proto*/
-static int __pyx_memoryview_err(PyObject *, char *); /*proto*/
-static int __pyx_memoryview_copy_contents(__Pyx_memviewslice, __Pyx_memviewslice, int, int, int); /*proto*/
-static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *, int, int); /*proto*/
-static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *, int, int, int); /*proto*/
-static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/
-static void __pyx_memoryview_refcount_objects_in_slice(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/
-static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *, int, size_t, void *, int); /*proto*/
-static void __pyx_memoryview__slice_assign_scalar(char *, Py_ssize_t *, Py_ssize_t *, int, size_t, void *); /*proto*/
-static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *, PyObject *); /*proto*/
-static __Pyx_TypeInfo __Pyx_TypeInfo_int = { "int", NULL, sizeof(int), { 0 }, 0, IS_UNSIGNED(int) ? 'U' : 'I', IS_UNSIGNED(int), 0 };
-static __Pyx_TypeInfo __Pyx_TypeInfo_float = { "float", NULL, sizeof(float), { 0 }, 0, 'R', 0, 0 };
-#define __Pyx_MODULE_NAME "monotonic_align.core"
-extern int __pyx_module_is_main_monotonic_align__core;
-int __pyx_module_is_main_monotonic_align__core = 0;
-
-/* Implementation of 'monotonic_align.core' */
-static PyObject *__pyx_builtin_range;
-static PyObject *__pyx_builtin_ValueError;
-static PyObject *__pyx_builtin_MemoryError;
-static PyObject *__pyx_builtin_enumerate;
-static PyObject *__pyx_builtin_TypeError;
-static PyObject *__pyx_builtin_Ellipsis;
-static PyObject *__pyx_builtin_id;
-static PyObject *__pyx_builtin_IndexError;
-static const char __pyx_k_O[] = "O";
-static const char __pyx_k_c[] = "c";
-static const char __pyx_k_id[] = "id";
-static const char __pyx_k_new[] = "__new__";
-static const char __pyx_k_obj[] = "obj";
-static const char __pyx_k_base[] = "base";
-static const char __pyx_k_dict[] = "__dict__";
-static const char __pyx_k_main[] = "__main__";
-static const char __pyx_k_mode[] = "mode";
-static const char __pyx_k_name[] = "name";
-static const char __pyx_k_ndim[] = "ndim";
-static const char __pyx_k_pack[] = "pack";
-static const char __pyx_k_size[] = "size";
-static const char __pyx_k_step[] = "step";
-static const char __pyx_k_stop[] = "stop";
-static const char __pyx_k_t_xs[] = "t_xs";
-static const char __pyx_k_t_ys[] = "t_ys";
-static const char __pyx_k_test[] = "__test__";
-static const char __pyx_k_ASCII[] = "ASCII";
-static const char __pyx_k_class[] = "__class__";
-static const char __pyx_k_error[] = "error";
-static const char __pyx_k_flags[] = "flags";
-static const char __pyx_k_paths[] = "paths";
-static const char __pyx_k_range[] = "range";
-static const char __pyx_k_shape[] = "shape";
-static const char __pyx_k_start[] = "start";
-static const char __pyx_k_encode[] = "encode";
-static const char __pyx_k_format[] = "format";
-static const char __pyx_k_import[] = "__import__";
-static const char __pyx_k_name_2[] = "__name__";
-static const char __pyx_k_pickle[] = "pickle";
-static const char __pyx_k_reduce[] = "__reduce__";
-static const char __pyx_k_struct[] = "struct";
-static const char __pyx_k_unpack[] = "unpack";
-static const char __pyx_k_update[] = "update";
-static const char __pyx_k_values[] = "values";
-static const char __pyx_k_fortran[] = "fortran";
-static const char __pyx_k_memview[] = "memview";
-static const char __pyx_k_Ellipsis[] = "Ellipsis";
-static const char __pyx_k_getstate[] = "__getstate__";
-static const char __pyx_k_itemsize[] = "itemsize";
-static const char __pyx_k_pyx_type[] = "__pyx_type";
-static const char __pyx_k_setstate[] = "__setstate__";
-static const char __pyx_k_TypeError[] = "TypeError";
-static const char __pyx_k_enumerate[] = "enumerate";
-static const char __pyx_k_pyx_state[] = "__pyx_state";
-static const char __pyx_k_reduce_ex[] = "__reduce_ex__";
-static const char __pyx_k_IndexError[] = "IndexError";
-static const char __pyx_k_ValueError[] = "ValueError";
-static const char __pyx_k_pyx_result[] = "__pyx_result";
-static const char __pyx_k_pyx_vtable[] = "__pyx_vtable__";
-static const char __pyx_k_MemoryError[] = "MemoryError";
-static const char __pyx_k_PickleError[] = "PickleError";
-static const char __pyx_k_pyx_checksum[] = "__pyx_checksum";
-static const char __pyx_k_stringsource[] = "stringsource";
-static const char __pyx_k_pyx_getbuffer[] = "__pyx_getbuffer";
-static const char __pyx_k_reduce_cython[] = "__reduce_cython__";
-static const char __pyx_k_View_MemoryView[] = "View.MemoryView";
-static const char __pyx_k_allocate_buffer[] = "allocate_buffer";
-static const char __pyx_k_dtype_is_object[] = "dtype_is_object";
-static const char __pyx_k_pyx_PickleError[] = "__pyx_PickleError";
-static const char __pyx_k_setstate_cython[] = "__setstate_cython__";
-static const char __pyx_k_pyx_unpickle_Enum[] = "__pyx_unpickle_Enum";
-static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback";
-static const char __pyx_k_strided_and_direct[] = "<strided and direct>";
-static const char __pyx_k_strided_and_indirect[] = "<strided and indirect>";
-static const char __pyx_k_contiguous_and_direct[] = "<contiguous and direct>";
-static const char __pyx_k_MemoryView_of_r_object[] = "<MemoryView of %r object>";
-static const char __pyx_k_MemoryView_of_r_at_0x_x[] = "<MemoryView of %r at 0x%x>";
-static const char __pyx_k_contiguous_and_indirect[] = "<contiguous and indirect>";
-static const char __pyx_k_Cannot_index_with_type_s[] = "Cannot index with type '%s'";
-static const char __pyx_k_Invalid_shape_in_axis_d_d[] = "Invalid shape in axis %d: %d.";
-static const char __pyx_k_itemsize_0_for_cython_array[] = "itemsize <= 0 for cython.array";
-static const char __pyx_k_unable_to_allocate_array_data[] = "unable to allocate array data.";
-static const char __pyx_k_strided_and_direct_or_indirect[] = "<strided and direct or indirect>";
-static const char __pyx_k_Buffer_view_does_not_expose_stri[] = "Buffer view does not expose strides";
-static const char __pyx_k_Can_only_create_a_buffer_that_is[] = "Can only create a buffer that is contiguous in memory.";
-static const char __pyx_k_Cannot_assign_to_read_only_memor[] = "Cannot assign to read-only memoryview";
-static const char __pyx_k_Cannot_create_writable_memory_vi[] = "Cannot create writable memory view from read-only memoryview";
-static const char __pyx_k_Empty_shape_tuple_for_cython_arr[] = "Empty shape tuple for cython.array";
-static const char __pyx_k_Incompatible_checksums_s_vs_0xb0[] = "Incompatible checksums (%s vs 0xb068931 = (name))";
-static const char __pyx_k_Indirect_dimensions_not_supporte[] = "Indirect dimensions not supported";
-static const char __pyx_k_Invalid_mode_expected_c_or_fortr[] = "Invalid mode, expected 'c' or 'fortran', got %s";
-static const char __pyx_k_Out_of_bounds_on_buffer_access_a[] = "Out of bounds on buffer access (axis %d)";
-static const char __pyx_k_Unable_to_convert_item_to_object[] = "Unable to convert item to object";
-static const char __pyx_k_got_differing_extents_in_dimensi[] = "got differing extents in dimension %d (got %d and %d)";
-static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__";
-static const char __pyx_k_unable_to_allocate_shape_and_str[] = "unable to allocate shape and strides.";
-static PyObject *__pyx_n_s_ASCII;
-static PyObject *__pyx_kp_s_Buffer_view_does_not_expose_stri;
-static PyObject *__pyx_kp_s_Can_only_create_a_buffer_that_is;
-static PyObject *__pyx_kp_s_Cannot_assign_to_read_only_memor;
-static PyObject *__pyx_kp_s_Cannot_create_writable_memory_vi;
-static PyObject *__pyx_kp_s_Cannot_index_with_type_s;
-static PyObject *__pyx_n_s_Ellipsis;
-static PyObject *__pyx_kp_s_Empty_shape_tuple_for_cython_arr;
-static PyObject *__pyx_kp_s_Incompatible_checksums_s_vs_0xb0;
-static PyObject *__pyx_n_s_IndexError;
-static PyObject *__pyx_kp_s_Indirect_dimensions_not_supporte;
-static PyObject *__pyx_kp_s_Invalid_mode_expected_c_or_fortr;
-static PyObject *__pyx_kp_s_Invalid_shape_in_axis_d_d;
-static PyObject *__pyx_n_s_MemoryError;
-static PyObject *__pyx_kp_s_MemoryView_of_r_at_0x_x;
-static PyObject *__pyx_kp_s_MemoryView_of_r_object;
-static PyObject *__pyx_n_b_O;
-static PyObject *__pyx_kp_s_Out_of_bounds_on_buffer_access_a;
-static PyObject *__pyx_n_s_PickleError;
-static PyObject *__pyx_n_s_TypeError;
-static PyObject *__pyx_kp_s_Unable_to_convert_item_to_object;
-static PyObject *__pyx_n_s_ValueError;
-static PyObject *__pyx_n_s_View_MemoryView;
-static PyObject *__pyx_n_s_allocate_buffer;
-static PyObject *__pyx_n_s_base;
-static PyObject *__pyx_n_s_c;
-static PyObject *__pyx_n_u_c;
-static PyObject *__pyx_n_s_class;
-static PyObject *__pyx_n_s_cline_in_traceback;
-static PyObject *__pyx_kp_s_contiguous_and_direct;
-static PyObject *__pyx_kp_s_contiguous_and_indirect;
-static PyObject *__pyx_n_s_dict;
-static PyObject *__pyx_n_s_dtype_is_object;
-static PyObject *__pyx_n_s_encode;
-static PyObject *__pyx_n_s_enumerate;
-static PyObject *__pyx_n_s_error;
-static PyObject *__pyx_n_s_flags;
-static PyObject *__pyx_n_s_format;
-static PyObject *__pyx_n_s_fortran;
-static PyObject *__pyx_n_u_fortran;
-static PyObject *__pyx_n_s_getstate;
-static PyObject *__pyx_kp_s_got_differing_extents_in_dimensi;
-static PyObject *__pyx_n_s_id;
-static PyObject *__pyx_n_s_import;
-static PyObject *__pyx_n_s_itemsize;
-static PyObject *__pyx_kp_s_itemsize_0_for_cython_array;
-static PyObject *__pyx_n_s_main;
-static PyObject *__pyx_n_s_memview;
-static PyObject *__pyx_n_s_mode;
-static PyObject *__pyx_n_s_name;
-static PyObject *__pyx_n_s_name_2;
-static PyObject *__pyx_n_s_ndim;
-static PyObject *__pyx_n_s_new;
-static PyObject *__pyx_kp_s_no_default___reduce___due_to_non;
-static PyObject *__pyx_n_s_obj;
-static PyObject *__pyx_n_s_pack;
-static PyObject *__pyx_n_s_paths;
-static PyObject *__pyx_n_s_pickle;
-static PyObject *__pyx_n_s_pyx_PickleError;
-static PyObject *__pyx_n_s_pyx_checksum;
-static PyObject *__pyx_n_s_pyx_getbuffer;
-static PyObject *__pyx_n_s_pyx_result;
-static PyObject *__pyx_n_s_pyx_state;
-static PyObject *__pyx_n_s_pyx_type;
-static PyObject *__pyx_n_s_pyx_unpickle_Enum;
-static PyObject *__pyx_n_s_pyx_vtable;
-static PyObject *__pyx_n_s_range;
-static PyObject *__pyx_n_s_reduce;
-static PyObject *__pyx_n_s_reduce_cython;
-static PyObject *__pyx_n_s_reduce_ex;
-static PyObject *__pyx_n_s_setstate;
-static PyObject *__pyx_n_s_setstate_cython;
-static PyObject *__pyx_n_s_shape;
-static PyObject *__pyx_n_s_size;
-static PyObject *__pyx_n_s_start;
-static PyObject *__pyx_n_s_step;
-static PyObject *__pyx_n_s_stop;
-static PyObject *__pyx_kp_s_strided_and_direct;
-static PyObject *__pyx_kp_s_strided_and_direct_or_indirect;
-static PyObject *__pyx_kp_s_strided_and_indirect;
-static PyObject *__pyx_kp_s_stringsource;
-static PyObject *__pyx_n_s_struct;
-static PyObject *__pyx_n_s_t_xs;
-static PyObject *__pyx_n_s_t_ys;
-static PyObject *__pyx_n_s_test;
-static PyObject *__pyx_kp_s_unable_to_allocate_array_data;
-static PyObject *__pyx_kp_s_unable_to_allocate_shape_and_str;
-static PyObject *__pyx_n_s_unpack;
-static PyObject *__pyx_n_s_update;
-static PyObject *__pyx_n_s_values;
-static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs); /* proto */
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer); /* proto */
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */
-static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self); /* proto */
-static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr); /* proto */
-static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item); /* proto */
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /* proto */
-static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */
-static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name); /* proto */
-static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state); /* proto */
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object); /* proto */
-static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto */
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto */
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */
-static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state); /* proto */
-static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
-static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
-static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
-static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
-static PyObject *__pyx_int_0;
-static PyObject *__pyx_int_1;
-static PyObject *__pyx_int_184977713;
-static PyObject *__pyx_int_neg_1;
-static float __pyx_k_;
-static PyObject *__pyx_tuple__2;
-static PyObject *__pyx_tuple__3;
-static PyObject *__pyx_tuple__4;
-static PyObject *__pyx_tuple__5;
-static PyObject *__pyx_tuple__6;
-static PyObject *__pyx_tuple__7;
-static PyObject *__pyx_tuple__8;
-static PyObject *__pyx_tuple__9;
-static PyObject *__pyx_slice__16;
-static PyObject *__pyx_tuple__10;
-static PyObject *__pyx_tuple__11;
-static PyObject *__pyx_tuple__12;
-static PyObject *__pyx_tuple__13;
-static PyObject *__pyx_tuple__14;
-static PyObject *__pyx_tuple__15;
-static PyObject *__pyx_tuple__17;
-static PyObject *__pyx_tuple__18;
-static PyObject *__pyx_tuple__19;
-static PyObject *__pyx_tuple__20;
-static PyObject *__pyx_tuple__21;
-static PyObject *__pyx_tuple__22;
-static PyObject *__pyx_tuple__23;
-static PyObject *__pyx_tuple__24;
-static PyObject *__pyx_tuple__25;
-static PyObject *__pyx_codeobj__26;
-/* Late includes */
-
-/* "monotonic_align/core.pyx":7
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<<
- * cdef int x
- * cdef int y
- */
-
-static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice __pyx_v_path, __Pyx_memviewslice __pyx_v_value, int __pyx_v_t_y, int __pyx_v_t_x, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args) {
- float __pyx_v_max_neg_val = __pyx_k_;
- int __pyx_v_x;
- int __pyx_v_y;
- float __pyx_v_v_prev;
- float __pyx_v_v_cur;
- int __pyx_v_index;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- long __pyx_t_4;
- int __pyx_t_5;
- long __pyx_t_6;
- long __pyx_t_7;
- int __pyx_t_8;
- Py_ssize_t __pyx_t_9;
- Py_ssize_t __pyx_t_10;
- float __pyx_t_11;
- float __pyx_t_12;
- float __pyx_t_13;
- int __pyx_t_14;
- Py_ssize_t __pyx_t_15;
- Py_ssize_t __pyx_t_16;
- if (__pyx_optional_args) {
- if (__pyx_optional_args->__pyx_n > 0) {
- __pyx_v_max_neg_val = __pyx_optional_args->max_neg_val;
- }
- }
-
- /* "monotonic_align/core.pyx":13
- * cdef float v_cur
- * cdef float tmp
- * cdef int index = t_x - 1 # <<<<<<<<<<<<<<
- *
- * for y in range(t_y):
- */
- __pyx_v_index = (__pyx_v_t_x - 1);
-
- /* "monotonic_align/core.pyx":15
- * cdef int index = t_x - 1
- *
- * for y in range(t_y): # <<<<<<<<<<<<<<
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- * if x == y:
- */
- __pyx_t_1 = __pyx_v_t_y;
- __pyx_t_2 = __pyx_t_1;
- for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) {
- __pyx_v_y = __pyx_t_3;
-
- /* "monotonic_align/core.pyx":16
- *
- * for y in range(t_y):
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): # <<<<<<<<<<<<<<
- * if x == y:
- * v_cur = max_neg_val
- */
- __pyx_t_4 = (__pyx_v_y + 1);
- __pyx_t_5 = __pyx_v_t_x;
- if (((__pyx_t_4 < __pyx_t_5) != 0)) {
- __pyx_t_6 = __pyx_t_4;
- } else {
- __pyx_t_6 = __pyx_t_5;
- }
- __pyx_t_4 = __pyx_t_6;
- __pyx_t_5 = ((__pyx_v_t_x + __pyx_v_y) - __pyx_v_t_y);
- __pyx_t_6 = 0;
- if (((__pyx_t_5 > __pyx_t_6) != 0)) {
- __pyx_t_7 = __pyx_t_5;
- } else {
- __pyx_t_7 = __pyx_t_6;
- }
- __pyx_t_6 = __pyx_t_4;
- for (__pyx_t_5 = __pyx_t_7; __pyx_t_5 < __pyx_t_6; __pyx_t_5+=1) {
- __pyx_v_x = __pyx_t_5;
-
- /* "monotonic_align/core.pyx":17
- * for y in range(t_y):
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- * if x == y: # <<<<<<<<<<<<<<
- * v_cur = max_neg_val
- * else:
- */
- __pyx_t_8 = ((__pyx_v_x == __pyx_v_y) != 0);
- if (__pyx_t_8) {
-
- /* "monotonic_align/core.pyx":18
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- * if x == y:
- * v_cur = max_neg_val # <<<<<<<<<<<<<<
- * else:
- * v_cur = value[y-1, x]
- */
- __pyx_v_v_cur = __pyx_v_max_neg_val;
-
- /* "monotonic_align/core.pyx":17
- * for y in range(t_y):
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- * if x == y: # <<<<<<<<<<<<<<
- * v_cur = max_neg_val
- * else:
- */
- goto __pyx_L7;
- }
-
- /* "monotonic_align/core.pyx":20
- * v_cur = max_neg_val
- * else:
- * v_cur = value[y-1, x] # <<<<<<<<<<<<<<
- * if x == 0:
- * if y == 0:
- */
- /*else*/ {
- __pyx_t_9 = (__pyx_v_y - 1);
- __pyx_t_10 = __pyx_v_x;
- __pyx_v_v_cur = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) )));
- }
- __pyx_L7:;
-
- /* "monotonic_align/core.pyx":21
- * else:
- * v_cur = value[y-1, x]
- * if x == 0: # <<<<<<<<<<<<<<
- * if y == 0:
- * v_prev = 0.
- */
- __pyx_t_8 = ((__pyx_v_x == 0) != 0);
- if (__pyx_t_8) {
-
- /* "monotonic_align/core.pyx":22
- * v_cur = value[y-1, x]
- * if x == 0:
- * if y == 0: # <<<<<<<<<<<<<<
- * v_prev = 0.
- * else:
- */
- __pyx_t_8 = ((__pyx_v_y == 0) != 0);
- if (__pyx_t_8) {
-
- /* "monotonic_align/core.pyx":23
- * if x == 0:
- * if y == 0:
- * v_prev = 0. # <<<<<<<<<<<<<<
- * else:
- * v_prev = max_neg_val
- */
- __pyx_v_v_prev = 0.;
-
- /* "monotonic_align/core.pyx":22
- * v_cur = value[y-1, x]
- * if x == 0:
- * if y == 0: # <<<<<<<<<<<<<<
- * v_prev = 0.
- * else:
- */
- goto __pyx_L9;
- }
-
- /* "monotonic_align/core.pyx":25
- * v_prev = 0.
- * else:
- * v_prev = max_neg_val # <<<<<<<<<<<<<<
- * else:
- * v_prev = value[y-1, x-1]
- */
- /*else*/ {
- __pyx_v_v_prev = __pyx_v_max_neg_val;
- }
- __pyx_L9:;
-
- /* "monotonic_align/core.pyx":21
- * else:
- * v_cur = value[y-1, x]
- * if x == 0: # <<<<<<<<<<<<<<
- * if y == 0:
- * v_prev = 0.
- */
- goto __pyx_L8;
- }
-
- /* "monotonic_align/core.pyx":27
- * v_prev = max_neg_val
- * else:
- * v_prev = value[y-1, x-1] # <<<<<<<<<<<<<<
- * value[y, x] += max(v_prev, v_cur)
- *
- */
- /*else*/ {
- __pyx_t_10 = (__pyx_v_y - 1);
- __pyx_t_9 = (__pyx_v_x - 1);
- __pyx_v_v_prev = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_10 * __pyx_v_value.strides[0]) )) + __pyx_t_9)) )));
- }
- __pyx_L8:;
-
- /* "monotonic_align/core.pyx":28
- * else:
- * v_prev = value[y-1, x-1]
- * value[y, x] += max(v_prev, v_cur) # <<<<<<<<<<<<<<
- *
- * for y in range(t_y - 1, -1, -1):
- */
- __pyx_t_11 = __pyx_v_v_cur;
- __pyx_t_12 = __pyx_v_v_prev;
- if (((__pyx_t_11 > __pyx_t_12) != 0)) {
- __pyx_t_13 = __pyx_t_11;
- } else {
- __pyx_t_13 = __pyx_t_12;
- }
- __pyx_t_9 = __pyx_v_y;
- __pyx_t_10 = __pyx_v_x;
- *((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) )) += __pyx_t_13;
- }
- }
-
- /* "monotonic_align/core.pyx":30
- * value[y, x] += max(v_prev, v_cur)
- *
- * for y in range(t_y - 1, -1, -1): # <<<<<<<<<<<<<<
- * path[y, index] = 1
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
- */
- for (__pyx_t_1 = (__pyx_v_t_y - 1); __pyx_t_1 > -1; __pyx_t_1-=1) {
- __pyx_v_y = __pyx_t_1;
-
- /* "monotonic_align/core.pyx":31
- *
- * for y in range(t_y - 1, -1, -1):
- * path[y, index] = 1 # <<<<<<<<<<<<<<
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
- * index = index - 1
- */
- __pyx_t_10 = __pyx_v_y;
- __pyx_t_9 = __pyx_v_index;
- *((int *) ( /* dim=1 */ ((char *) (((int *) ( /* dim=0 */ (__pyx_v_path.data + __pyx_t_10 * __pyx_v_path.strides[0]) )) + __pyx_t_9)) )) = 1;
-
- /* "monotonic_align/core.pyx":32
- * for y in range(t_y - 1, -1, -1):
- * path[y, index] = 1
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<<
- * index = index - 1
- *
- */
- __pyx_t_14 = ((__pyx_v_index != 0) != 0);
- if (__pyx_t_14) {
- } else {
- __pyx_t_8 = __pyx_t_14;
- goto __pyx_L13_bool_binop_done;
- }
- __pyx_t_14 = ((__pyx_v_index == __pyx_v_y) != 0);
- if (!__pyx_t_14) {
- } else {
- __pyx_t_8 = __pyx_t_14;
- goto __pyx_L13_bool_binop_done;
- }
- __pyx_t_9 = (__pyx_v_y - 1);
- __pyx_t_10 = __pyx_v_index;
- __pyx_t_15 = (__pyx_v_y - 1);
- __pyx_t_16 = (__pyx_v_index - 1);
- __pyx_t_14 = (((*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))) < (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_15 * __pyx_v_value.strides[0]) )) + __pyx_t_16)) )))) != 0);
- __pyx_t_8 = __pyx_t_14;
- __pyx_L13_bool_binop_done:;
- if (__pyx_t_8) {
-
- /* "monotonic_align/core.pyx":33
- * path[y, index] = 1
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
- * index = index - 1 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_index = (__pyx_v_index - 1);
-
- /* "monotonic_align/core.pyx":32
- * for y in range(t_y - 1, -1, -1):
- * path[y, index] = 1
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<<
- * index = index - 1
- *
- */
- }
- }
-
- /* "monotonic_align/core.pyx":7
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<<
- * cdef int x
- * cdef int y
- */
-
- /* function exit code */
-}
-
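-/* For readability: the Cython source of maximum_path_each, quoted piecewise in the
- * banner comments above, reads as a whole roughly as follows. This is a sketch
- * assembled from those quoted fragments; the `v_prev` declaration is inferred from
- * the generated variable names rather than quoted, and indentation is reconstructed.
- *
- *   cdef void maximum_path_each(int[:,::1] path, float[:,::1] value,
- *                               int t_y, int t_x, float max_neg_val=-1e9) nogil:
- *       cdef int x
- *       cdef int y
- *       cdef float v_prev   # inferred, not quoted above
- *       cdef float v_cur
- *       cdef float tmp
- *       cdef int index = t_x - 1
- *
- *       # forward pass: accumulate the best monotonic-path score into value[y, x]
- *       for y in range(t_y):
- *           for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- *               if x == y:
- *                   v_cur = max_neg_val
- *               else:
- *                   v_cur = value[y-1, x]
- *               if x == 0:
- *                   if y == 0:
- *                       v_prev = 0.
- *                   else:
- *                       v_prev = max_neg_val
- *               else:
- *                   v_prev = value[y-1, x-1]
- *               value[y, x] += max(v_prev, v_cur)
- *
- *       # backward pass: trace the argmax back from the last column, marking path
- *       for y in range(t_y - 1, -1, -1):
- *           path[y, index] = 1
- *           if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
- *               index = index - 1
- */
-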
-/* "monotonic_align/core.pyx":38
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<<
- * cdef int b = paths.shape[0]
- * cdef int i
- */
-
-static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs, CYTHON_UNUSED int __pyx_skip_dispatch) {
- CYTHON_UNUSED int __pyx_v_b;
- int __pyx_v_i;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- __Pyx_memviewslice __pyx_t_4 = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_memviewslice __pyx_t_5 = { 0, 0, { 0 }, { 0 }, { 0 } };
- Py_ssize_t __pyx_t_6;
- Py_ssize_t __pyx_t_7;
-
- /* "monotonic_align/core.pyx":39
- * @cython.wraparound(False)
- * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil:
- * cdef int b = paths.shape[0] # <<<<<<<<<<<<<<
- * cdef int i
- * for i in prange(b, nogil=True):
- */
- __pyx_v_b = (__pyx_v_paths.shape[0]);
-
- /* "monotonic_align/core.pyx":41
- * cdef int b = paths.shape[0]
- * cdef int i
- * for i in prange(b, nogil=True): # <<<<<<<<<<<<<<
- * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i])
- */
- {
- #ifdef WITH_THREAD
- PyThreadState *_save;
- Py_UNBLOCK_THREADS
- __Pyx_FastGIL_Remember();
- #endif
- /*try:*/ {
- __pyx_t_1 = __pyx_v_b;
- if ((1 == 0)) abort();
- {
- #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))))
- #undef likely
- #undef unlikely
- #define likely(x) (x)
- #define unlikely(x) (x)
- #endif
- __pyx_t_3 = (__pyx_t_1 - 0 + 1 - 1/abs(1)) / 1;
- if (__pyx_t_3 > 0)
- {
- #ifdef _OPENMP
- #pragma omp parallel private(__pyx_t_6, __pyx_t_7) firstprivate(__pyx_t_4, __pyx_t_5)
- #endif /* _OPENMP */
- {
- #ifdef _OPENMP
- #pragma omp for firstprivate(__pyx_v_i) lastprivate(__pyx_v_i)
- #endif /* _OPENMP */
- for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_3; __pyx_t_2++){
- {
- __pyx_v_i = (int)(0 + 1 * __pyx_t_2);
-
- /* "monotonic_align/core.pyx":42
- * cdef int i
- * for i in prange(b, nogil=True):
- * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) # <<<<<<<<<<<<<<
- */
- /* build the 2-D memoryview slice paths[i] in __pyx_t_4 */
- __pyx_t_4.data = __pyx_v_paths.data;
- __pyx_t_4.memview = __pyx_v_paths.memview;
- __PYX_INC_MEMVIEW(&__pyx_t_4, 0);
- {
-   Py_ssize_t __pyx_tmp_idx = __pyx_v_i;
-   Py_ssize_t __pyx_tmp_stride = __pyx_v_paths.strides[0];
-   __pyx_t_4.data += __pyx_tmp_idx * __pyx_tmp_stride;
- }
- __pyx_t_4.shape[0] = __pyx_v_paths.shape[1];
- __pyx_t_4.strides[0] = __pyx_v_paths.strides[1];
- __pyx_t_4.suboffsets[0] = -1;
- __pyx_t_4.shape[1] = __pyx_v_paths.shape[2];
- __pyx_t_4.strides[1] = __pyx_v_paths.strides[2];
- __pyx_t_4.suboffsets[1] = -1;
-
- /* build the 2-D memoryview slice values[i] in __pyx_t_5 */
- __pyx_t_5.data = __pyx_v_values.data;
- __pyx_t_5.memview = __pyx_v_values.memview;
- __PYX_INC_MEMVIEW(&__pyx_t_5, 0);
- {
-   Py_ssize_t __pyx_tmp_idx = __pyx_v_i;
-   Py_ssize_t __pyx_tmp_stride = __pyx_v_values.strides[0];
-   __pyx_t_5.data += __pyx_tmp_idx * __pyx_tmp_stride;
- }
- __pyx_t_5.shape[0] = __pyx_v_values.shape[1];
- __pyx_t_5.strides[0] = __pyx_v_values.strides[1];
- __pyx_t_5.suboffsets[0] = -1;
- __pyx_t_5.shape[1] = __pyx_v_values.shape[2];
- __pyx_t_5.strides[1] = __pyx_v_values.strides[2];
- __pyx_t_5.suboffsets[1] = -1;
-
- /* indices used to read t_ys[i] and t_xs[i] in the call below */
- __pyx_t_6 = __pyx_v_i;
- __pyx_t_7 = __pyx_v_i;
- __pyx_f_15monotonic_align_4core_maximum_path_each(__pyx_t_4, __pyx_t_5, (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_ys.data) + __pyx_t_6)) ))), (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_xs.data) + __pyx_t_7)) ))), NULL);
- __PYX_XDEC_MEMVIEW(&__pyx_t_4, 0);
- __pyx_t_4.memview = NULL;
- __pyx_t_4.data = NULL;
- __PYX_XDEC_MEMVIEW(&__pyx_t_5, 0);
- __pyx_t_5.memview = NULL;
- __pyx_t_5.data = NULL;
- }
- }
- }
- }
- }
- #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))))
- #undef likely
- #undef unlikely
- #define likely(x) __builtin_expect(!!(x), 1)
- #define unlikely(x) __builtin_expect(!!(x), 0)
- #endif
- }
-
- /* "monotonic_align/core.pyx":41
- * cdef int b = paths.shape[0]
- * cdef int i
- * for i in prange(b, nogil=True): # <<<<<<<<<<<<<<
- * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i])
- */
- /*finally:*/ {
- /*normal exit:*/{
- #ifdef WITH_THREAD
- __Pyx_FastGIL_Forget();
- Py_BLOCK_THREADS
- #endif
- goto __pyx_L5;
- }
- __pyx_L5:;
- }
- }
-
- /* "monotonic_align/core.pyx":38
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<<
- * cdef int b = paths.shape[0]
- * cdef int i
- */
-
- /* function exit code */
-}
-
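-/* The corresponding Cython source, assembled from the fragments quoted above; the
- * batch loop is a prange, which is what the OpenMP pragmas in the function above
- * implement:
- *
- *   cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values,
- *                             int[::1] t_ys, int[::1] t_xs) nogil:
- *       cdef int b = paths.shape[0]
- *       cdef int i
- *       for i in prange(b, nogil=True):
- *           maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i])
- */
-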
-/* Python wrapper */
-static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- __Pyx_memviewslice __pyx_v_paths = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_memviewslice __pyx_v_values = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_memviewslice __pyx_v_t_ys = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_memviewslice __pyx_v_t_xs = { 0, 0, { 0 }, { 0 }, { 0 } };
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("maximum_path_c (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_paths,&__pyx_n_s_values,&__pyx_n_s_t_ys,&__pyx_n_s_t_xs,0};
- PyObject* values[4] = {0,0,0,0};
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
- CYTHON_FALLTHROUGH;
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_paths)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_values)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 1); __PYX_ERR(0, 38, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 2:
- if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_ys)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 2); __PYX_ERR(0, 38, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 3:
- if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_xs)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 3); __PYX_ERR(0, 38, __pyx_L3_error)
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "maximum_path_c") < 0)) __PYX_ERR(0, 38, __pyx_L3_error)
- }
- } else if (PyTuple_GET_SIZE(__pyx_args) != 4) {
- goto __pyx_L5_argtuple_error;
- } else {
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
- }
- __pyx_v_paths = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(values[0], PyBUF_WRITABLE); if (unlikely(!__pyx_v_paths.memview)) __PYX_ERR(0, 38, __pyx_L3_error)
- __pyx_v_values = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(values[1], PyBUF_WRITABLE); if (unlikely(!__pyx_v_values.memview)) __PYX_ERR(0, 38, __pyx_L3_error)
- __pyx_v_t_ys = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[2], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_ys.memview)) __PYX_ERR(0, 38, __pyx_L3_error)
- __pyx_v_t_xs = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[3], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_xs.memview)) __PYX_ERR(0, 38, __pyx_L3_error)
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 38, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return NULL;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_pf_15monotonic_align_4core_maximum_path_c(__pyx_self, __pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("maximum_path_c", 0);
- __Pyx_XDECREF(__pyx_r);
- if (unlikely(!__pyx_v_paths.memview)) { __Pyx_RaiseUnboundLocalError("paths"); __PYX_ERR(0, 38, __pyx_L1_error) }
- if (unlikely(!__pyx_v_values.memview)) { __Pyx_RaiseUnboundLocalError("values"); __PYX_ERR(0, 38, __pyx_L1_error) }
- if (unlikely(!__pyx_v_t_ys.memview)) { __Pyx_RaiseUnboundLocalError("t_ys"); __PYX_ERR(0, 38, __pyx_L1_error) }
- if (unlikely(!__pyx_v_t_xs.memview)) { __Pyx_RaiseUnboundLocalError("t_xs"); __PYX_ERR(0, 38, __pyx_L1_error) }
- __pyx_t_1 = __Pyx_void_to_None(__pyx_f_15monotonic_align_4core_maximum_path_c(__pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs, 0)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 38, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __PYX_XDEC_MEMVIEW(&__pyx_v_paths, 1);
- __PYX_XDEC_MEMVIEW(&__pyx_v_values, 1);
- __PYX_XDEC_MEMVIEW(&__pyx_v_t_ys, 1);
- __PYX_XDEC_MEMVIEW(&__pyx_v_t_xs, 1);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
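-/* A minimal usage sketch of the wrapper from Python, for illustration only (an
- * assumption, not part of this file): the int[...] / float[...] memoryview
- * signatures imply writable C-contiguous buffers, i.e. np.int32 / np.float32 on
- * typical platforms where C int is 32 bits wide; all names and sizes below are
- * hypothetical.
- *
- *   import numpy as np
- *   from monotonic_align.core import maximum_path_c
- *
- *   b, t_y, t_x = 2, 16, 8
- *   paths  = np.zeros((b, t_y, t_x), dtype=np.int32)            # output 0/1 masks
- *   values = np.random.randn(b, t_y, t_x).astype(np.float32)    # per-cell scores
- *   t_ys   = np.full(b, t_y, dtype=np.int32)                    # valid rows per item
- *   t_xs   = np.full(b, t_x, dtype=np.int32)                    # valid cols per item
- *   maximum_path_c(paths, values, t_ys, t_xs)                   # fills paths in place
- */
-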
-/* "View.MemoryView":122
- * cdef bint dtype_is_object
- *
- * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<<
- * mode="c", bint allocate_buffer=True):
- *
- */
-
-/* Python wrapper */
-static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_shape = 0;
- Py_ssize_t __pyx_v_itemsize;
- PyObject *__pyx_v_format = 0;
- PyObject *__pyx_v_mode = 0;
- int __pyx_v_allocate_buffer;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_shape,&__pyx_n_s_itemsize,&__pyx_n_s_format,&__pyx_n_s_mode,&__pyx_n_s_allocate_buffer,0};
- PyObject* values[5] = {0,0,0,0,0};
- values[3] = ((PyObject *)__pyx_n_s_c);
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
- CYTHON_FALLTHROUGH;
- case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
- CYTHON_FALLTHROUGH;
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_shape)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_itemsize)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 1); __PYX_ERR(1, 122, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 2:
- if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_format)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 2); __PYX_ERR(1, 122, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 3:
- if (kw_args > 0) {
- PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_mode);
- if (value) { values[3] = value; kw_args--; }
- }
- CYTHON_FALLTHROUGH;
- case 4:
- if (kw_args > 0) {
- PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_allocate_buffer);
- if (value) { values[4] = value; kw_args--; }
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 122, __pyx_L3_error)
- }
- } else {
- switch (PyTuple_GET_SIZE(__pyx_args)) {
- case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
- CYTHON_FALLTHROUGH;
- case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
- CYTHON_FALLTHROUGH;
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- break;
- default: goto __pyx_L5_argtuple_error;
- }
- }
- __pyx_v_shape = ((PyObject*)values[0]);
- __pyx_v_itemsize = __Pyx_PyIndex_AsSsize_t(values[1]); if (unlikely((__pyx_v_itemsize == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 122, __pyx_L3_error)
- __pyx_v_format = values[2];
- __pyx_v_mode = values[3];
- if (values[4]) {
- __pyx_v_allocate_buffer = __Pyx_PyObject_IsTrue(values[4]); if (unlikely((__pyx_v_allocate_buffer == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 123, __pyx_L3_error)
- } else {
-
- /* "View.MemoryView":123
- *
- * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None,
- * mode="c", bint allocate_buffer=True): # <<<<<<<<<<<<<<
- *
- * cdef int idx
- */
- __pyx_v_allocate_buffer = ((int)1);
- }
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 122, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return -1;
- __pyx_L4_argument_unpacking_done:;
- if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_shape), (&PyTuple_Type), 1, "shape", 1))) __PYX_ERR(1, 122, __pyx_L1_error)
- if (unlikely(((PyObject *)__pyx_v_format) == Py_None)) {
- PyErr_Format(PyExc_TypeError, "Argument '%.200s' must not be None", "format"); __PYX_ERR(1, 122, __pyx_L1_error)
- }
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(((struct __pyx_array_obj *)__pyx_v_self), __pyx_v_shape, __pyx_v_itemsize, __pyx_v_format, __pyx_v_mode, __pyx_v_allocate_buffer);
-
- /* "View.MemoryView":122
- * cdef bint dtype_is_object
- *
- * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<<
- * mode="c", bint allocate_buffer=True):
- *
- */
-
- /* function exit code */
- goto __pyx_L0;
- __pyx_L1_error:;
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer) {
- int __pyx_v_idx;
- Py_ssize_t __pyx_v_i;
- Py_ssize_t __pyx_v_dim;
- PyObject **__pyx_v_p;
- char __pyx_v_order;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- Py_ssize_t __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_t_4;
- PyObject *__pyx_t_5 = NULL;
- PyObject *__pyx_t_6 = NULL;
- char *__pyx_t_7;
- int __pyx_t_8;
- Py_ssize_t __pyx_t_9;
- PyObject *__pyx_t_10 = NULL;
- Py_ssize_t __pyx_t_11;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__cinit__", 0);
- __Pyx_INCREF(__pyx_v_format);
-
- /* "View.MemoryView":129
- * cdef PyObject **p
- *
- * self.ndim = len(shape) # <<<<<<<<<<<<<<
- * self.itemsize = itemsize
- *
- */
- if (unlikely(__pyx_v_shape == Py_None)) {
- PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()");
- __PYX_ERR(1, 129, __pyx_L1_error)
- }
- __pyx_t_1 = PyTuple_GET_SIZE(__pyx_v_shape); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(1, 129, __pyx_L1_error)
- __pyx_v_self->ndim = ((int)__pyx_t_1);
-
- /* "View.MemoryView":130
- *
- * self.ndim = len(shape)
- * self.itemsize = itemsize # <<<<<<<<<<<<<<
- *
- * if not self.ndim:
- */
- __pyx_v_self->itemsize = __pyx_v_itemsize;
-
- /* "View.MemoryView":132
- * self.itemsize = itemsize
- *
- * if not self.ndim: # <<<<<<<<<<<<<<
- * raise ValueError("Empty shape tuple for cython.array")
- *
- */
- __pyx_t_2 = ((!(__pyx_v_self->ndim != 0)) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":133
- *
- * if not self.ndim:
- * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<<
- *
- * if itemsize <= 0:
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 133, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 133, __pyx_L1_error)
-
- /* "View.MemoryView":132
- * self.itemsize = itemsize
- *
- * if not self.ndim: # <<<<<<<<<<<<<<
- * raise ValueError("Empty shape tuple for cython.array")
- *
- */
- }
-
- /* "View.MemoryView":135
- * raise ValueError("Empty shape tuple for cython.array")
- *
- * if itemsize <= 0: # <<<<<<<<<<<<<<
- * raise ValueError("itemsize <= 0 for cython.array")
- *
- */
- __pyx_t_2 = ((__pyx_v_itemsize <= 0) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":136
- *
- * if itemsize <= 0:
- * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<<
- *
- * if not isinstance(format, bytes):
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 136, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 136, __pyx_L1_error)
-
- /* "View.MemoryView":135
- * raise ValueError("Empty shape tuple for cython.array")
- *
- * if itemsize <= 0: # <<<<<<<<<<<<<<
- * raise ValueError("itemsize <= 0 for cython.array")
- *
- */
- }
-
- /* "View.MemoryView":138
- * raise ValueError("itemsize <= 0 for cython.array")
- *
- * if not isinstance(format, bytes): # <<<<<<<<<<<<<<
- * format = format.encode('ASCII')
- * self._format = format # keep a reference to the byte string
- */
- __pyx_t_2 = PyBytes_Check(__pyx_v_format);
- __pyx_t_4 = ((!(__pyx_t_2 != 0)) != 0);
- if (__pyx_t_4) {
-
- /* "View.MemoryView":139
- *
- * if not isinstance(format, bytes):
- * format = format.encode('ASCII') # <<<<<<<<<<<<<<
- * self._format = format # keep a reference to the byte string
- * self.format = self._format
- */
- __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_format, __pyx_n_s_encode); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 139, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_6 = NULL;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) {
- __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5);
- if (likely(__pyx_t_6)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);
- __Pyx_INCREF(__pyx_t_6);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_5, function);
- }
- }
- __pyx_t_3 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_6, __pyx_n_s_ASCII) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_n_s_ASCII);
- __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
- if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 139, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_DECREF_SET(__pyx_v_format, __pyx_t_3);
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":138
- * raise ValueError("itemsize <= 0 for cython.array")
- *
- * if not isinstance(format, bytes): # <<<<<<<<<<<<<<
- * format = format.encode('ASCII')
- * self._format = format # keep a reference to the byte string
- */
- }
-
- /* "View.MemoryView":140
- * if not isinstance(format, bytes):
- * format = format.encode('ASCII')
- * self._format = format # keep a reference to the byte string # <<<<<<<<<<<<<<
- * self.format = self._format
- *
- */
- if (!(likely(PyBytes_CheckExact(__pyx_v_format))||((__pyx_v_format) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_v_format)->tp_name), 0))) __PYX_ERR(1, 140, __pyx_L1_error)
- __pyx_t_3 = __pyx_v_format;
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_GIVEREF(__pyx_t_3);
- __Pyx_GOTREF(__pyx_v_self->_format);
- __Pyx_DECREF(__pyx_v_self->_format);
- __pyx_v_self->_format = ((PyObject*)__pyx_t_3);
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":141
- * format = format.encode('ASCII')
- * self._format = format # keep a reference to the byte string
- * self.format = self._format # <<<<<<<<<<<<<<
- *
- *
- */
- if (unlikely(__pyx_v_self->_format == Py_None)) {
- PyErr_SetString(PyExc_TypeError, "expected bytes, NoneType found");
- __PYX_ERR(1, 141, __pyx_L1_error)
- }
- __pyx_t_7 = __Pyx_PyBytes_AsWritableString(__pyx_v_self->_format); if (unlikely((!__pyx_t_7) && PyErr_Occurred())) __PYX_ERR(1, 141, __pyx_L1_error)
- __pyx_v_self->format = __pyx_t_7;
-
- /* "View.MemoryView":144
- *
- *
- * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) # <<<<<<<<<<<<<<
- * self._strides = self._shape + self.ndim
- *
- */
- __pyx_v_self->_shape = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * __pyx_v_self->ndim) * 2)));
-
- /* "View.MemoryView":145
- *
- * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2)
- * self._strides = self._shape + self.ndim # <<<<<<<<<<<<<<
- *
- * if not self._shape:
- */
- __pyx_v_self->_strides = (__pyx_v_self->_shape + __pyx_v_self->ndim);
-
- /* "View.MemoryView":147
- * self._strides = self._shape + self.ndim
- *
- * if not self._shape: # <<<<<<<<<<<<<<
- * raise MemoryError("unable to allocate shape and strides.")
- *
- */
- __pyx_t_4 = ((!(__pyx_v_self->_shape != 0)) != 0);
- if (unlikely(__pyx_t_4)) {
-
- /* "View.MemoryView":148
- *
- * if not self._shape:
- * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 148, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 148, __pyx_L1_error)
-
- /* "View.MemoryView":147
- * self._strides = self._shape + self.ndim
- *
- * if not self._shape: # <<<<<<<<<<<<<<
- * raise MemoryError("unable to allocate shape and strides.")
- *
- */
- }
-
- /* "View.MemoryView":151
- *
- *
- * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<<
- * if dim <= 0:
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- */
- __pyx_t_8 = 0;
- __pyx_t_3 = __pyx_v_shape; __Pyx_INCREF(__pyx_t_3); __pyx_t_1 = 0;
- for (;;) {
- if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_3)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(1, 151, __pyx_L1_error)
- #else
- __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 151, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- #endif
- __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_5); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 151, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __pyx_v_dim = __pyx_t_9;
- __pyx_v_idx = __pyx_t_8;
- __pyx_t_8 = (__pyx_t_8 + 1);
-
- /* "View.MemoryView":152
- *
- * for idx, dim in enumerate(shape):
- * if dim <= 0: # <<<<<<<<<<<<<<
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- * self._shape[idx] = dim
- */
- __pyx_t_4 = ((__pyx_v_dim <= 0) != 0);
- if (unlikely(__pyx_t_4)) {
-
- /* "View.MemoryView":153
- * for idx, dim in enumerate(shape):
- * if dim <= 0:
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) # <<<<<<<<<<<<<<
- * self._shape[idx] = dim
- *
- */
- __pyx_t_5 = __Pyx_PyInt_From_int(__pyx_v_idx); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 153, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 153, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_10 = PyTuple_New(2); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 153, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_10);
- __Pyx_GIVEREF(__pyx_t_5);
- PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_6);
- PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_t_6);
- __pyx_t_5 = 0;
- __pyx_t_6 = 0;
- __pyx_t_6 = __Pyx_PyString_Format(__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_t_10); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 153, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_6); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 153, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_10);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_Raise(__pyx_t_10, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __PYX_ERR(1, 153, __pyx_L1_error)
-
- /* "View.MemoryView":152
- *
- * for idx, dim in enumerate(shape):
- * if dim <= 0: # <<<<<<<<<<<<<<
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- * self._shape[idx] = dim
- */
- }
-
- /* "View.MemoryView":154
- * if dim <= 0:
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- * self._shape[idx] = dim # <<<<<<<<<<<<<<
- *
- * cdef char order
- */
- (__pyx_v_self->_shape[__pyx_v_idx]) = __pyx_v_dim;
-
- /* "View.MemoryView":151
- *
- *
- * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<<
- * if dim <= 0:
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- */
- }
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":157
- *
- * cdef char order
- * if mode == 'fortran': # <<<<<<<<<<<<<<
- * order = b'F'
- * self.mode = u'fortran'
- */
- __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_fortran, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 157, __pyx_L1_error)
- if (__pyx_t_4) {
-
- /* "View.MemoryView":158
- * cdef char order
- * if mode == 'fortran':
- * order = b'F' # <<<<<<<<<<<<<<
- * self.mode = u'fortran'
- * elif mode == 'c':
- */
- __pyx_v_order = 'F';
-
- /* "View.MemoryView":159
- * if mode == 'fortran':
- * order = b'F'
- * self.mode = u'fortran' # <<<<<<<<<<<<<<
- * elif mode == 'c':
- * order = b'C'
- */
- __Pyx_INCREF(__pyx_n_u_fortran);
- __Pyx_GIVEREF(__pyx_n_u_fortran);
- __Pyx_GOTREF(__pyx_v_self->mode);
- __Pyx_DECREF(__pyx_v_self->mode);
- __pyx_v_self->mode = __pyx_n_u_fortran;
-
- /* "View.MemoryView":157
- *
- * cdef char order
- * if mode == 'fortran': # <<<<<<<<<<<<<<
- * order = b'F'
- * self.mode = u'fortran'
- */
- goto __pyx_L10;
- }
-
- /* "View.MemoryView":160
- * order = b'F'
- * self.mode = u'fortran'
- * elif mode == 'c': # <<<<<<<<<<<<<<
- * order = b'C'
- * self.mode = u'c'
- */
- __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_c, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 160, __pyx_L1_error)
- if (likely(__pyx_t_4)) {
-
- /* "View.MemoryView":161
- * self.mode = u'fortran'
- * elif mode == 'c':
- * order = b'C' # <<<<<<<<<<<<<<
- * self.mode = u'c'
- * else:
- */
- __pyx_v_order = 'C';
-
- /* "View.MemoryView":162
- * elif mode == 'c':
- * order = b'C'
- * self.mode = u'c' # <<<<<<<<<<<<<<
- * else:
- * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode)
- */
- __Pyx_INCREF(__pyx_n_u_c);
- __Pyx_GIVEREF(__pyx_n_u_c);
- __Pyx_GOTREF(__pyx_v_self->mode);
- __Pyx_DECREF(__pyx_v_self->mode);
- __pyx_v_self->mode = __pyx_n_u_c;
-
- /* "View.MemoryView":160
- * order = b'F'
- * self.mode = u'fortran'
- * elif mode == 'c': # <<<<<<<<<<<<<<
- * order = b'C'
- * self.mode = u'c'
- */
- goto __pyx_L10;
- }
-
- /* "View.MemoryView":164
- * self.mode = u'c'
- * else:
- * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) # <<<<<<<<<<<<<<
- *
- * self.len = fill_contig_strides_array(self._shape, self._strides,
- */
- /*else*/ {
- __pyx_t_3 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_v_mode); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 164, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 164, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_10);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_Raise(__pyx_t_10, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __PYX_ERR(1, 164, __pyx_L1_error)
- }
- __pyx_L10:;
-
- /* "View.MemoryView":166
- * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode)
- *
- * self.len = fill_contig_strides_array(self._shape, self._strides, # <<<<<<<<<<<<<<
- * itemsize, self.ndim, order)
- *
- */
- __pyx_v_self->len = __pyx_fill_contig_strides_array(__pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_itemsize, __pyx_v_self->ndim, __pyx_v_order);
-
- /* "View.MemoryView":169
- * itemsize, self.ndim, order)
- *
- * self.free_data = allocate_buffer # <<<<<<<<<<<<<<
- * self.dtype_is_object = format == b'O'
- * if allocate_buffer:
- */
- __pyx_v_self->free_data = __pyx_v_allocate_buffer;
-
- /* "View.MemoryView":170
- *
- * self.free_data = allocate_buffer
- * self.dtype_is_object = format == b'O' # <<<<<<<<<<<<<<
- * if allocate_buffer:
- *
- */
- __pyx_t_10 = PyObject_RichCompare(__pyx_v_format, __pyx_n_b_O, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 170, __pyx_L1_error)
- __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 170, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __pyx_v_self->dtype_is_object = __pyx_t_4;
-
- /* "View.MemoryView":171
- * self.free_data = allocate_buffer
- * self.dtype_is_object = format == b'O'
- * if allocate_buffer: # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_4 = (__pyx_v_allocate_buffer != 0);
- if (__pyx_t_4) {
-
- /* "View.MemoryView":174
- *
- *
- * self.data = malloc(self.len) # <<<<<<<<<<<<<<
- * if not self.data:
- * raise MemoryError("unable to allocate array data.")
- */
- __pyx_v_self->data = ((char *)malloc(__pyx_v_self->len));
-
- /* "View.MemoryView":175
- *
- * self.data = malloc(self.len)
- * if not self.data: # <<<<<<<<<<<<<<
- * raise MemoryError("unable to allocate array data.")
- *
- */
- __pyx_t_4 = ((!(__pyx_v_self->data != 0)) != 0);
- if (unlikely(__pyx_t_4)) {
-
- /* "View.MemoryView":176
- * self.data = malloc(self.len)
- * if not self.data:
- * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<<
- *
- * if self.dtype_is_object:
- */
- __pyx_t_10 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 176, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_10);
- __Pyx_Raise(__pyx_t_10, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __PYX_ERR(1, 176, __pyx_L1_error)
-
- /* "View.MemoryView":175
- *
- * self.data = malloc(self.len)
- * if not self.data: # <<<<<<<<<<<<<<
- * raise MemoryError("unable to allocate array data.")
- *
- */
- }
-
- /* "View.MemoryView":178
- * raise MemoryError("unable to allocate array data.")
- *
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * p = self.data
- * for i in range(self.len / itemsize):
- */
- __pyx_t_4 = (__pyx_v_self->dtype_is_object != 0);
- if (__pyx_t_4) {
-
- /* "View.MemoryView":179
- *
- * if self.dtype_is_object:
- * p = self.data # <<<<<<<<<<<<<<
- * for i in range(self.len / itemsize):
- * p[i] = Py_None
- */
- __pyx_v_p = ((PyObject **)__pyx_v_self->data);
-
- /* "View.MemoryView":180
- * if self.dtype_is_object:
- * p = self.data
- * for i in range(self.len / itemsize): # <<<<<<<<<<<<<<
- * p[i] = Py_None
- * Py_INCREF(Py_None)
- */
- if (unlikely(__pyx_v_itemsize == 0)) {
- PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero");
- __PYX_ERR(1, 180, __pyx_L1_error)
- }
- else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_self->len))) {
- PyErr_SetString(PyExc_OverflowError, "value too large to perform division");
- __PYX_ERR(1, 180, __pyx_L1_error)
- }
- __pyx_t_1 = __Pyx_div_Py_ssize_t(__pyx_v_self->len, __pyx_v_itemsize);
- __pyx_t_9 = __pyx_t_1;
- for (__pyx_t_11 = 0; __pyx_t_11 < __pyx_t_9; __pyx_t_11+=1) {
- __pyx_v_i = __pyx_t_11;
-
- /* "View.MemoryView":181
- * p = self.data
- * for i in range(self.len / itemsize):
- * p[i] = Py_None # <<<<<<<<<<<<<<
- * Py_INCREF(Py_None)
- *
- */
- (__pyx_v_p[__pyx_v_i]) = Py_None;
-
- /* "View.MemoryView":182
- * for i in range(self.len / itemsize):
- * p[i] = Py_None
- * Py_INCREF(Py_None) # <<<<<<<<<<<<<<
- *
- * @cname('getbuffer')
- */
- Py_INCREF(Py_None);
- }
-
- /* "View.MemoryView":178
- * raise MemoryError("unable to allocate array data.")
- *
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * p = self.data
- * for i in range(self.len / itemsize):
- */
- }
-
- /* "View.MemoryView":171
- * self.free_data = allocate_buffer
- * self.dtype_is_object = format == b'O'
- * if allocate_buffer: # <<<<<<<<<<<<<<
- *
- *
- */
- }
-
- /* "View.MemoryView":122
- * cdef bint dtype_is_object
- *
- * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<<
- * mode="c", bint allocate_buffer=True):
- *
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_XDECREF(__pyx_t_10);
- __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_format);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
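-/* The __cinit__ above, and the remaining View.MemoryView sections, are Cython's
- * generated support code for cython.view.array and typed memoryviews, not part of
- * monotonic_align itself. A minimal sketch of how such an array is constructed from
- * Cython code (assumed typical usage, matching the
- * __cinit__(shape, itemsize, format, mode="c", allocate_buffer=True) signature
- * quoted above; names are illustrative):
- *
- *   from cython.view cimport array as cvarray
- *
- *   cdef cvarray a = cvarray(shape=(4, 4), itemsize=sizeof(int), format="i",
- *                            mode="c", allocate_buffer=True)
- *   cdef int[:, ::1] mv = a   # expose the buffer as a typed memoryview
- */
-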
-/* "View.MemoryView":185
- *
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<<
- * cdef int bufmode = -1
- * if self.mode == u"c":
- */
-
-/* Python wrapper */
-static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
-static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(((struct __pyx_array_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
- int __pyx_v_bufmode;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- char *__pyx_t_4;
- Py_ssize_t __pyx_t_5;
- int __pyx_t_6;
- Py_ssize_t *__pyx_t_7;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- if (__pyx_v_info == NULL) {
- PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete");
- return -1;
- }
- __Pyx_RefNannySetupContext("__getbuffer__", 0);
- __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None);
- __Pyx_GIVEREF(__pyx_v_info->obj);
-
- /* "View.MemoryView":186
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * cdef int bufmode = -1 # <<<<<<<<<<<<<<
- * if self.mode == u"c":
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- */
- __pyx_v_bufmode = -1;
-
- /* "View.MemoryView":187
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * cdef int bufmode = -1
- * if self.mode == u"c": # <<<<<<<<<<<<<<
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran":
- */
- __pyx_t_1 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_c, Py_EQ)); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 187, __pyx_L1_error)
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":188
- * cdef int bufmode = -1
- * if self.mode == u"c":
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<<
- * elif self.mode == u"fortran":
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- */
- __pyx_v_bufmode = (PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS);
-
- /* "View.MemoryView":187
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * cdef int bufmode = -1
- * if self.mode == u"c": # <<<<<<<<<<<<<<
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran":
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":189
- * if self.mode == u"c":
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran": # <<<<<<<<<<<<<<
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode):
- */
- __pyx_t_2 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_fortran, Py_EQ)); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 189, __pyx_L1_error)
- __pyx_t_1 = (__pyx_t_2 != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":190
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran":
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<<
- * if not (flags & bufmode):
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- */
- __pyx_v_bufmode = (PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS);
-
- /* "View.MemoryView":189
- * if self.mode == u"c":
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran": # <<<<<<<<<<<<<<
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode):
- */
- }
- __pyx_L3:;
-
- /* "View.MemoryView":191
- * elif self.mode == u"fortran":
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode): # <<<<<<<<<<<<<<
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- * info.buf = self.data
- */
- __pyx_t_1 = ((!((__pyx_v_flags & __pyx_v_bufmode) != 0)) != 0);
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":192
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode):
- * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<<
- * info.buf = self.data
- * info.len = self.len
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 192, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 192, __pyx_L1_error)
-
- /* "View.MemoryView":191
- * elif self.mode == u"fortran":
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode): # <<<<<<<<<<<<<<
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- * info.buf = self.data
- */
- }
-
- /* "View.MemoryView":193
- * if not (flags & bufmode):
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- * info.buf = self.data # <<<<<<<<<<<<<<
- * info.len = self.len
- * info.ndim = self.ndim
- */
- __pyx_t_4 = __pyx_v_self->data;
- __pyx_v_info->buf = __pyx_t_4;
-
- /* "View.MemoryView":194
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- * info.buf = self.data
- * info.len = self.len # <<<<<<<<<<<<<<
- * info.ndim = self.ndim
- * info.shape = self._shape
- */
- __pyx_t_5 = __pyx_v_self->len;
- __pyx_v_info->len = __pyx_t_5;
-
- /* "View.MemoryView":195
- * info.buf = self.data
- * info.len = self.len
- * info.ndim = self.ndim # <<<<<<<<<<<<<<
- * info.shape = self._shape
- * info.strides = self._strides
- */
- __pyx_t_6 = __pyx_v_self->ndim;
- __pyx_v_info->ndim = __pyx_t_6;
-
- /* "View.MemoryView":196
- * info.len = self.len
- * info.ndim = self.ndim
- * info.shape = self._shape # <<<<<<<<<<<<<<
- * info.strides = self._strides
- * info.suboffsets = NULL
- */
- __pyx_t_7 = __pyx_v_self->_shape;
- __pyx_v_info->shape = __pyx_t_7;
-
- /* "View.MemoryView":197
- * info.ndim = self.ndim
- * info.shape = self._shape
- * info.strides = self._strides # <<<<<<<<<<<<<<
- * info.suboffsets = NULL
- * info.itemsize = self.itemsize
- */
- __pyx_t_7 = __pyx_v_self->_strides;
- __pyx_v_info->strides = __pyx_t_7;
-
- /* "View.MemoryView":198
- * info.shape = self._shape
- * info.strides = self._strides
- * info.suboffsets = NULL # <<<<<<<<<<<<<<
- * info.itemsize = self.itemsize
- * info.readonly = 0
- */
- __pyx_v_info->suboffsets = NULL;
-
- /* "View.MemoryView":199
- * info.strides = self._strides
- * info.suboffsets = NULL
- * info.itemsize = self.itemsize # <<<<<<<<<<<<<<
- * info.readonly = 0
- *
- */
- __pyx_t_5 = __pyx_v_self->itemsize;
- __pyx_v_info->itemsize = __pyx_t_5;
-
- /* "View.MemoryView":200
- * info.suboffsets = NULL
- * info.itemsize = self.itemsize
- * info.readonly = 0 # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_FORMAT:
- */
- __pyx_v_info->readonly = 0;
-
- /* "View.MemoryView":202
- * info.readonly = 0
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * info.format = self.format
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":203
- *
- * if flags & PyBUF_FORMAT:
- * info.format = self.format # <<<<<<<<<<<<<<
- * else:
- * info.format = NULL
- */
- __pyx_t_4 = __pyx_v_self->format;
- __pyx_v_info->format = __pyx_t_4;
-
- /* "View.MemoryView":202
- * info.readonly = 0
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * info.format = self.format
- * else:
- */
- goto __pyx_L5;
- }
-
- /* "View.MemoryView":205
- * info.format = self.format
- * else:
- * info.format = NULL # <<<<<<<<<<<<<<
- *
- * info.obj = self
- */
- /*else*/ {
- __pyx_v_info->format = NULL;
- }
- __pyx_L5:;
-
- /* "View.MemoryView":207
- * info.format = NULL
- *
- * info.obj = self # <<<<<<<<<<<<<<
- *
- * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)")
- */
- __Pyx_INCREF(((PyObject *)__pyx_v_self));
- __Pyx_GIVEREF(((PyObject *)__pyx_v_self));
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj);
- __pyx_v_info->obj = ((PyObject *)__pyx_v_self);
-
- /* "View.MemoryView":185
- *
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<<
- * cdef int bufmode = -1
- * if self.mode == u"c":
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.array.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- if (__pyx_v_info->obj != NULL) {
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;
- }
- goto __pyx_L2;
- __pyx_L0:;
- if (__pyx_v_info->obj == Py_None) {
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;
- }
- __pyx_L2:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":211
- * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)")
- *
- * def __dealloc__(array self): # <<<<<<<<<<<<<<
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data)
- */
-
-/* Python wrapper */
-static void __pyx_array___dealloc__(PyObject *__pyx_v_self); /*proto*/
-static void __pyx_array___dealloc__(PyObject *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0);
- __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(((struct __pyx_array_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- __Pyx_RefNannySetupContext("__dealloc__", 0);
-
- /* "View.MemoryView":212
- *
- * def __dealloc__(array self):
- * if self.callback_free_data != NULL: # <<<<<<<<<<<<<<
- * self.callback_free_data(self.data)
- * elif self.free_data:
- */
- __pyx_t_1 = ((__pyx_v_self->callback_free_data != NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":213
- * def __dealloc__(array self):
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data) # <<<<<<<<<<<<<<
- * elif self.free_data:
- * if self.dtype_is_object:
- */
- __pyx_v_self->callback_free_data(__pyx_v_self->data);
-
- /* "View.MemoryView":212
- *
- * def __dealloc__(array self):
- * if self.callback_free_data != NULL: # <<<<<<<<<<<<<<
- * self.callback_free_data(self.data)
- * elif self.free_data:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":214
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data)
- * elif self.free_data: # <<<<<<<<<<<<<<
- * if self.dtype_is_object:
- * refcount_objects_in_slice(self.data, self._shape,
- */
- __pyx_t_1 = (__pyx_v_self->free_data != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":215
- * self.callback_free_data(self.data)
- * elif self.free_data:
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * refcount_objects_in_slice(self.data, self._shape,
- * self._strides, self.ndim, False)
- */
- __pyx_t_1 = (__pyx_v_self->dtype_is_object != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":216
- * elif self.free_data:
- * if self.dtype_is_object:
- * refcount_objects_in_slice(self.data, self._shape, # <<<<<<<<<<<<<<
- * self._strides, self.ndim, False)
- * free(self.data)
- */
- __pyx_memoryview_refcount_objects_in_slice(__pyx_v_self->data, __pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_self->ndim, 0);
-
- /* "View.MemoryView":215
- * self.callback_free_data(self.data)
- * elif self.free_data:
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * refcount_objects_in_slice(self.data, self._shape,
- * self._strides, self.ndim, False)
- */
- }
-
- /* "View.MemoryView":218
- * refcount_objects_in_slice(self.data, self._shape,
- * self._strides, self.ndim, False)
- * free(self.data) # <<<<<<<<<<<<<<
- * PyObject_Free(self._shape)
- *
- */
- free(__pyx_v_self->data);
-
- /* "View.MemoryView":214
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data)
- * elif self.free_data: # <<<<<<<<<<<<<<
- * if self.dtype_is_object:
- * refcount_objects_in_slice(self.data, self._shape,
- */
- }
- __pyx_L3:;
-
- /* "View.MemoryView":219
- * self._strides, self.ndim, False)
- * free(self.data)
- * PyObject_Free(self._shape) # <<<<<<<<<<<<<<
- *
- * @property
- */
- PyObject_Free(__pyx_v_self->_shape);
-
- /* "View.MemoryView":211
- * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)")
- *
- * def __dealloc__(array self): # <<<<<<<<<<<<<<
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data)
- */
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-/* "View.MemoryView":222
- *
- * @property
- * def memview(self): # <<<<<<<<<<<<<<
- * return self.get_memview()
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_5array_7memview___get__(((struct __pyx_array_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":223
- * @property
- * def memview(self):
- * return self.get_memview() # <<<<<<<<<<<<<<
- *
- * @cname('get_memview')
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = ((struct __pyx_vtabstruct_array *)__pyx_v_self->__pyx_vtab)->get_memview(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 223, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":222
- *
- * @property
- * def memview(self): # <<<<<<<<<<<<<<
- * return self.get_memview()
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.array.memview.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":226
- *
- * @cname('get_memview')
- * cdef get_memview(self): # <<<<<<<<<<<<<<
- * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE
- * return memoryview(self, flags, self.dtype_is_object)
- */
-
-static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self) {
- int __pyx_v_flags;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("get_memview", 0);
-
- /* "View.MemoryView":227
- * @cname('get_memview')
- * cdef get_memview(self):
- * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE # <<<<<<<<<<<<<<
- * return memoryview(self, flags, self.dtype_is_object)
- *
- */
- __pyx_v_flags = ((PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT) | PyBUF_WRITABLE);
-
- /* "View.MemoryView":228
- * cdef get_memview(self):
- * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE
- * return memoryview(self, flags, self.dtype_is_object) # <<<<<<<<<<<<<<
- *
- * def __len__(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 228, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 228, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 228, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_INCREF(((PyObject *)__pyx_v_self));
- __Pyx_GIVEREF(((PyObject *)__pyx_v_self));
- PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_self));
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2);
- __pyx_t_1 = 0;
- __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 228, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":226
- *
- * @cname('get_memview')
- * cdef get_memview(self): # <<<<<<<<<<<<<<
- * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE
- * return memoryview(self, flags, self.dtype_is_object)
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.array.get_memview", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":230
- * return memoryview(self, flags, self.dtype_is_object)
- *
- * def __len__(self): # <<<<<<<<<<<<<<
- * return self._shape[0]
- *
- */
-
-/* Python wrapper */
-static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self); /*proto*/
-static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self) {
- Py_ssize_t __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__len__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(((struct __pyx_array_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self) {
- Py_ssize_t __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__len__", 0);
-
- /* "View.MemoryView":231
- *
- * def __len__(self):
- * return self._shape[0] # <<<<<<<<<<<<<<
- *
- * def __getattr__(self, attr):
- */
- __pyx_r = (__pyx_v_self->_shape[0]);
- goto __pyx_L0;
-
- /* "View.MemoryView":230
- * return memoryview(self, flags, self.dtype_is_object)
- *
- * def __len__(self): # <<<<<<<<<<<<<<
- * return self._shape[0]
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":233
- * return self._shape[0]
- *
- * def __getattr__(self, attr): # <<<<<<<<<<<<<<
- * return getattr(self.memview, attr)
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr); /*proto*/
-static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getattr__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_attr));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__getattr__", 0);
-
- /* "View.MemoryView":234
- *
- * def __getattr__(self, attr):
- * return getattr(self.memview, attr) # <<<<<<<<<<<<<<
- *
- * def __getitem__(self, item):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 234, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_GetAttr(__pyx_t_1, __pyx_v_attr); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 234, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":233
- * return self._shape[0]
- *
- * def __getattr__(self, attr): # <<<<<<<<<<<<<<
- * return getattr(self.memview, attr)
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.array.__getattr__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":236
- * return getattr(self.memview, attr)
- *
- * def __getitem__(self, item): # <<<<<<<<<<<<<<
- * return self.memview[item]
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item); /*proto*/
-static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__getitem__", 0);
-
- /* "View.MemoryView":237
- *
- * def __getitem__(self, item):
- * return self.memview[item] # <<<<<<<<<<<<<<
- *
- * def __setitem__(self, item, value):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 237, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_item); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 237, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":236
- * return getattr(self.memview, attr)
- *
- * def __getitem__(self, item): # <<<<<<<<<<<<<<
- * return self.memview[item]
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.array.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":239
- * return self.memview[item]
- *
- * def __setitem__(self, item, value): # <<<<<<<<<<<<<<
- * self.memview[item] = value
- *
- */
-
-/* Python wrapper */
-static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /*proto*/
-static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item), ((PyObject *)__pyx_v_value));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setitem__", 0);
-
- /* "View.MemoryView":240
- *
- * def __setitem__(self, item, value):
- * self.memview[item] = value # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 240, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (unlikely(PyObject_SetItem(__pyx_t_1, __pyx_v_item, __pyx_v_value) < 0)) __PYX_ERR(1, 240, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-
- /* "View.MemoryView":239
- * return self.memview[item]
- *
- * def __setitem__(self, item, value): # <<<<<<<<<<<<<<
- * self.memview[item] = value
- *
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.array.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_array___reduce_cython__(((struct __pyx_array_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__reduce_cython__", 0);
-
- /* "(tree fragment)":2
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 2, __pyx_L1_error)
-
- /* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.array.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
-static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_array_2__setstate_cython__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setstate_cython__", 0);
-
- /* "(tree fragment)":4
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 4, __pyx_L1_error)
-
- /* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.array.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":244
- *
- * @cname("__pyx_array_new")
- * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<<
- * char *mode, char *buf):
- * cdef array result
- */
-
-static struct __pyx_array_obj *__pyx_array_new(PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, char *__pyx_v_format, char *__pyx_v_mode, char *__pyx_v_buf) {
- struct __pyx_array_obj *__pyx_v_result = 0;
- struct __pyx_array_obj *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("array_cwrapper", 0);
-
- /* "View.MemoryView":248
- * cdef array result
- *
- * if buf == NULL: # <<<<<<<<<<<<<<
- * result = array(shape, itemsize, format, mode.decode('ASCII'))
- * else:
- */
- __pyx_t_1 = ((__pyx_v_buf == NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":249
- *
- * if buf == NULL:
- * result = array(shape, itemsize, format, mode.decode('ASCII')) # <<<<<<<<<<<<<<
- * else:
- * result = array(shape, itemsize, format, mode.decode('ASCII'),
- */
- __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 249, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 249, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 249, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 249, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_INCREF(__pyx_v_shape);
- __Pyx_GIVEREF(__pyx_v_shape);
- PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_shape);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2);
- __Pyx_GIVEREF(__pyx_t_3);
- PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4);
- __pyx_t_2 = 0;
- __pyx_t_3 = 0;
- __pyx_t_4 = 0;
- __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 249, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_4);
- __pyx_t_4 = 0;
-
- /* "View.MemoryView":248
- * cdef array result
- *
- * if buf == NULL: # <<<<<<<<<<<<<<
- * result = array(shape, itemsize, format, mode.decode('ASCII'))
- * else:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":251
- * result = array(shape, itemsize, format, mode.decode('ASCII'))
- * else:
- * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<<
- * allocate_buffer=False)
- * result.data = buf
- */
- /*else*/ {
- __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 251, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_5 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 251, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 251, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_2 = PyTuple_New(4); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 251, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_INCREF(__pyx_v_shape);
- __Pyx_GIVEREF(__pyx_v_shape);
- PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_shape);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4);
- __Pyx_GIVEREF(__pyx_t_5);
- PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_3);
- PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_t_3);
- __pyx_t_4 = 0;
- __pyx_t_5 = 0;
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":252
- * else:
- * result = array(shape, itemsize, format, mode.decode('ASCII'),
- * allocate_buffer=False) # <<<<<<<<<<<<<<
- * result.data = buf
- *
- */
- __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 252, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_allocate_buffer, Py_False) < 0) __PYX_ERR(1, 252, __pyx_L1_error)
-
- /* "View.MemoryView":251
- * result = array(shape, itemsize, format, mode.decode('ASCII'))
- * else:
- * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<<
- * allocate_buffer=False)
- * result.data = buf
- */
- __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 251, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_5);
- __pyx_t_5 = 0;
-
- /* "View.MemoryView":253
- * result = array(shape, itemsize, format, mode.decode('ASCII'),
- * allocate_buffer=False)
- * result.data = buf # <<<<<<<<<<<<<<
- *
- * return result
- */
- __pyx_v_result->data = __pyx_v_buf;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":255
- * result.data = buf
- *
- * return result # <<<<<<<<<<<<<<
- *
- *
- */
- __Pyx_XDECREF(((PyObject *)__pyx_r));
- __Pyx_INCREF(((PyObject *)__pyx_v_result));
- __pyx_r = __pyx_v_result;
- goto __pyx_L0;
-
- /* "View.MemoryView":244
- *
- * @cname("__pyx_array_new")
- * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<<
- * char *mode, char *buf):
- * cdef array result
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.array_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_result);
- __Pyx_XGIVEREF((PyObject *)__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":281
- * cdef class Enum(object):
- * cdef object name
- * def __init__(self, name): # <<<<<<<<<<<<<<
- * self.name = name
- * def __repr__(self):
- */
-
-/* Python wrapper */
-static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_name = 0;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__init__ (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_name,0};
- PyObject* values[1] = {0};
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_name)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(1, 281, __pyx_L3_error)
- }
- } else if (PyTuple_GET_SIZE(__pyx_args) != 1) {
- goto __pyx_L5_argtuple_error;
- } else {
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- }
- __pyx_v_name = values[0];
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("__init__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 281, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("View.MemoryView.Enum.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return -1;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), __pyx_v_name);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__init__", 0);
-
- /* "View.MemoryView":282
- * cdef object name
- * def __init__(self, name):
- * self.name = name # <<<<<<<<<<<<<<
- * def __repr__(self):
- * return self.name
- */
- __Pyx_INCREF(__pyx_v_name);
- __Pyx_GIVEREF(__pyx_v_name);
- __Pyx_GOTREF(__pyx_v_self->name);
- __Pyx_DECREF(__pyx_v_self->name);
- __pyx_v_self->name = __pyx_v_name;
-
- /* "View.MemoryView":281
- * cdef class Enum(object):
- * cdef object name
- * def __init__(self, name): # <<<<<<<<<<<<<<
- * self.name = name
- * def __repr__(self):
- */
-
- /* function exit code */
- __pyx_r = 0;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":283
- * def __init__(self, name):
- * self.name = name
- * def __repr__(self): # <<<<<<<<<<<<<<
- * return self.name
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0);
- __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__repr__", 0);
-
- /* "View.MemoryView":284
- * self.name = name
- * def __repr__(self):
- * return self.name # <<<<<<<<<<<<<<
- *
- * cdef generic = Enum("")
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_self->name);
- __pyx_r = __pyx_v_self->name;
- goto __pyx_L0;
-
- /* "View.MemoryView":283
- * def __init__(self, name):
- * self.name = name
- * def __repr__(self): # <<<<<<<<<<<<<<
- * return self.name
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * cdef tuple state
- * cdef object _dict
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_MemviewEnum___reduce_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self) {
- PyObject *__pyx_v_state = 0;
- PyObject *__pyx_v__dict = 0;
- int __pyx_v_use_setstate;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_t_2;
- int __pyx_t_3;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__reduce_cython__", 0);
-
- /* "(tree fragment)":5
- * cdef object _dict
- * cdef bint use_setstate
- * state = (self.name,) # <<<<<<<<<<<<<<
- * _dict = getattr(self, '__dict__', None)
- * if _dict is not None:
- */
- __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(__pyx_v_self->name);
- __Pyx_GIVEREF(__pyx_v_self->name);
- PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_self->name);
- __pyx_v_state = ((PyObject*)__pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "(tree fragment)":6
- * cdef bint use_setstate
- * state = (self.name,)
- * _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<<
- * if _dict is not None:
- * state += (_dict,)
- */
- __pyx_t_1 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_v__dict = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "(tree fragment)":7
- * state = (self.name,)
- * _dict = getattr(self, '__dict__', None)
- * if _dict is not None: # <<<<<<<<<<<<<<
- * state += (_dict,)
- * use_setstate = True
- */
- __pyx_t_2 = (__pyx_v__dict != Py_None);
- __pyx_t_3 = (__pyx_t_2 != 0);
- if (__pyx_t_3) {
-
- /* "(tree fragment)":8
- * _dict = getattr(self, '__dict__', None)
- * if _dict is not None:
- * state += (_dict,) # <<<<<<<<<<<<<<
- * use_setstate = True
- * else:
- */
- __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 8, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(__pyx_v__dict);
- __Pyx_GIVEREF(__pyx_v__dict);
- PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v__dict);
- __pyx_t_4 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 8, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_4));
- __pyx_t_4 = 0;
-
- /* "(tree fragment)":9
- * if _dict is not None:
- * state += (_dict,)
- * use_setstate = True # <<<<<<<<<<<<<<
- * else:
- * use_setstate = self.name is not None
- */
- __pyx_v_use_setstate = 1;
-
- /* "(tree fragment)":7
- * state = (self.name,)
- * _dict = getattr(self, '__dict__', None)
- * if _dict is not None: # <<<<<<<<<<<<<<
- * state += (_dict,)
- * use_setstate = True
- */
- goto __pyx_L3;
- }
-
- /* "(tree fragment)":11
- * use_setstate = True
- * else:
- * use_setstate = self.name is not None # <<<<<<<<<<<<<<
- * if use_setstate:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state
- */
- /*else*/ {
- __pyx_t_3 = (__pyx_v_self->name != Py_None);
- __pyx_v_use_setstate = __pyx_t_3;
- }
- __pyx_L3:;
-
- /* "(tree fragment)":12
- * else:
- * use_setstate = self.name is not None
- * if use_setstate: # <<<<<<<<<<<<<<
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state
- * else:
- */
- __pyx_t_3 = (__pyx_v_use_setstate != 0);
- if (__pyx_t_3) {
-
- /* "(tree fragment)":13
- * use_setstate = self.name is not None
- * if use_setstate:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state # <<<<<<<<<<<<<<
- * else:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state)
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- __Pyx_INCREF(__pyx_int_184977713);
- __Pyx_GIVEREF(__pyx_int_184977713);
- PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713);
- __Pyx_INCREF(Py_None);
- __Pyx_GIVEREF(Py_None);
- PyTuple_SET_ITEM(__pyx_t_1, 2, Py_None);
- __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 13, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_1);
- __Pyx_INCREF(__pyx_v_state);
- __Pyx_GIVEREF(__pyx_v_state);
- PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_v_state);
- __pyx_t_4 = 0;
- __pyx_t_1 = 0;
- __pyx_r = __pyx_t_5;
- __pyx_t_5 = 0;
- goto __pyx_L0;
-
- /* "(tree fragment)":12
- * else:
- * use_setstate = self.name is not None
- * if use_setstate: # <<<<<<<<<<<<<<
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state
- * else:
- */
- }
-
- /* "(tree fragment)":15
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state
- * else:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * __pyx_unpickle_Enum__set_state(self, __pyx_state)
- */
- /*else*/ {
- __Pyx_XDECREF(__pyx_r);
- __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 15, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- __Pyx_INCREF(__pyx_int_184977713);
- __Pyx_GIVEREF(__pyx_int_184977713);
- PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713);
- __Pyx_INCREF(__pyx_v_state);
- __Pyx_GIVEREF(__pyx_v_state);
- PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state);
- __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_GIVEREF(__pyx_t_5);
- PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1);
- __pyx_t_5 = 0;
- __pyx_t_1 = 0;
- __pyx_r = __pyx_t_4;
- __pyx_t_4 = 0;
- goto __pyx_L0;
- }
-
- /* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * cdef tuple state
- * cdef object _dict
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.Enum.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_state);
- __Pyx_XDECREF(__pyx_v__dict);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
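/* Editor's note -- annotation, not part of the removed generated file:
 * __reduce_cython__ above emits the two standard pickle shapes:
 * (reconstructor, args, state) when a __setstate__ pass is needed (an
 * instance __dict__ exists, or name is not None), and (reconstructor,
 * args-with-state-folded-in) otherwise. The constant 0xb068931
 * (184977713) is a checksum of the type's field layout that the
 * generated __pyx_unpickle_Enum re-checks at unpickle time, so pickles
 * from a mismatched build fail loudly instead of corrupting the object.
 */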
-
-/* "(tree fragment)":16
- * else:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state)
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * __pyx_unpickle_Enum__set_state(self, __pyx_state)
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
-static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_MemviewEnum_2__setstate_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setstate_cython__", 0);
-
- /* "(tree fragment)":17
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state)
- * def __setstate_cython__(self, __pyx_state):
- * __pyx_unpickle_Enum__set_state(self, __pyx_state) # <<<<<<<<<<<<<<
- */
- if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 17, __pyx_L1_error)
- __pyx_t_1 = __pyx_unpickle_Enum__set_state(__pyx_v_self, ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-
- /* "(tree fragment)":16
- * else:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state)
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * __pyx_unpickle_Enum__set_state(self, __pyx_state)
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.Enum.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":298
- *
- * @cname('__pyx_align_pointer')
- * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<<
- * "Align pointer memory on a given boundary"
- * cdef Py_intptr_t aligned_p = memory
- */
-
-static void *__pyx_align_pointer(void *__pyx_v_memory, size_t __pyx_v_alignment) {
- Py_intptr_t __pyx_v_aligned_p;
- size_t __pyx_v_offset;
- void *__pyx_r;
- int __pyx_t_1;
-
- /* "View.MemoryView":300
- * cdef void *align_pointer(void *memory, size_t alignment) nogil:
- * "Align pointer memory on a given boundary"
- * cdef Py_intptr_t aligned_p = memory # <<<<<<<<<<<<<<
- * cdef size_t offset
- *
- */
- __pyx_v_aligned_p = ((Py_intptr_t)__pyx_v_memory);
-
- /* "View.MemoryView":304
- *
- * with cython.cdivision(True):
- * offset = aligned_p % alignment # <<<<<<<<<<<<<<
- *
- * if offset > 0:
- */
- __pyx_v_offset = (__pyx_v_aligned_p % __pyx_v_alignment);
-
- /* "View.MemoryView":306
- * offset = aligned_p % alignment
- *
- * if offset > 0: # <<<<<<<<<<<<<<
- * aligned_p += alignment - offset
- *
- */
- __pyx_t_1 = ((__pyx_v_offset > 0) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":307
- *
- * if offset > 0:
- * aligned_p += alignment - offset # <<<<<<<<<<<<<<
- *
- * return aligned_p
- */
- __pyx_v_aligned_p = (__pyx_v_aligned_p + (__pyx_v_alignment - __pyx_v_offset));
-
- /* "View.MemoryView":306
- * offset = aligned_p % alignment
- *
- * if offset > 0: # <<<<<<<<<<<<<<
- * aligned_p += alignment - offset
- *
- */
- }
-
- /* "View.MemoryView":309
- * aligned_p += alignment - offset
- *
- * return aligned_p # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = ((void *)__pyx_v_aligned_p);
- goto __pyx_L0;
-
- /* "View.MemoryView":298
- *
- * @cname('__pyx_align_pointer')
- * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<<
- * "Align pointer memory on a given boundary"
- * cdef Py_intptr_t aligned_p = memory
- */
-
- /* function exit code */
- __pyx_L0:;
- return __pyx_r;
-}
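/* Editor's note -- annotation, not part of the removed generated file:
 * align_pointer above rounds an address up to the next multiple of
 * `alignment`: e.g. 0x1003 with alignment 8 has offset 3 and aligns to
 * 0x1008. A standalone sketch of the same round-up in portable C: */

#include <stddef.h>
#include <stdint.h>

static void *align_up(void *p, size_t alignment) {
    uintptr_t addr = (uintptr_t)p;
    size_t offset = addr % alignment;   /* distance past the previous boundary */
    if (offset > 0)
        addr += alignment - offset;     /* bump up to the next boundary */
    return (void *)addr;
}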
-
-/* "View.MemoryView":345
- * cdef __Pyx_TypeInfo *typeinfo
- *
- * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<<
- * self.obj = obj
- * self.flags = flags
- */
-
-/* Python wrapper */
-static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_obj = 0;
- int __pyx_v_flags;
- int __pyx_v_dtype_is_object;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_obj,&__pyx_n_s_flags,&__pyx_n_s_dtype_is_object,0};
- PyObject* values[3] = {0,0,0};
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_obj)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_flags)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, 1); __PYX_ERR(1, 345, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 2:
- if (kw_args > 0) {
- PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dtype_is_object);
- if (value) { values[2] = value; kw_args--; }
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 345, __pyx_L3_error)
- }
- } else {
- switch (PyTuple_GET_SIZE(__pyx_args)) {
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- break;
- default: goto __pyx_L5_argtuple_error;
- }
- }
- __pyx_v_obj = values[0];
- __pyx_v_flags = __Pyx_PyInt_As_int(values[1]); if (unlikely((__pyx_v_flags == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 345, __pyx_L3_error)
- if (values[2]) {
- __pyx_v_dtype_is_object = __Pyx_PyObject_IsTrue(values[2]); if (unlikely((__pyx_v_dtype_is_object == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 345, __pyx_L3_error)
- } else {
- __pyx_v_dtype_is_object = ((int)0);
- }
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 345, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return -1;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_obj, __pyx_v_flags, __pyx_v_dtype_is_object);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_t_4;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__cinit__", 0);
-
- /* "View.MemoryView":346
- *
- * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False):
- * self.obj = obj # <<<<<<<<<<<<<<
- * self.flags = flags
- * if type(self) is memoryview or obj is not None:
- */
- __Pyx_INCREF(__pyx_v_obj);
- __Pyx_GIVEREF(__pyx_v_obj);
- __Pyx_GOTREF(__pyx_v_self->obj);
- __Pyx_DECREF(__pyx_v_self->obj);
- __pyx_v_self->obj = __pyx_v_obj;
-
- /* "View.MemoryView":347
- * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False):
- * self.obj = obj
- * self.flags = flags # <<<<<<<<<<<<<<
- * if type(self) is memoryview or obj is not None:
- * __Pyx_GetBuffer(obj, &self.view, flags)
- */
- __pyx_v_self->flags = __pyx_v_flags;
-
- /* "View.MemoryView":348
- * self.obj = obj
- * self.flags = flags
- * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<<
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL:
- */
- __pyx_t_2 = (((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))) == ((PyObject *)__pyx_memoryview_type));
- __pyx_t_3 = (__pyx_t_2 != 0);
- if (!__pyx_t_3) {
- } else {
- __pyx_t_1 = __pyx_t_3;
- goto __pyx_L4_bool_binop_done;
- }
- __pyx_t_3 = (__pyx_v_obj != Py_None);
- __pyx_t_2 = (__pyx_t_3 != 0);
- __pyx_t_1 = __pyx_t_2;
- __pyx_L4_bool_binop_done:;
- if (__pyx_t_1) {
-
- /* "View.MemoryView":349
- * self.flags = flags
- * if type(self) is memoryview or obj is not None:
- * __Pyx_GetBuffer(obj, &self.view, flags) # <<<<<<<<<<<<<<
- * if self.view.obj == NULL:
- * (<__pyx_buffer *> &self.view).obj = Py_None
- */
- __pyx_t_4 = __Pyx_GetBuffer(__pyx_v_obj, (&__pyx_v_self->view), __pyx_v_flags); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 349, __pyx_L1_error)
-
- /* "View.MemoryView":350
- * if type(self) is memoryview or obj is not None:
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL: # <<<<<<<<<<<<<<
- * (<__pyx_buffer *> &self.view).obj = Py_None
- * Py_INCREF(Py_None)
- */
- __pyx_t_1 = ((((PyObject *)__pyx_v_self->view.obj) == NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":351
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL:
- * (<__pyx_buffer *> &self.view).obj = Py_None # <<<<<<<<<<<<<<
- * Py_INCREF(Py_None)
- *
- */
- ((Py_buffer *)(&__pyx_v_self->view))->obj = Py_None;
-
- /* "View.MemoryView":352
- * if self.view.obj == NULL:
- * (<__pyx_buffer *> &self.view).obj = Py_None
- * Py_INCREF(Py_None) # <<<<<<<<<<<<<<
- *
- * global __pyx_memoryview_thread_locks_used
- */
- Py_INCREF(Py_None);
-
- /* "View.MemoryView":350
- * if type(self) is memoryview or obj is not None:
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL: # <<<<<<<<<<<<<<
- * (<__pyx_buffer *> &self.view).obj = Py_None
- * Py_INCREF(Py_None)
- */
- }
-
- /* "View.MemoryView":348
- * self.obj = obj
- * self.flags = flags
- * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<<
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL:
- */
- }
-
- /* "View.MemoryView":355
- *
- * global __pyx_memoryview_thread_locks_used
- * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<<
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1
- */
- __pyx_t_1 = ((__pyx_memoryview_thread_locks_used < 8) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":356
- * global __pyx_memoryview_thread_locks_used
- * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED:
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks_used += 1
- * if self.lock is NULL:
- */
- __pyx_v_self->lock = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]);
-
- /* "View.MemoryView":357
- * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED:
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1 # <<<<<<<<<<<<<<
- * if self.lock is NULL:
- * self.lock = PyThread_allocate_lock()
- */
- __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used + 1);
-
- /* "View.MemoryView":355
- *
- * global __pyx_memoryview_thread_locks_used
- * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<<
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1
- */
- }
-
- /* "View.MemoryView":358
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1
- * if self.lock is NULL: # <<<<<<<<<<<<<<
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL:
- */
- __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":359
- * __pyx_memoryview_thread_locks_used += 1
- * if self.lock is NULL:
- * self.lock = PyThread_allocate_lock() # <<<<<<<<<<<<<<
- * if self.lock is NULL:
- * raise MemoryError
- */
- __pyx_v_self->lock = PyThread_allocate_lock();
-
- /* "View.MemoryView":360
- * if self.lock is NULL:
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL: # <<<<<<<<<<<<<<
- * raise MemoryError
- *
- */
- __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0);
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":361
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL:
- * raise MemoryError # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_FORMAT:
- */
- PyErr_NoMemory(); __PYX_ERR(1, 361, __pyx_L1_error)
-
- /* "View.MemoryView":360
- * if self.lock is NULL:
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL: # <<<<<<<<<<<<<<
- * raise MemoryError
- *
- */
- }
-
- /* "View.MemoryView":358
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1
- * if self.lock is NULL: # <<<<<<<<<<<<<<
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL:
- */
- }
-
- /* "View.MemoryView":363
- * raise MemoryError
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0')
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":364
- *
- * if flags & PyBUF_FORMAT:
- * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') # <<<<<<<<<<<<<<
- * else:
- * self.dtype_is_object = dtype_is_object
- */
- __pyx_t_2 = (((__pyx_v_self->view.format[0]) == 'O') != 0);
- if (__pyx_t_2) {
- } else {
- __pyx_t_1 = __pyx_t_2;
- goto __pyx_L11_bool_binop_done;
- }
- __pyx_t_2 = (((__pyx_v_self->view.format[1]) == '\x00') != 0);
- __pyx_t_1 = __pyx_t_2;
- __pyx_L11_bool_binop_done:;
- __pyx_v_self->dtype_is_object = __pyx_t_1;
-
- /* "View.MemoryView":363
- * raise MemoryError
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0')
- * else:
- */
- goto __pyx_L10;
- }
-
- /* "View.MemoryView":366
- * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0')
- * else:
- * self.dtype_is_object = dtype_is_object # <<<<<<<<<<<<<<
- *
- * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer(
- */
- /*else*/ {
- __pyx_v_self->dtype_is_object = __pyx_v_dtype_is_object;
- }
- __pyx_L10:;
-
- /* "View.MemoryView":368
- * self.dtype_is_object = dtype_is_object
- *
- * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( # <<<<<<<<<<<<<<
- * &self.acquisition_count[0], sizeof(__pyx_atomic_int))
- * self.typeinfo = NULL
- */
- __pyx_v_self->acquisition_count_aligned_p = ((__pyx_atomic_int *)__pyx_align_pointer(((void *)(&(__pyx_v_self->acquisition_count[0]))), (sizeof(__pyx_atomic_int))));
-
- /* "View.MemoryView":370
- * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer(
- * &self.acquisition_count[0], sizeof(__pyx_atomic_int))
- * self.typeinfo = NULL # <<<<<<<<<<<<<<
- *
- * def __dealloc__(memoryview self):
- */
- __pyx_v_self->typeinfo = NULL;
-
- /* "View.MemoryView":345
- * cdef __Pyx_TypeInfo *typeinfo
- *
- * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<<
- * self.obj = obj
- * self.flags = flags
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
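/* Editor's note -- annotation, not part of the removed generated file:
 * the __cinit__ above draws each memoryview's lock from a small
 * preallocated pool (THREAD_LOCKS_PREALLOCATED, 8 in this build) and
 * only calls PyThread_allocate_lock() once the pool is exhausted,
 * raising MemoryError if that allocation also fails. A stripped-down
 * sketch of the pool pattern (lock_t and allocate_lock are stand-ins,
 * not the generated identifiers): */

#define POOL_SIZE 8
typedef void *lock_t;                  /* stand-in for PyThread_type_lock */
extern lock_t allocate_lock(void);     /* stand-in for PyThread_allocate_lock */

static lock_t lock_pool[POOL_SIZE];
static int locks_used = 0;

static lock_t acquire_pool_lock(void) {
    lock_t lock = NULL;
    if (locks_used < POOL_SIZE)
        lock = lock_pool[locks_used++];   /* fast path: reuse a preallocated lock */
    if (lock == NULL)
        lock = allocate_lock();           /* slow path: allocate a fresh one */
    return lock;                          /* NULL here maps to raising MemoryError */
}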
-
-/* "View.MemoryView":372
- * self.typeinfo = NULL
- *
- * def __dealloc__(memoryview self): # <<<<<<<<<<<<<<
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view)
- */
-
-/* Python wrapper */
-static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self); /*proto*/
-static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0);
- __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self) {
- int __pyx_v_i;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_t_4;
- int __pyx_t_5;
- PyThread_type_lock __pyx_t_6;
- PyThread_type_lock __pyx_t_7;
- __Pyx_RefNannySetupContext("__dealloc__", 0);
-
- /* "View.MemoryView":373
- *
- * def __dealloc__(memoryview self):
- * if self.obj is not None: # <<<<<<<<<<<<<<
- * __Pyx_ReleaseBuffer(&self.view)
- * elif (<__pyx_buffer *> &self.view).obj == Py_None:
- */
- __pyx_t_1 = (__pyx_v_self->obj != Py_None);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":374
- * def __dealloc__(memoryview self):
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view) # <<<<<<<<<<<<<<
- * elif (<__pyx_buffer *> &self.view).obj == Py_None:
- *
- */
- __Pyx_ReleaseBuffer((&__pyx_v_self->view));
-
- /* "View.MemoryView":373
- *
- * def __dealloc__(memoryview self):
- * if self.obj is not None: # <<<<<<<<<<<<<<
- * __Pyx_ReleaseBuffer(&self.view)
- * elif (<__pyx_buffer *> &self.view).obj == Py_None:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":375
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view)
- * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<<
- *
- * (<__pyx_buffer *> &self.view).obj = NULL
- */
- __pyx_t_2 = ((((Py_buffer *)(&__pyx_v_self->view))->obj == Py_None) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":377
- * elif (<__pyx_buffer *> &self.view).obj == Py_None:
- *
- * (<__pyx_buffer *> &self.view).obj = NULL # <<<<<<<<<<<<<<
- * Py_DECREF(Py_None)
- *
- */
- ((Py_buffer *)(&__pyx_v_self->view))->obj = NULL;
-
- /* "View.MemoryView":378
- *
- * (<__pyx_buffer *> &self.view).obj = NULL
- * Py_DECREF(Py_None) # <<<<<<<<<<<<<<
- *
- * cdef int i
- */
- Py_DECREF(Py_None);
-
- /* "View.MemoryView":375
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view)
- * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<<
- *
- * (<__pyx_buffer *> &self.view).obj = NULL
- */
- }
- __pyx_L3:;
-
- /* "View.MemoryView":382
- * cdef int i
- * global __pyx_memoryview_thread_locks_used
- * if self.lock != NULL: # <<<<<<<<<<<<<<
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- */
- __pyx_t_2 = ((__pyx_v_self->lock != NULL) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":383
- * global __pyx_memoryview_thread_locks_used
- * if self.lock != NULL:
- * for i in range(__pyx_memoryview_thread_locks_used): # <<<<<<<<<<<<<<
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- * __pyx_memoryview_thread_locks_used -= 1
- */
- __pyx_t_3 = __pyx_memoryview_thread_locks_used;
- __pyx_t_4 = __pyx_t_3;
- for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) {
- __pyx_v_i = __pyx_t_5;
-
- /* "View.MemoryView":384
- * if self.lock != NULL:
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used:
- */
- __pyx_t_2 = (((__pyx_memoryview_thread_locks[__pyx_v_i]) == __pyx_v_self->lock) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":385
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- * __pyx_memoryview_thread_locks_used -= 1 # <<<<<<<<<<<<<<
- * if i != __pyx_memoryview_thread_locks_used:
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- */
- __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used - 1);
-
- /* "View.MemoryView":386
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i])
- */
- __pyx_t_2 = ((__pyx_v_i != __pyx_memoryview_thread_locks_used) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":388
- * if i != __pyx_memoryview_thread_locks_used:
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) # <<<<<<<<<<<<<<
- * break
- * else:
- */
- __pyx_t_6 = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]);
- __pyx_t_7 = (__pyx_memoryview_thread_locks[__pyx_v_i]);
-
- /* "View.MemoryView":387
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used:
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i])
- * break
- */
- (__pyx_memoryview_thread_locks[__pyx_v_i]) = __pyx_t_6;
- (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]) = __pyx_t_7;
-
- /* "View.MemoryView":386
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i])
- */
- }
-
- /* "View.MemoryView":389
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i])
- * break # <<<<<<<<<<<<<<
- * else:
- * PyThread_free_lock(self.lock)
- */
- goto __pyx_L6_break;
-
- /* "View.MemoryView":384
- * if self.lock != NULL:
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used:
- */
- }
- }
- /*else*/ {
-
- /* "View.MemoryView":391
- * break
- * else:
- * PyThread_free_lock(self.lock) # <<<<<<<<<<<<<<
- *
- * cdef char *get_item_pointer(memoryview self, object index) except NULL:
- */
- PyThread_free_lock(__pyx_v_self->lock);
- }
- __pyx_L6_break:;
-
- /* "View.MemoryView":382
- * cdef int i
- * global __pyx_memoryview_thread_locks_used
- * if self.lock != NULL: # <<<<<<<<<<<<<<
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- */
- }
-
- /* "View.MemoryView":372
- * self.typeinfo = NULL
- *
- * def __dealloc__(memoryview self): # <<<<<<<<<<<<<<
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view)
- */
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
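/* Editor's note -- annotation, not part of the removed generated file:
 * __dealloc__ above returns a pooled lock by swap-with-last removal:
 * decrement the live count, then swap the freed slot with the last live
 * slot, keeping the pool compact in O(1) with no shifting. Locks that
 * are not found in the pool were heap-allocated and go to
 * PyThread_free_lock instead. A generic sketch of the swap-remove idiom
 * (names are illustrative): */

static void pool_remove(void **pool, int *used, int i) {
    *used -= 1;
    if (i != *used) {                 /* move the last live entry into the hole */
        void *tmp = pool[i];
        pool[i] = pool[*used];
        pool[*used] = tmp;
    }
}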
-
-/* "View.MemoryView":393
- * PyThread_free_lock(self.lock)
- *
- * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<<
- * cdef Py_ssize_t dim
- * cdef char *itemp = self.view.buf
- */
-
-static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) {
- Py_ssize_t __pyx_v_dim;
- char *__pyx_v_itemp;
- PyObject *__pyx_v_idx = NULL;
- char *__pyx_r;
- __Pyx_RefNannyDeclarations
- Py_ssize_t __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- Py_ssize_t __pyx_t_3;
- PyObject *(*__pyx_t_4)(PyObject *);
- PyObject *__pyx_t_5 = NULL;
- Py_ssize_t __pyx_t_6;
- char *__pyx_t_7;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("get_item_pointer", 0);
-
- /* "View.MemoryView":395
- * cdef char *get_item_pointer(memoryview self, object index) except NULL:
- * cdef Py_ssize_t dim
- * cdef char *itemp = self.view.buf # <<<<<<<<<<<<<<
- *
- * for dim, idx in enumerate(index):
- */
- __pyx_v_itemp = ((char *)__pyx_v_self->view.buf);
-
- /* "View.MemoryView":397
- * cdef char *itemp = self.view.buf
- *
- * for dim, idx in enumerate(index): # <<<<<<<<<<<<<<
- * itemp = pybuffer_index(&self.view, itemp, idx, dim)
- *
- */
- __pyx_t_1 = 0;
- if (likely(PyList_CheckExact(__pyx_v_index)) || PyTuple_CheckExact(__pyx_v_index)) {
- __pyx_t_2 = __pyx_v_index; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0;
- __pyx_t_4 = NULL;
- } else {
- __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 397, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 397, __pyx_L1_error)
- }
- for (;;) {
- if (likely(!__pyx_t_4)) {
- if (likely(PyList_CheckExact(__pyx_t_2))) {
- if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 397, __pyx_L1_error)
- #else
- __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 397, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- #endif
- } else {
- if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 397, __pyx_L1_error)
- #else
- __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 397, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- #endif
- }
- } else {
- __pyx_t_5 = __pyx_t_4(__pyx_t_2);
- if (unlikely(!__pyx_t_5)) {
- PyObject* exc_type = PyErr_Occurred();
- if (exc_type) {
- if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
- else __PYX_ERR(1, 397, __pyx_L1_error)
- }
- break;
- }
- __Pyx_GOTREF(__pyx_t_5);
- }
- __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_5);
- __pyx_t_5 = 0;
- __pyx_v_dim = __pyx_t_1;
- __pyx_t_1 = (__pyx_t_1 + 1);
-
- /* "View.MemoryView":398
- *
- * for dim, idx in enumerate(index):
- * itemp = pybuffer_index(&self.view, itemp, idx, dim) # <<<<<<<<<<<<<<
- *
- * return itemp
- */
- __pyx_t_6 = __Pyx_PyIndex_AsSsize_t(__pyx_v_idx); if (unlikely((__pyx_t_6 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 398, __pyx_L1_error)
- __pyx_t_7 = __pyx_pybuffer_index((&__pyx_v_self->view), __pyx_v_itemp, __pyx_t_6, __pyx_v_dim); if (unlikely(__pyx_t_7 == ((char *)NULL))) __PYX_ERR(1, 398, __pyx_L1_error)
- __pyx_v_itemp = __pyx_t_7;
-
- /* "View.MemoryView":397
- * cdef char *itemp = self.view.buf
- *
- * for dim, idx in enumerate(index): # <<<<<<<<<<<<<<
- * itemp = pybuffer_index(&self.view, itemp, idx, dim)
- *
- */
- }
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "View.MemoryView":400
- * itemp = pybuffer_index(&self.view, itemp, idx, dim)
- *
- * return itemp # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = __pyx_v_itemp;
- goto __pyx_L0;
-
- /* "View.MemoryView":393
- * PyThread_free_lock(self.lock)
- *
- * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<<
- * cdef Py_ssize_t dim
- * cdef char *itemp = self.view.buf
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.memoryview.get_item_pointer", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_idx);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
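-/* A minimal plain-C sketch of the strided pointer arithmetic that
- * get_item_pointer delegates to pybuffer_index for each dimension: the
- * item pointer advances by idx * strides[dim].  Bounds checks,
- * negative-index wrapping and PIL-style suboffsets are deliberately left
- * out; `item_pointer` is a hypothetical helper, not a Cython API. */
-#include <stdio.h>
-#include <stddef.h>
-
-static char *item_pointer(char *buf, const ptrdiff_t *strides,
-                          const ptrdiff_t *indices, int ndim) {
-    char *itemp = buf;
-    for (int dim = 0; dim < ndim; dim++)
-        itemp += indices[dim] * strides[dim];   /* one hop per index */
-    return itemp;
-}
-
-int main(void) {
-    int data[2][3] = {{0, 1, 2}, {10, 11, 12}};
-    /* C-contiguous 2x3 int buffer: row stride 3*sizeof(int), col stride sizeof(int) */
-    ptrdiff_t strides[2] = { 3 * sizeof(int), sizeof(int) };
-    ptrdiff_t indices[2] = { 1, 2 };
-    printf("%d\n", *(int *)item_pointer((char *)data, strides, indices, 2));
-    return 0;                                   /* prints 12, i.e. data[1][2] */
-}
-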
-/* "View.MemoryView":403
- *
- *
- * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<<
- * if index is Ellipsis:
- * return self
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index); /*proto*/
-static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) {
- PyObject *__pyx_v_have_slices = NULL;
- PyObject *__pyx_v_indices = NULL;
- char *__pyx_v_itemp;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- char *__pyx_t_6;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__getitem__", 0);
-
- /* "View.MemoryView":404
- *
- * def __getitem__(memoryview self, object index):
- * if index is Ellipsis: # <<<<<<<<<<<<<<
- * return self
- *
- */
- __pyx_t_1 = (__pyx_v_index == __pyx_builtin_Ellipsis);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":405
- * def __getitem__(memoryview self, object index):
- * if index is Ellipsis:
- * return self # <<<<<<<<<<<<<<
- *
- * have_slices, indices = _unellipsify(index, self.view.ndim)
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(((PyObject *)__pyx_v_self));
- __pyx_r = ((PyObject *)__pyx_v_self);
- goto __pyx_L0;
-
- /* "View.MemoryView":404
- *
- * def __getitem__(memoryview self, object index):
- * if index is Ellipsis: # <<<<<<<<<<<<<<
- * return self
- *
- */
- }
-
- /* "View.MemoryView":407
- * return self
- *
- * have_slices, indices = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<<
- *
- * cdef char *itemp
- */
- __pyx_t_3 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 407, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- if (likely(__pyx_t_3 != Py_None)) {
- PyObject* sequence = __pyx_t_3;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(1, 407, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_4 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_5 = PyTuple_GET_ITEM(sequence, 1);
- __Pyx_INCREF(__pyx_t_4);
- __Pyx_INCREF(__pyx_t_5);
- #else
- __pyx_t_4 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 407, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_5 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 407, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- #endif
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- } else {
- __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 407, __pyx_L1_error)
- }
- __pyx_v_have_slices = __pyx_t_4;
- __pyx_t_4 = 0;
- __pyx_v_indices = __pyx_t_5;
- __pyx_t_5 = 0;
-
- /* "View.MemoryView":410
- *
- * cdef char *itemp
- * if have_slices: # <<<<<<<<<<<<<<
- * return memview_slice(self, indices)
- * else:
- */
- __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 410, __pyx_L1_error)
- if (__pyx_t_2) {
-
- /* "View.MemoryView":411
- * cdef char *itemp
- * if have_slices:
- * return memview_slice(self, indices) # <<<<<<<<<<<<<<
- * else:
- * itemp = self.get_item_pointer(indices)
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_3 = ((PyObject *)__pyx_memview_slice(__pyx_v_self, __pyx_v_indices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 411, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_r = __pyx_t_3;
- __pyx_t_3 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":410
- *
- * cdef char *itemp
- * if have_slices: # <<<<<<<<<<<<<<
- * return memview_slice(self, indices)
- * else:
- */
- }
-
- /* "View.MemoryView":413
- * return memview_slice(self, indices)
- * else:
- * itemp = self.get_item_pointer(indices) # <<<<<<<<<<<<<<
- * return self.convert_item_to_object(itemp)
- *
- */
- /*else*/ {
- __pyx_t_6 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_indices); if (unlikely(__pyx_t_6 == ((char *)NULL))) __PYX_ERR(1, 413, __pyx_L1_error)
- __pyx_v_itemp = __pyx_t_6;
-
- /* "View.MemoryView":414
- * else:
- * itemp = self.get_item_pointer(indices)
- * return self.convert_item_to_object(itemp) # <<<<<<<<<<<<<<
- *
- * def __setitem__(memoryview self, object index, object value):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->convert_item_to_object(__pyx_v_self, __pyx_v_itemp); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 414, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_r = __pyx_t_3;
- __pyx_t_3 = 0;
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":403
- *
- *
- * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<<
- * if index is Ellipsis:
- * return self
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_have_slices);
- __Pyx_XDECREF(__pyx_v_indices);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":416
- * return self.convert_item_to_object(itemp)
- *
- * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<<
- * if self.view.readonly:
- * raise TypeError("Cannot assign to read-only memoryview")
- */
-
-/* Python wrapper */
-static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /*proto*/
-static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index), ((PyObject *)__pyx_v_value));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) {
- PyObject *__pyx_v_have_slices = NULL;
- PyObject *__pyx_v_obj = NULL;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setitem__", 0);
- __Pyx_INCREF(__pyx_v_index);
-
- /* "View.MemoryView":417
- *
- * def __setitem__(memoryview self, object index, object value):
- * if self.view.readonly: # <<<<<<<<<<<<<<
- * raise TypeError("Cannot assign to read-only memoryview")
- *
- */
- __pyx_t_1 = (__pyx_v_self->view.readonly != 0);
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":418
- * def __setitem__(memoryview self, object index, object value):
- * if self.view.readonly:
- * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<<
- *
- * have_slices, index = _unellipsify(index, self.view.ndim)
- */
- __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 418, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_Raise(__pyx_t_2, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __PYX_ERR(1, 418, __pyx_L1_error)
-
- /* "View.MemoryView":417
- *
- * def __setitem__(memoryview self, object index, object value):
- * if self.view.readonly: # <<<<<<<<<<<<<<
- * raise TypeError("Cannot assign to read-only memoryview")
- *
- */
- }
-
- /* "View.MemoryView":420
- * raise TypeError("Cannot assign to read-only memoryview")
- *
- * have_slices, index = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<<
- *
- * if have_slices:
- */
- __pyx_t_2 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 420, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- if (likely(__pyx_t_2 != Py_None)) {
- PyObject* sequence = __pyx_t_2;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(1, 420, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1);
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_INCREF(__pyx_t_4);
- #else
- __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 420, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 420, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- #endif
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- } else {
- __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 420, __pyx_L1_error)
- }
- __pyx_v_have_slices = __pyx_t_3;
- __pyx_t_3 = 0;
- __Pyx_DECREF_SET(__pyx_v_index, __pyx_t_4);
- __pyx_t_4 = 0;
-
- /* "View.MemoryView":422
- * have_slices, index = _unellipsify(index, self.view.ndim)
- *
- * if have_slices: # <<<<<<<<<<<<<<
- * obj = self.is_slice(value)
- * if obj:
- */
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 422, __pyx_L1_error)
- if (__pyx_t_1) {
-
- /* "View.MemoryView":423
- *
- * if have_slices:
- * obj = self.is_slice(value) # <<<<<<<<<<<<<<
- * if obj:
- * self.setitem_slice_assignment(self[index], obj)
- */
- __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->is_slice(__pyx_v_self, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 423, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_v_obj = __pyx_t_2;
- __pyx_t_2 = 0;
-
- /* "View.MemoryView":424
- * if have_slices:
- * obj = self.is_slice(value)
- * if obj: # <<<<<<<<<<<<<<
- * self.setitem_slice_assignment(self[index], obj)
- * else:
- */
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_obj); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 424, __pyx_L1_error)
- if (__pyx_t_1) {
-
- /* "View.MemoryView":425
- * obj = self.is_slice(value)
- * if obj:
- * self.setitem_slice_assignment(self[index], obj) # <<<<<<<<<<<<<<
- * else:
- * self.setitem_slice_assign_scalar(self[index], value)
- */
- __pyx_t_2 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 425, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_4 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assignment(__pyx_v_self, __pyx_t_2, __pyx_v_obj); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 425, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
-
- /* "View.MemoryView":424
- * if have_slices:
- * obj = self.is_slice(value)
- * if obj: # <<<<<<<<<<<<<<
- * self.setitem_slice_assignment(self[index], obj)
- * else:
- */
- goto __pyx_L5;
- }
-
- /* "View.MemoryView":427
- * self.setitem_slice_assignment(self[index], obj)
- * else:
- * self.setitem_slice_assign_scalar(self[index], value) # <<<<<<<<<<<<<<
- * else:
- * self.setitem_indexed(index, value)
- */
- /*else*/ {
- __pyx_t_4 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 427, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_memoryview_type))))) __PYX_ERR(1, 427, __pyx_L1_error)
- __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assign_scalar(__pyx_v_self, ((struct __pyx_memoryview_obj *)__pyx_t_4), __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 427, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- }
- __pyx_L5:;
-
- /* "View.MemoryView":422
- * have_slices, index = _unellipsify(index, self.view.ndim)
- *
- * if have_slices: # <<<<<<<<<<<<<<
- * obj = self.is_slice(value)
- * if obj:
- */
- goto __pyx_L4;
- }
-
- /* "View.MemoryView":429
- * self.setitem_slice_assign_scalar(self[index], value)
- * else:
- * self.setitem_indexed(index, value) # <<<<<<<<<<<<<<
- *
- * cdef is_slice(self, obj):
- */
- /*else*/ {
- __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_indexed(__pyx_v_self, __pyx_v_index, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 429, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- }
- __pyx_L4:;
-
- /* "View.MemoryView":416
- * return self.convert_item_to_object(itemp)
- *
- * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<<
- * if self.view.readonly:
- * raise TypeError("Cannot assign to read-only memoryview")
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_have_slices);
- __Pyx_XDECREF(__pyx_v_obj);
- __Pyx_XDECREF(__pyx_v_index);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
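-/* The guard at the top of __setitem__, sketched against the public
- * Py_buffer struct for illustration; the generated code reads the same
- * `readonly` field on its internal view.  `check_writable` is a
- * hypothetical helper, not part of the module above. */
-#include <Python.h>
-
-static int check_writable(const Py_buffer *view) {
-    if (view->readonly) {
-        PyErr_SetString(PyExc_TypeError,
-                        "Cannot assign to read-only memoryview");
-        return -1;              /* caller propagates, as __setitem__ does */
-    }
-    return 0;
-}
-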
-/* "View.MemoryView":431
- * self.setitem_indexed(index, value)
- *
- * cdef is_slice(self, obj): # <<<<<<<<<<<<<<
- * if not isinstance(obj, memoryview):
- * try:
- */
-
-static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- PyObject *__pyx_t_6 = NULL;
- PyObject *__pyx_t_7 = NULL;
- PyObject *__pyx_t_8 = NULL;
- int __pyx_t_9;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("is_slice", 0);
- __Pyx_INCREF(__pyx_v_obj);
-
- /* "View.MemoryView":432
- *
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<<
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- */
- __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_obj, __pyx_memoryview_type);
- __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":433
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview):
- * try: # <<<<<<<<<<<<<<
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object)
- */
- {
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_4, &__pyx_t_5);
- __Pyx_XGOTREF(__pyx_t_3);
- __Pyx_XGOTREF(__pyx_t_4);
- __Pyx_XGOTREF(__pyx_t_5);
- /*try:*/ {
-
- /* "View.MemoryView":434
- * if not isinstance(obj, memoryview):
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<<
- * self.dtype_is_object)
- * except TypeError:
- */
- __pyx_t_6 = __Pyx_PyInt_From_int(((__pyx_v_self->flags & (~PyBUF_WRITABLE)) | PyBUF_ANY_CONTIGUOUS)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 434, __pyx_L4_error)
- __Pyx_GOTREF(__pyx_t_6);
-
- /* "View.MemoryView":435
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object) # <<<<<<<<<<<<<<
- * except TypeError:
- * return None
- */
- __pyx_t_7 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 435, __pyx_L4_error)
- __Pyx_GOTREF(__pyx_t_7);
-
- /* "View.MemoryView":434
- * if not isinstance(obj, memoryview):
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<<
- * self.dtype_is_object)
- * except TypeError:
- */
- __pyx_t_8 = PyTuple_New(3); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 434, __pyx_L4_error)
- __Pyx_GOTREF(__pyx_t_8);
- __Pyx_INCREF(__pyx_v_obj);
- __Pyx_GIVEREF(__pyx_v_obj);
- PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_obj);
- __Pyx_GIVEREF(__pyx_t_6);
- PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_t_6);
- __Pyx_GIVEREF(__pyx_t_7);
- PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_t_7);
- __pyx_t_6 = 0;
- __pyx_t_7 = 0;
- __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_8, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 434, __pyx_L4_error)
- __Pyx_GOTREF(__pyx_t_7);
- __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
- __Pyx_DECREF_SET(__pyx_v_obj, __pyx_t_7);
- __pyx_t_7 = 0;
-
- /* "View.MemoryView":433
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview):
- * try: # <<<<<<<<<<<<<<
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object)
- */
- }
- __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- goto __pyx_L9_try_end;
- __pyx_L4_error:;
- __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;
-
- /* "View.MemoryView":436
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object)
- * except TypeError: # <<<<<<<<<<<<<<
- * return None
- *
- */
- __pyx_t_9 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_TypeError);
- if (__pyx_t_9) {
- __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- if (__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_6) < 0) __PYX_ERR(1, 436, __pyx_L6_except_error)
- __Pyx_GOTREF(__pyx_t_7);
- __Pyx_GOTREF(__pyx_t_8);
- __Pyx_GOTREF(__pyx_t_6);
-
- /* "View.MemoryView":437
- * self.dtype_is_object)
- * except TypeError:
- * return None # <<<<<<<<<<<<<<
- *
- * return obj
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
- goto __pyx_L7_except_return;
- }
- goto __pyx_L6_except_error;
- __pyx_L6_except_error:;
-
- /* "View.MemoryView":433
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview):
- * try: # <<<<<<<<<<<<<<
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object)
- */
- __Pyx_XGIVEREF(__pyx_t_3);
- __Pyx_XGIVEREF(__pyx_t_4);
- __Pyx_XGIVEREF(__pyx_t_5);
- __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5);
- goto __pyx_L1_error;
- __pyx_L7_except_return:;
- __Pyx_XGIVEREF(__pyx_t_3);
- __Pyx_XGIVEREF(__pyx_t_4);
- __Pyx_XGIVEREF(__pyx_t_5);
- __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5);
- goto __pyx_L0;
- __pyx_L9_try_end:;
- }
-
- /* "View.MemoryView":432
- *
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<<
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- */
- }
-
- /* "View.MemoryView":439
- * return None
- *
- * return obj # <<<<<<<<<<<<<<
- *
- * cdef setitem_slice_assignment(self, dst, src):
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_obj);
- __pyx_r = __pyx_v_obj;
- goto __pyx_L0;
-
- /* "View.MemoryView":431
- * self.setitem_indexed(index, value)
- *
- * cdef is_slice(self, obj): # <<<<<<<<<<<<<<
- * if not isinstance(obj, memoryview):
- * try:
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_XDECREF(__pyx_t_7);
- __Pyx_XDECREF(__pyx_t_8);
- __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_obj);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
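-/* Conceptually, is_slice coerces any buffer-supporting object to a
- * memoryview and returns None when the coercion raises TypeError.  A
- * hedged sketch using the public CPython API instead of Cython's internal
- * memoryview type (which is what the code above actually constructs, with
- * its own flags): */
-#include <Python.h>
-
-static PyObject *as_memoryview_or_none(PyObject *obj) {
-    if (PyMemoryView_Check(obj)) {              /* already a memoryview */
-        Py_INCREF(obj);
-        return obj;
-    }
-    PyObject *mv = PyMemoryView_FromObject(obj);
-    if (mv == NULL && PyErr_ExceptionMatches(PyExc_TypeError)) {
-        PyErr_Clear();                          /* mirrors `except TypeError:` */
-        Py_RETURN_NONE;                         /* mirrors `return None` */
-    }
-    return mv;      /* new reference, or NULL on a non-TypeError failure */
-}
-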
-/* "View.MemoryView":441
- * return obj
- *
- * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice dst_slice
- * cdef __Pyx_memviewslice src_slice
- */
-
-static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src) {
- __Pyx_memviewslice __pyx_v_dst_slice;
- __Pyx_memviewslice __pyx_v_src_slice;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice *__pyx_t_1;
- __Pyx_memviewslice *__pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_t_4;
- int __pyx_t_5;
- int __pyx_t_6;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("setitem_slice_assignment", 0);
-
- /* "View.MemoryView":445
- * cdef __Pyx_memviewslice src_slice
- *
- * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<<
- * get_slice_from_memview(dst, &dst_slice)[0],
- * src.ndim, dst.ndim, self.dtype_is_object)
- */
- if (!(likely(((__pyx_v_src) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_src, __pyx_memoryview_type))))) __PYX_ERR(1, 445, __pyx_L1_error)
- __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_src), (&__pyx_v_src_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 445, __pyx_L1_error)
-
- /* "View.MemoryView":446
- *
- * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0],
- * get_slice_from_memview(dst, &dst_slice)[0], # <<<<<<<<<<<<<<
- * src.ndim, dst.ndim, self.dtype_is_object)
- *
- */
- if (!(likely(((__pyx_v_dst) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_dst, __pyx_memoryview_type))))) __PYX_ERR(1, 446, __pyx_L1_error)
- __pyx_t_2 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_dst), (&__pyx_v_dst_slice)); if (unlikely(__pyx_t_2 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 446, __pyx_L1_error)
-
- /* "View.MemoryView":447
- * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0],
- * get_slice_from_memview(dst, &dst_slice)[0],
- * src.ndim, dst.ndim, self.dtype_is_object) # <<<<<<<<<<<<<<
- *
- * cdef setitem_slice_assign_scalar(self, memoryview dst, value):
- */
- __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_src, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 447, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 447, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_dst, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 447, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 447, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":445
- * cdef __Pyx_memviewslice src_slice
- *
- * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<<
- * get_slice_from_memview(dst, &dst_slice)[0],
- * src.ndim, dst.ndim, self.dtype_is_object)
- */
- __pyx_t_6 = __pyx_memoryview_copy_contents((__pyx_t_1[0]), (__pyx_t_2[0]), __pyx_t_4, __pyx_t_5, __pyx_v_self->dtype_is_object); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 445, __pyx_L1_error)
-
- /* "View.MemoryView":441
- * return obj
- *
- * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice dst_slice
- * cdef __Pyx_memviewslice src_slice
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assignment", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
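-/* memoryview_copy_contents (internal to the generated module) copies one
- * strided slice into another.  Stripped of broadcasting, overlap handling
- * and object-refcount bookkeeping, the core operation is a per-dimension
- * recursive strided walk -- a sketch under those simplifying assumptions: */
-#include <stddef.h>
-#include <string.h>
-
-static void copy_strided(char *dst, const char *src,
-                         const ptrdiff_t *shape,
-                         const ptrdiff_t *dst_strides,
-                         const ptrdiff_t *src_strides,
-                         int ndim, size_t itemsize) {
-    if (ndim == 0) {                            /* innermost: copy one item */
-        memcpy(dst, src, itemsize);
-        return;
-    }
-    for (ptrdiff_t i = 0; i < shape[0]; i++)    /* recurse over this axis */
-        copy_strided(dst + i * dst_strides[0], src + i * src_strides[0],
-                     shape + 1, dst_strides + 1, src_strides + 1,
-                     ndim - 1, itemsize);
-}
-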
-/* "View.MemoryView":449
- * src.ndim, dst.ndim, self.dtype_is_object)
- *
- * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<<
- * cdef int array[128]
- * cdef void *tmp = NULL
- */
-
-static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value) {
- int __pyx_v_array[0x80];
- void *__pyx_v_tmp;
- void *__pyx_v_item;
- __Pyx_memviewslice *__pyx_v_dst_slice;
- __Pyx_memviewslice __pyx_v_tmp_slice;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice *__pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_t_4;
- int __pyx_t_5;
- char const *__pyx_t_6;
- PyObject *__pyx_t_7 = NULL;
- PyObject *__pyx_t_8 = NULL;
- PyObject *__pyx_t_9 = NULL;
- PyObject *__pyx_t_10 = NULL;
- PyObject *__pyx_t_11 = NULL;
- PyObject *__pyx_t_12 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("setitem_slice_assign_scalar", 0);
-
- /* "View.MemoryView":451
- * cdef setitem_slice_assign_scalar(self, memoryview dst, value):
- * cdef int array[128]
- * cdef void *tmp = NULL # <<<<<<<<<<<<<<
- * cdef void *item
- *
- */
- __pyx_v_tmp = NULL;
-
- /* "View.MemoryView":456
- * cdef __Pyx_memviewslice *dst_slice
- * cdef __Pyx_memviewslice tmp_slice
- * dst_slice = get_slice_from_memview(dst, &tmp_slice) # <<<<<<<<<<<<<<
- *
- * if self.view.itemsize > sizeof(array):
- */
- __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_dst, (&__pyx_v_tmp_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 456, __pyx_L1_error)
- __pyx_v_dst_slice = __pyx_t_1;
-
- /* "View.MemoryView":458
- * dst_slice = get_slice_from_memview(dst, &tmp_slice)
- *
- * if self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<<
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL:
- */
- __pyx_t_2 = ((((size_t)__pyx_v_self->view.itemsize) > (sizeof(__pyx_v_array))) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":459
- *
- * if self.view.itemsize > sizeof(array):
- * tmp = PyMem_Malloc(self.view.itemsize) # <<<<<<<<<<<<<<
- * if tmp == NULL:
- * raise MemoryError
- */
- __pyx_v_tmp = PyMem_Malloc(__pyx_v_self->view.itemsize);
-
- /* "View.MemoryView":460
- * if self.view.itemsize > sizeof(array):
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL: # <<<<<<<<<<<<<<
- * raise MemoryError
- * item = tmp
- */
- __pyx_t_2 = ((__pyx_v_tmp == NULL) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":461
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL:
- * raise MemoryError # <<<<<<<<<<<<<<
- * item = tmp
- * else:
- */
- PyErr_NoMemory(); __PYX_ERR(1, 461, __pyx_L1_error)
-
- /* "View.MemoryView":460
- * if self.view.itemsize > sizeof(array):
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL: # <<<<<<<<<<<<<<
- * raise MemoryError
- * item = tmp
- */
- }
-
- /* "View.MemoryView":462
- * if tmp == NULL:
- * raise MemoryError
- * item = tmp # <<<<<<<<<<<<<<
- * else:
- * item = array
- */
- __pyx_v_item = __pyx_v_tmp;
-
- /* "View.MemoryView":458
- * dst_slice = get_slice_from_memview(dst, &tmp_slice)
- *
- * if self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<<
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":464
- * item = tmp
- * else:
- * item = array # <<<<<<<<<<<<<<
- *
- * try:
- */
- /*else*/ {
- __pyx_v_item = ((void *)__pyx_v_array);
- }
- __pyx_L3:;
-
- /* "View.MemoryView":466
- * item = array
- *
- * try: # <<<<<<<<<<<<<<
- * if self.dtype_is_object:
- * (<PyObject **> item)[0] = value
- */
- /*try:*/ {
-
- /* "View.MemoryView":467
- *
- * try:
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * (<PyObject **> item)[0] = value
- * else:
- */
- __pyx_t_2 = (__pyx_v_self->dtype_is_object != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":468
- * try:
- * if self.dtype_is_object:
- * (<PyObject **> item)[0] = value # <<<<<<<<<<<<<<
- * else:
- * self.assign_item_from_object(<char *> item, value)
- */
- (((PyObject **)__pyx_v_item)[0]) = ((PyObject *)__pyx_v_value);
-
- /* "View.MemoryView":467
- *
- * try:
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * (<PyObject **> item)[0] = value
- * else:
- */
- goto __pyx_L8;
- }
-
- /* "View.MemoryView":470
- * (<PyObject **> item)[0] = value
- * else:
- * self.assign_item_from_object(<char *> item, value) # <<<<<<<<<<<<<<
- *
- *
- */
- /*else*/ {
- __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, ((char *)__pyx_v_item), __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 470, __pyx_L6_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- }
- __pyx_L8:;
-
- /* "View.MemoryView":474
- *
- *
- * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<<
- * assert_direct_dimensions(self.view.suboffsets, self.view.ndim)
- * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize,
- */
- __pyx_t_2 = ((__pyx_v_self->view.suboffsets != NULL) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":475
- *
- * if self.view.suboffsets != NULL:
- * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) # <<<<<<<<<<<<<<
- * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize,
- * item, self.dtype_is_object)
- */
- __pyx_t_3 = assert_direct_dimensions(__pyx_v_self->view.suboffsets, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 475, __pyx_L6_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":474
- *
- *
- * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<<
- * assert_direct_dimensions(self.view.suboffsets, self.view.ndim)
- * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize,
- */
- }
-
- /* "View.MemoryView":476
- * if self.view.suboffsets != NULL:
- * assert_direct_dimensions(self.view.suboffsets, self.view.ndim)
- * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, # <<<<<<<<<<<<<<
- * item, self.dtype_is_object)
- * finally:
- */
- __pyx_memoryview_slice_assign_scalar(__pyx_v_dst_slice, __pyx_v_dst->view.ndim, __pyx_v_self->view.itemsize, __pyx_v_item, __pyx_v_self->dtype_is_object);
- }
-
- /* "View.MemoryView":479
- * item, self.dtype_is_object)
- * finally:
- * PyMem_Free(tmp) # <<<<<<<<<<<<<<
- *
- * cdef setitem_indexed(self, index, value):
- */
- /*finally:*/ {
- /*normal exit:*/{
- PyMem_Free(__pyx_v_tmp);
- goto __pyx_L7;
- }
- __pyx_L6_error:;
- /*exception exit:*/{
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0;
- __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12);
- if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0)) __Pyx_ErrFetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9);
- __Pyx_XGOTREF(__pyx_t_7);
- __Pyx_XGOTREF(__pyx_t_8);
- __Pyx_XGOTREF(__pyx_t_9);
- __Pyx_XGOTREF(__pyx_t_10);
- __Pyx_XGOTREF(__pyx_t_11);
- __Pyx_XGOTREF(__pyx_t_12);
- __pyx_t_4 = __pyx_lineno; __pyx_t_5 = __pyx_clineno; __pyx_t_6 = __pyx_filename;
- {
- PyMem_Free(__pyx_v_tmp);
- }
- if (PY_MAJOR_VERSION >= 3) {
- __Pyx_XGIVEREF(__pyx_t_10);
- __Pyx_XGIVEREF(__pyx_t_11);
- __Pyx_XGIVEREF(__pyx_t_12);
- __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12);
- }
- __Pyx_XGIVEREF(__pyx_t_7);
- __Pyx_XGIVEREF(__pyx_t_8);
- __Pyx_XGIVEREF(__pyx_t_9);
- __Pyx_ErrRestore(__pyx_t_7, __pyx_t_8, __pyx_t_9);
- __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0;
- __pyx_lineno = __pyx_t_4; __pyx_clineno = __pyx_t_5; __pyx_filename = __pyx_t_6;
- goto __pyx_L1_error;
- }
- __pyx_L7:;
- }
-
- /* "View.MemoryView":449
- * src.ndim, dst.ndim, self.dtype_is_object)
- *
- * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<<
- * cdef int array[128]
- * cdef void *tmp = NULL
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assign_scalar", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
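-/* The temporary-item pattern above, reduced to standalone C: use on-stack
- * scratch space when the item fits, fall back to the heap otherwise, and
- * release it on every path -- the generated code does the same with
- * PyMem_Malloc/PyMem_Free wrapped in a try/finally.  `fill_item` is a
- * hypothetical stand-in for the assign-then-broadcast sequence. */
-#include <stdlib.h>
-#include <string.h>
-
-static int fill_item(void *dst, const void *src, size_t itemsize) {
-    char stack_buf[512];                /* plays the role of `cdef int array[128]` */
-    void *heap_buf = NULL;
-    void *item = stack_buf;
-
-    if (itemsize > sizeof(stack_buf)) {
-        heap_buf = malloc(itemsize);    /* the code above raises MemoryError */
-        if (heap_buf == NULL)
-            return -1;
-        item = heap_buf;
-    }
-
-    memcpy(item, src, itemsize);        /* ~ assign_item_from_object */
-    memcpy(dst, item, itemsize);        /* ~ slice_assign_scalar */
-
-    free(heap_buf);                     /* free(NULL) is a no-op, like the finally */
-    return 0;
-}
-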
-/* "View.MemoryView":481
- * PyMem_Free(tmp)
- *
- * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<<
- * cdef char *itemp = self.get_item_pointer(index)
- * self.assign_item_from_object(itemp, value)
- */
-
-static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) {
- char *__pyx_v_itemp;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- char *__pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("setitem_indexed", 0);
-
- /* "View.MemoryView":482
- *
- * cdef setitem_indexed(self, index, value):
- * cdef char *itemp = self.get_item_pointer(index) # <<<<<<<<<<<<<<
- * self.assign_item_from_object(itemp, value)
- *
- */
- __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_index); if (unlikely(__pyx_t_1 == ((char *)NULL))) __PYX_ERR(1, 482, __pyx_L1_error)
- __pyx_v_itemp = __pyx_t_1;
-
- /* "View.MemoryView":483
- * cdef setitem_indexed(self, index, value):
- * cdef char *itemp = self.get_item_pointer(index)
- * self.assign_item_from_object(itemp, value) # <<<<<<<<<<<<<<
- *
- * cdef convert_item_to_object(self, char *itemp):
- */
- __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 483, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "View.MemoryView":481
- * PyMem_Free(tmp)
- *
- * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<<
- * cdef char *itemp = self.get_item_pointer(index)
- * self.assign_item_from_object(itemp, value)
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_indexed", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":485
- * self.assign_item_from_object(itemp, value)
- *
- * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<<
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- */
-
-static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp) {
- PyObject *__pyx_v_struct = NULL;
- PyObject *__pyx_v_bytesitem = 0;
- PyObject *__pyx_v_result = NULL;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- PyObject *__pyx_t_6 = NULL;
- PyObject *__pyx_t_7 = NULL;
- int __pyx_t_8;
- PyObject *__pyx_t_9 = NULL;
- size_t __pyx_t_10;
- int __pyx_t_11;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("convert_item_to_object", 0);
-
- /* "View.MemoryView":488
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- * import struct # <<<<<<<<<<<<<<
- * cdef bytes bytesitem
- *
- */
- __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 488, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_v_struct = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":491
- * cdef bytes bytesitem
- *
- * bytesitem = itemp[:self.view.itemsize] # <<<<<<<<<<<<<<
- * try:
- * result = struct.unpack(self.view.format, bytesitem)
- */
- __pyx_t_1 = __Pyx_PyBytes_FromStringAndSize(__pyx_v_itemp + 0, __pyx_v_self->view.itemsize - 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 491, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_v_bytesitem = ((PyObject*)__pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":492
- *
- * bytesitem = itemp[:self.view.itemsize]
- * try: # <<<<<<<<<<<<<<
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error:
- */
- {
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- __Pyx_ExceptionSave(&__pyx_t_2, &__pyx_t_3, &__pyx_t_4);
- __Pyx_XGOTREF(__pyx_t_2);
- __Pyx_XGOTREF(__pyx_t_3);
- __Pyx_XGOTREF(__pyx_t_4);
- /*try:*/ {
-
- /* "View.MemoryView":493
- * bytesitem = itemp[:self.view.itemsize]
- * try:
- * result = struct.unpack(self.view.format, bytesitem) # <<<<<<<<<<<<<<
- * except struct.error:
- * raise ValueError("Unable to convert item to object")
- */
- __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_unpack); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 493, __pyx_L3_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_6 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 493, __pyx_L3_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_7 = NULL;
- __pyx_t_8 = 0;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) {
- __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_5);
- if (likely(__pyx_t_7)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);
- __Pyx_INCREF(__pyx_t_7);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_5, function);
- __pyx_t_8 = 1;
- }
- }
- #if CYTHON_FAST_PYCALL
- if (PyFunction_Check(__pyx_t_5)) {
- PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem};
- __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error)
- __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- } else
- #endif
- #if CYTHON_FAST_PYCCALL
- if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) {
- PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem};
- __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error)
- __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- } else
- #endif
- {
- __pyx_t_9 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 493, __pyx_L3_error)
- __Pyx_GOTREF(__pyx_t_9);
- if (__pyx_t_7) {
- __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_7); __pyx_t_7 = NULL;
- }
- __Pyx_GIVEREF(__pyx_t_6);
- PyTuple_SET_ITEM(__pyx_t_9, 0+__pyx_t_8, __pyx_t_6);
- __Pyx_INCREF(__pyx_v_bytesitem);
- __Pyx_GIVEREF(__pyx_v_bytesitem);
- PyTuple_SET_ITEM(__pyx_t_9, 1+__pyx_t_8, __pyx_v_bytesitem);
- __pyx_t_6 = 0;
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_9, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- }
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __pyx_v_result = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":492
- *
- * bytesitem = itemp[:self.view.itemsize]
- * try: # <<<<<<<<<<<<<<
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error:
- */
- }
-
- /* "View.MemoryView":497
- * raise ValueError("Unable to convert item to object")
- * else:
- * if len(self.view.format) == 1: # <<<<<<<<<<<<<<
- * return result[0]
- * return result
- */
- /*else:*/ {
- __pyx_t_10 = strlen(__pyx_v_self->view.format);
- __pyx_t_11 = ((__pyx_t_10 == 1) != 0);
- if (__pyx_t_11) {
-
- /* "View.MemoryView":498
- * else:
- * if len(self.view.format) == 1:
- * return result[0] # <<<<<<<<<<<<<<
- * return result
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_result, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 498, __pyx_L5_except_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L6_except_return;
-
- /* "View.MemoryView":497
- * raise ValueError("Unable to convert item to object")
- * else:
- * if len(self.view.format) == 1: # <<<<<<<<<<<<<<
- * return result[0]
- * return result
- */
- }
-
- /* "View.MemoryView":499
- * if len(self.view.format) == 1:
- * return result[0]
- * return result # <<<<<<<<<<<<<<
- *
- * cdef assign_item_from_object(self, char *itemp, object value):
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_result);
- __pyx_r = __pyx_v_result;
- goto __pyx_L6_except_return;
- }
- __pyx_L3_error:;
- __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
-
- /* "View.MemoryView":494
- * try:
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error: # <<<<<<<<<<<<<<
- * raise ValueError("Unable to convert item to object")
- * else:
- */
- __Pyx_ErrFetch(&__pyx_t_1, &__pyx_t_5, &__pyx_t_9);
- __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_error); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 494, __pyx_L5_except_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_8 = __Pyx_PyErr_GivenExceptionMatches(__pyx_t_1, __pyx_t_6);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_ErrRestore(__pyx_t_1, __pyx_t_5, __pyx_t_9);
- __pyx_t_1 = 0; __pyx_t_5 = 0; __pyx_t_9 = 0;
- if (__pyx_t_8) {
- __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- if (__Pyx_GetException(&__pyx_t_9, &__pyx_t_5, &__pyx_t_1) < 0) __PYX_ERR(1, 494, __pyx_L5_except_error)
- __Pyx_GOTREF(__pyx_t_9);
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "View.MemoryView":495
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error:
- * raise ValueError("Unable to convert item to object") # <<<<<<<<<<<<<<
- * else:
- * if len(self.view.format) == 1:
- */
- __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 495, __pyx_L5_except_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_Raise(__pyx_t_6, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- __PYX_ERR(1, 495, __pyx_L5_except_error)
- }
- goto __pyx_L5_except_error;
- __pyx_L5_except_error:;
-
- /* "View.MemoryView":492
- *
- * bytesitem = itemp[:self.view.itemsize]
- * try: # <<<<<<<<<<<<<<
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error:
- */
- __Pyx_XGIVEREF(__pyx_t_2);
- __Pyx_XGIVEREF(__pyx_t_3);
- __Pyx_XGIVEREF(__pyx_t_4);
- __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4);
- goto __pyx_L1_error;
- __pyx_L6_except_return:;
- __Pyx_XGIVEREF(__pyx_t_2);
- __Pyx_XGIVEREF(__pyx_t_3);
- __Pyx_XGIVEREF(__pyx_t_4);
- __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4);
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":485
- * self.assign_item_from_object(itemp, value)
- *
- * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<<
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_XDECREF(__pyx_t_7);
- __Pyx_XDECREF(__pyx_t_9);
- __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_struct);
- __Pyx_XDECREF(__pyx_v_bytesitem);
- __Pyx_XDECREF(__pyx_v_result);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
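-/* convert_item_to_object defers to struct.unpack for formats Cython cannot
- * convert natively.  A rough plain-C analogue of the single-character case
- * is a branch on the format code with a memcpy into a typed value -- only
- * two codes are sketched here, whereas struct handles the full format
- * grammar and raises ValueError on failure: */
-#include <string.h>
-
-static double item_as_double(const char *itemp, char fmt) {
-    if (fmt == 'i') {                   /* struct code 'i': C int */
-        int v; memcpy(&v, itemp, sizeof v);
-        return (double)v;
-    }
-    if (fmt == 'd') {                   /* struct code 'd': C double */
-        double v; memcpy(&v, itemp, sizeof v);
-        return v;
-    }
-    return 0.0;     /* unknown code: the Python path raises ValueError */
-}
-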
-/* "View.MemoryView":501
- * return result
- *
- * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<<
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- */
-
-static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) {
- PyObject *__pyx_v_struct = NULL;
- char __pyx_v_c;
- PyObject *__pyx_v_bytesvalue = 0;
- Py_ssize_t __pyx_v_i;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_t_2;
- int __pyx_t_3;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- PyObject *__pyx_t_6 = NULL;
- int __pyx_t_7;
- PyObject *__pyx_t_8 = NULL;
- Py_ssize_t __pyx_t_9;
- PyObject *__pyx_t_10 = NULL;
- char *__pyx_t_11;
- char *__pyx_t_12;
- char *__pyx_t_13;
- char *__pyx_t_14;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("assign_item_from_object", 0);
-
- /* "View.MemoryView":504
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- * import struct # <<<<<<<<<<<<<<
- * cdef char c
- * cdef bytes bytesvalue
- */
- __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 504, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_v_struct = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":509
- * cdef Py_ssize_t i
- *
- * if isinstance(value, tuple): # <<<<<<<<<<<<<<
- * bytesvalue = struct.pack(self.view.format, *value)
- * else:
- */
- __pyx_t_2 = PyTuple_Check(__pyx_v_value);
- __pyx_t_3 = (__pyx_t_2 != 0);
- if (__pyx_t_3) {
-
- /* "View.MemoryView":510
- *
- * if isinstance(value, tuple):
- * bytesvalue = struct.pack(self.view.format, *value) # <<<<<<<<<<<<<<
- * else:
- * bytesvalue = struct.pack(self.view.format, value)
- */
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 510, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_4 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 510, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4);
- __pyx_t_4 = 0;
- __pyx_t_4 = __Pyx_PySequence_Tuple(__pyx_v_value); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_6 = PyNumber_Add(__pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 510, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_6, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 510, __pyx_L1_error)
- __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4);
- __pyx_t_4 = 0;
-
- /* "View.MemoryView":509
- * cdef Py_ssize_t i
- *
- * if isinstance(value, tuple): # <<<<<<<<<<<<<<
- * bytesvalue = struct.pack(self.view.format, *value)
- * else:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":512
- * bytesvalue = struct.pack(self.view.format, *value)
- * else:
- * bytesvalue = struct.pack(self.view.format, value) # <<<<<<<<<<<<<<
- *
- * for i, c in enumerate(bytesvalue):
- */
- /*else*/ {
- __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_5 = NULL;
- __pyx_t_7 = 0;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) {
- __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_6);
- if (likely(__pyx_t_5)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6);
- __Pyx_INCREF(__pyx_t_5);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_6, function);
- __pyx_t_7 = 1;
- }
- }
- #if CYTHON_FAST_PYCALL
- if (PyFunction_Check(__pyx_t_6)) {
- PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value};
- __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- } else
- #endif
- #if CYTHON_FAST_PYCCALL
- if (__Pyx_PyFastCFunction_Check(__pyx_t_6)) {
- PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value};
- __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- } else
- #endif
- {
- __pyx_t_8 = PyTuple_New(2+__pyx_t_7); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_8);
- if (__pyx_t_5) {
- __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_5); __pyx_t_5 = NULL;
- }
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_8, 0+__pyx_t_7, __pyx_t_1);
- __Pyx_INCREF(__pyx_v_value);
- __Pyx_GIVEREF(__pyx_v_value);
- PyTuple_SET_ITEM(__pyx_t_8, 1+__pyx_t_7, __pyx_v_value);
- __pyx_t_1 = 0;
- __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_8, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
- }
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 512, __pyx_L1_error)
- __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4);
- __pyx_t_4 = 0;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":514
- * bytesvalue = struct.pack(self.view.format, value)
- *
- * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<<
- * itemp[i] = c
- *
- */
- __pyx_t_9 = 0;
- if (unlikely(__pyx_v_bytesvalue == Py_None)) {
- PyErr_SetString(PyExc_TypeError, "'NoneType' is not iterable");
- __PYX_ERR(1, 514, __pyx_L1_error)
- }
- __Pyx_INCREF(__pyx_v_bytesvalue);
- __pyx_t_10 = __pyx_v_bytesvalue;
- __pyx_t_12 = PyBytes_AS_STRING(__pyx_t_10);
- __pyx_t_13 = (__pyx_t_12 + PyBytes_GET_SIZE(__pyx_t_10));
- for (__pyx_t_14 = __pyx_t_12; __pyx_t_14 < __pyx_t_13; __pyx_t_14++) {
- __pyx_t_11 = __pyx_t_14;
- __pyx_v_c = (__pyx_t_11[0]);
-
- /* "View.MemoryView":515
- *
- * for i, c in enumerate(bytesvalue):
- * itemp[i] = c # <<<<<<<<<<<<<<
- *
- * @cname('getbuffer')
- */
- __pyx_v_i = __pyx_t_9;
-
- /* "View.MemoryView":514
- * bytesvalue = struct.pack(self.view.format, value)
- *
- * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<<
- * itemp[i] = c
- *
- */
- __pyx_t_9 = (__pyx_t_9 + 1);
-
- /* "View.MemoryView":515
- *
- * for i, c in enumerate(bytesvalue):
- * itemp[i] = c # <<<<<<<<<<<<<<
- *
- * @cname('getbuffer')
- */
- (__pyx_v_itemp[__pyx_v_i]) = __pyx_v_c;
- }
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
-
- /* "View.MemoryView":501
- * return result
- *
- * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<<
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_XDECREF(__pyx_t_8);
- __Pyx_XDECREF(__pyx_t_10);
- __Pyx_AddTraceback("View.MemoryView.memoryview.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_struct);
- __Pyx_XDECREF(__pyx_v_bytesvalue);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
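-/* assign_item_from_object is the inverse: struct.pack produces itemsize
- * bytes, and the `for i, c in enumerate(bytesvalue): itemp[i] = c` loop
- * writes them into the item slot.  In plain C that whole loop collapses to
- * a single memcpy, assuming the packed bytes already match the buffer's
- * layout: */
-#include <string.h>
-
-static void write_item(char *itemp, const void *packed, size_t itemsize) {
-    memcpy(itemp, packed, itemsize);    /* same effect as the byte loop */
-}
-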
-/* "View.MemoryView":518
- *
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<<
- * if flags & PyBUF_WRITABLE and self.view.readonly:
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- */
-
-/* Python wrapper */
-static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
-static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- Py_ssize_t *__pyx_t_4;
- char *__pyx_t_5;
- void *__pyx_t_6;
- int __pyx_t_7;
- Py_ssize_t __pyx_t_8;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- if (__pyx_v_info == NULL) {
- PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete");
- return -1;
- }
- __Pyx_RefNannySetupContext("__getbuffer__", 0);
- __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None);
- __Pyx_GIVEREF(__pyx_v_info->obj);
-
- /* "View.MemoryView":519
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<<
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- *
- */
- __pyx_t_2 = ((__pyx_v_flags & PyBUF_WRITABLE) != 0);
- if (__pyx_t_2) {
- } else {
- __pyx_t_1 = __pyx_t_2;
- goto __pyx_L4_bool_binop_done;
- }
- __pyx_t_2 = (__pyx_v_self->view.readonly != 0);
- __pyx_t_1 = __pyx_t_2;
- __pyx_L4_bool_binop_done:;
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":520
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * if flags & PyBUF_WRITABLE and self.view.readonly:
- * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_ND:
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__11, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 520, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 520, __pyx_L1_error)
-
- /* "View.MemoryView":519
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<<
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- *
- */
- }
-
- /* "View.MemoryView":522
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- *
- * if flags & PyBUF_ND: # <<<<<<<<<<<<<<
- * info.shape = self.view.shape
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_ND) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":523
- *
- * if flags & PyBUF_ND:
- * info.shape = self.view.shape # <<<<<<<<<<<<<<
- * else:
- * info.shape = NULL
- */
- __pyx_t_4 = __pyx_v_self->view.shape;
- __pyx_v_info->shape = __pyx_t_4;
-
- /* "View.MemoryView":522
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- *
- * if flags & PyBUF_ND: # <<<<<<<<<<<<<<
- * info.shape = self.view.shape
- * else:
- */
- goto __pyx_L6;
- }
-
- /* "View.MemoryView":525
- * info.shape = self.view.shape
- * else:
- * info.shape = NULL # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_STRIDES:
- */
- /*else*/ {
- __pyx_v_info->shape = NULL;
- }
- __pyx_L6:;
-
- /* "View.MemoryView":527
- * info.shape = NULL
- *
- * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<<
- * info.strides = self.view.strides
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_STRIDES) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":528
- *
- * if flags & PyBUF_STRIDES:
- * info.strides = self.view.strides # <<<<<<<<<<<<<<
- * else:
- * info.strides = NULL
- */
- __pyx_t_4 = __pyx_v_self->view.strides;
- __pyx_v_info->strides = __pyx_t_4;
-
- /* "View.MemoryView":527
- * info.shape = NULL
- *
- * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<<
- * info.strides = self.view.strides
- * else:
- */
- goto __pyx_L7;
- }
-
- /* "View.MemoryView":530
- * info.strides = self.view.strides
- * else:
- * info.strides = NULL # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_INDIRECT:
- */
- /*else*/ {
- __pyx_v_info->strides = NULL;
- }
- __pyx_L7:;
-
- /* "View.MemoryView":532
- * info.strides = NULL
- *
- * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<<
- * info.suboffsets = self.view.suboffsets
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_INDIRECT) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":533
- *
- * if flags & PyBUF_INDIRECT:
- * info.suboffsets = self.view.suboffsets # <<<<<<<<<<<<<<
- * else:
- * info.suboffsets = NULL
- */
- __pyx_t_4 = __pyx_v_self->view.suboffsets;
- __pyx_v_info->suboffsets = __pyx_t_4;
-
- /* "View.MemoryView":532
- * info.strides = NULL
- *
- * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<<
- * info.suboffsets = self.view.suboffsets
- * else:
- */
- goto __pyx_L8;
- }
-
- /* "View.MemoryView":535
- * info.suboffsets = self.view.suboffsets
- * else:
- * info.suboffsets = NULL # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_FORMAT:
- */
- /*else*/ {
- __pyx_v_info->suboffsets = NULL;
- }
- __pyx_L8:;
-
- /* "View.MemoryView":537
- * info.suboffsets = NULL
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * info.format = self.view.format
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":538
- *
- * if flags & PyBUF_FORMAT:
- * info.format = self.view.format # <<<<<<<<<<<<<<
- * else:
- * info.format = NULL
- */
- __pyx_t_5 = __pyx_v_self->view.format;
- __pyx_v_info->format = __pyx_t_5;
-
- /* "View.MemoryView":537
- * info.suboffsets = NULL
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * info.format = self.view.format
- * else:
- */
- goto __pyx_L9;
- }
-
- /* "View.MemoryView":540
- * info.format = self.view.format
- * else:
- * info.format = NULL # <<<<<<<<<<<<<<
- *
- * info.buf = self.view.buf
- */
- /*else*/ {
- __pyx_v_info->format = NULL;
- }
- __pyx_L9:;
-
- /* "View.MemoryView":542
- * info.format = NULL
- *
- * info.buf = self.view.buf # <<<<<<<<<<<<<<
- * info.ndim = self.view.ndim
- * info.itemsize = self.view.itemsize
- */
- __pyx_t_6 = __pyx_v_self->view.buf;
- __pyx_v_info->buf = __pyx_t_6;
-
- /* "View.MemoryView":543
- *
- * info.buf = self.view.buf
- * info.ndim = self.view.ndim # <<<<<<<<<<<<<<
- * info.itemsize = self.view.itemsize
- * info.len = self.view.len
- */
- __pyx_t_7 = __pyx_v_self->view.ndim;
- __pyx_v_info->ndim = __pyx_t_7;
-
- /* "View.MemoryView":544
- * info.buf = self.view.buf
- * info.ndim = self.view.ndim
- * info.itemsize = self.view.itemsize # <<<<<<<<<<<<<<
- * info.len = self.view.len
- * info.readonly = self.view.readonly
- */
- __pyx_t_8 = __pyx_v_self->view.itemsize;
- __pyx_v_info->itemsize = __pyx_t_8;
-
- /* "View.MemoryView":545
- * info.ndim = self.view.ndim
- * info.itemsize = self.view.itemsize
- * info.len = self.view.len # <<<<<<<<<<<<<<
- * info.readonly = self.view.readonly
- * info.obj = self
- */
- __pyx_t_8 = __pyx_v_self->view.len;
- __pyx_v_info->len = __pyx_t_8;
-
- /* "View.MemoryView":546
- * info.itemsize = self.view.itemsize
- * info.len = self.view.len
- * info.readonly = self.view.readonly # <<<<<<<<<<<<<<
- * info.obj = self
- *
- */
- __pyx_t_1 = __pyx_v_self->view.readonly;
- __pyx_v_info->readonly = __pyx_t_1;
-
- /* "View.MemoryView":547
- * info.len = self.view.len
- * info.readonly = self.view.readonly
- * info.obj = self # <<<<<<<<<<<<<<
- *
- * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)")
- */
- __Pyx_INCREF(((PyObject *)__pyx_v_self));
- __Pyx_GIVEREF(((PyObject *)__pyx_v_self));
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj);
- __pyx_v_info->obj = ((PyObject *)__pyx_v_self);
-
- /* "View.MemoryView":518
- *
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<<
- * if flags & PyBUF_WRITABLE and self.view.readonly:
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- if (__pyx_v_info->obj != NULL) {
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;
- }
- goto __pyx_L2;
- __pyx_L0:;
- if (__pyx_v_info->obj == Py_None) {
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;
- }
- __pyx_L2:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":553
- *
- * @property
- * def T(self): # <<<<<<<<<<<<<<
- * cdef _memoryviewslice result = memoryview_copy(self)
- * transpose_memslice(&result.from_slice)
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- struct __pyx_memoryviewslice_obj *__pyx_v_result = 0;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_t_2;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":554
- * @property
- * def T(self):
- * cdef _memoryviewslice result = memoryview_copy(self) # <<<<<<<<<<<<<<
- * transpose_memslice(&result.from_slice)
- * return result
- */
- __pyx_t_1 = __pyx_memoryview_copy_object(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 554, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_memoryviewslice_type))))) __PYX_ERR(1, 554, __pyx_L1_error)
- __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":555
- * def T(self):
- * cdef _memoryviewslice result = memoryview_copy(self)
- * transpose_memslice(&result.from_slice) # <<<<<<<<<<<<<<
- * return result
- *
- */
- __pyx_t_2 = __pyx_memslice_transpose((&__pyx_v_result->from_slice)); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 555, __pyx_L1_error)
-
- /* "View.MemoryView":556
- * cdef _memoryviewslice result = memoryview_copy(self)
- * transpose_memslice(&result.from_slice)
- * return result # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(((PyObject *)__pyx_v_result));
- __pyx_r = ((PyObject *)__pyx_v_result);
- goto __pyx_L0;
-
- /* "View.MemoryView":553
- *
- * @property
- * def T(self): # <<<<<<<<<<<<<<
- * cdef _memoryviewslice result = memoryview_copy(self)
- * transpose_memslice(&result.from_slice)
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.T.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_result);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":559
- *
- * @property
- * def base(self): # <<<<<<<<<<<<<<
- * return self.obj
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":560
- * @property
- * def base(self):
- * return self.obj # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_self->obj);
- __pyx_r = __pyx_v_self->obj;
- goto __pyx_L0;
-
- /* "View.MemoryView":559
- *
- * @property
- * def base(self): # <<<<<<<<<<<<<<
- * return self.obj
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":563
- *
- * @property
- * def shape(self): # <<<<<<<<<<<<<<
- * return tuple([length for length in self.view.shape[:self.view.ndim]])
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- Py_ssize_t __pyx_v_length;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- Py_ssize_t *__pyx_t_2;
- Py_ssize_t *__pyx_t_3;
- Py_ssize_t *__pyx_t_4;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":564
- * @property
- * def shape(self):
- * return tuple([length for length in self.view.shape[:self.view.ndim]]) # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 564, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_3 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim);
- for (__pyx_t_4 = __pyx_v_self->view.shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) {
- __pyx_t_2 = __pyx_t_4;
- __pyx_v_length = (__pyx_t_2[0]);
- __pyx_t_5 = PyInt_FromSsize_t(__pyx_v_length); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 564, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(1, 564, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- }
- __pyx_t_5 = PyList_AsTuple(((PyObject*)__pyx_t_1)); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 564, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_r = __pyx_t_5;
- __pyx_t_5 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":563
- *
- * @property
- * def shape(self): # <<<<<<<<<<<<<<
- * return tuple([length for length in self.view.shape[:self.view.ndim]])
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.memoryview.shape.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":567
- *
- * @property
- * def strides(self): # <<<<<<<<<<<<<<
- * if self.view.strides == NULL:
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- Py_ssize_t __pyx_v_stride;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- Py_ssize_t *__pyx_t_3;
- Py_ssize_t *__pyx_t_4;
- Py_ssize_t *__pyx_t_5;
- PyObject *__pyx_t_6 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":568
- * @property
- * def strides(self):
- * if self.view.strides == NULL: # <<<<<<<<<<<<<<
- *
- * raise ValueError("Buffer view does not expose strides")
- */
- __pyx_t_1 = ((__pyx_v_self->view.strides == NULL) != 0);
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":570
- * if self.view.strides == NULL:
- *
- * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<<
- *
- * return tuple([stride for stride in self.view.strides[:self.view.ndim]])
- */
- __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 570, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_Raise(__pyx_t_2, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __PYX_ERR(1, 570, __pyx_L1_error)
-
- /* "View.MemoryView":568
- * @property
- * def strides(self):
- * if self.view.strides == NULL: # <<<<<<<<<<<<<<
- *
- * raise ValueError("Buffer view does not expose strides")
- */
- }
-
- /* "View.MemoryView":572
- * raise ValueError("Buffer view does not expose strides")
- *
- * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 572, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_4 = (__pyx_v_self->view.strides + __pyx_v_self->view.ndim);
- for (__pyx_t_5 = __pyx_v_self->view.strides; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) {
- __pyx_t_3 = __pyx_t_5;
- __pyx_v_stride = (__pyx_t_3[0]);
- __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_stride); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 572, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(1, 572, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- }
- __pyx_t_6 = PyList_AsTuple(((PyObject*)__pyx_t_2)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 572, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_r = __pyx_t_6;
- __pyx_t_6 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":567
- *
- * @property
- * def strides(self): # <<<<<<<<<<<<<<
- * if self.view.strides == NULL:
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_AddTraceback("View.MemoryView.memoryview.strides.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":575
- *
- * @property
- * def suboffsets(self): # <<<<<<<<<<<<<<
- * if self.view.suboffsets == NULL:
- * return (-1,) * self.view.ndim
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- Py_ssize_t __pyx_v_suboffset;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- Py_ssize_t *__pyx_t_4;
- Py_ssize_t *__pyx_t_5;
- Py_ssize_t *__pyx_t_6;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":576
- * @property
- * def suboffsets(self):
- * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<<
- * return (-1,) * self.view.ndim
- *
- */
- __pyx_t_1 = ((__pyx_v_self->view.suboffsets == NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":577
- * def suboffsets(self):
- * if self.view.suboffsets == NULL:
- * return (-1,) * self.view.ndim # <<<<<<<<<<<<<<
- *
- * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]])
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 577, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyNumber_Multiply(__pyx_tuple__13, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 577, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_r = __pyx_t_3;
- __pyx_t_3 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":576
- * @property
- * def suboffsets(self):
- * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<<
- * return (-1,) * self.view.ndim
- *
- */
- }
-
- /* "View.MemoryView":579
- * return (-1,) * self.view.ndim
- *
- * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 579, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_5 = (__pyx_v_self->view.suboffsets + __pyx_v_self->view.ndim);
- for (__pyx_t_6 = __pyx_v_self->view.suboffsets; __pyx_t_6 < __pyx_t_5; __pyx_t_6++) {
- __pyx_t_4 = __pyx_t_6;
- __pyx_v_suboffset = (__pyx_t_4[0]);
- __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_suboffset); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- if (unlikely(__Pyx_ListComp_Append(__pyx_t_3, (PyObject*)__pyx_t_2))) __PYX_ERR(1, 579, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- }
- __pyx_t_2 = PyList_AsTuple(((PyObject*)__pyx_t_3)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":575
- *
- * @property
- * def suboffsets(self): # <<<<<<<<<<<<<<
- * if self.view.suboffsets == NULL:
- * return (-1,) * self.view.ndim
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.suboffsets.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":582
- *
- * @property
- * def ndim(self): # <<<<<<<<<<<<<<
- * return self.view.ndim
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":583
- * @property
- * def ndim(self):
- * return self.view.ndim # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 583, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":582
- *
- * @property
- * def ndim(self): # <<<<<<<<<<<<<<
- * return self.view.ndim
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.ndim.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":586
- *
- * @property
- * def itemsize(self): # <<<<<<<<<<<<<<
- * return self.view.itemsize
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":587
- * @property
- * def itemsize(self):
- * return self.view.itemsize # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 587, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":586
- *
- * @property
- * def itemsize(self): # <<<<<<<<<<<<<<
- * return self.view.itemsize
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.itemsize.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":590
- *
- * @property
- * def nbytes(self): # <<<<<<<<<<<<<<
- * return self.size * self.view.itemsize
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":591
- * @property
- * def nbytes(self):
- * return self.size * self.view.itemsize # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 591, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 591, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 591, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_r = __pyx_t_3;
- __pyx_t_3 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":590
- *
- * @property
- * def nbytes(self): # <<<<<<<<<<<<<<
- * return self.size * self.view.itemsize
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.nbytes.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":594
- *
- * @property
- * def size(self): # <<<<<<<<<<<<<<
- * if self._size is None:
- * result = 1
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_v_result = NULL;
- PyObject *__pyx_v_length = NULL;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- Py_ssize_t *__pyx_t_3;
- Py_ssize_t *__pyx_t_4;
- Py_ssize_t *__pyx_t_5;
- PyObject *__pyx_t_6 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":595
- * @property
- * def size(self):
- * if self._size is None: # <<<<<<<<<<<<<<
- * result = 1
- *
- */
- __pyx_t_1 = (__pyx_v_self->_size == Py_None);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":596
- * def size(self):
- * if self._size is None:
- * result = 1 # <<<<<<<<<<<<<<
- *
- * for length in self.view.shape[:self.view.ndim]:
- */
- __Pyx_INCREF(__pyx_int_1);
- __pyx_v_result = __pyx_int_1;
-
- /* "View.MemoryView":598
- * result = 1
- *
- * for length in self.view.shape[:self.view.ndim]: # <<<<<<<<<<<<<<
- * result *= length
- *
- */
- __pyx_t_4 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim);
- for (__pyx_t_5 = __pyx_v_self->view.shape; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) {
- __pyx_t_3 = __pyx_t_5;
- __pyx_t_6 = PyInt_FromSsize_t((__pyx_t_3[0])); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 598, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_6);
- __pyx_t_6 = 0;
-
- /* "View.MemoryView":599
- *
- * for length in self.view.shape[:self.view.ndim]:
- * result *= length # <<<<<<<<<<<<<<
- *
- * self._size = result
- */
- __pyx_t_6 = PyNumber_InPlaceMultiply(__pyx_v_result, __pyx_v_length); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 599, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_DECREF_SET(__pyx_v_result, __pyx_t_6);
- __pyx_t_6 = 0;
- }
-
- /* "View.MemoryView":601
- * result *= length
- *
- * self._size = result # <<<<<<<<<<<<<<
- *
- * return self._size
- */
- __Pyx_INCREF(__pyx_v_result);
- __Pyx_GIVEREF(__pyx_v_result);
- __Pyx_GOTREF(__pyx_v_self->_size);
- __Pyx_DECREF(__pyx_v_self->_size);
- __pyx_v_self->_size = __pyx_v_result;
-
- /* "View.MemoryView":595
- * @property
- * def size(self):
- * if self._size is None: # <<<<<<<<<<<<<<
- * result = 1
- *
- */
- }
-
- /* "View.MemoryView":603
- * self._size = result
- *
- * return self._size # <<<<<<<<<<<<<<
- *
- * def __len__(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_self->_size);
- __pyx_r = __pyx_v_self->_size;
- goto __pyx_L0;
-
- /* "View.MemoryView":594
- *
- * @property
- * def size(self): # <<<<<<<<<<<<<<
- * if self._size is None:
- * result = 1
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_AddTraceback("View.MemoryView.memoryview.size.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_result);
- __Pyx_XDECREF(__pyx_v_length);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":605
- * return self._size
- *
- * def __len__(self): # <<<<<<<<<<<<<<
- * if self.view.ndim >= 1:
- * return self.view.shape[0]
- */
-
-/* Python wrapper */
-static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self); /*proto*/
-static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self) {
- Py_ssize_t __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__len__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self) {
- Py_ssize_t __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- __Pyx_RefNannySetupContext("__len__", 0);
-
- /* "View.MemoryView":606
- *
- * def __len__(self):
- * if self.view.ndim >= 1: # <<<<<<<<<<<<<<
- * return self.view.shape[0]
- *
- */
- __pyx_t_1 = ((__pyx_v_self->view.ndim >= 1) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":607
- * def __len__(self):
- * if self.view.ndim >= 1:
- * return self.view.shape[0] # <<<<<<<<<<<<<<
- *
- * return 0
- */
- __pyx_r = (__pyx_v_self->view.shape[0]);
- goto __pyx_L0;
-
- /* "View.MemoryView":606
- *
- * def __len__(self):
- * if self.view.ndim >= 1: # <<<<<<<<<<<<<<
- * return self.view.shape[0]
- *
- */
- }
-
- /* "View.MemoryView":609
- * return self.view.shape[0]
- *
- * return 0 # <<<<<<<<<<<<<<
- *
- * def __repr__(self):
- */
- __pyx_r = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":605
- * return self._size
- *
- * def __len__(self): # <<<<<<<<<<<<<<
- * if self.view.ndim >= 1:
- * return self.view.shape[0]
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":611
- * return 0
- *
- * def __repr__(self): # <<<<<<<<<<<<<<
- * return "" % (self.base.__class__.__name__,
- * id(self))
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__repr__", 0);
-
- /* "View.MemoryView":612
- *
- * def __repr__(self):
- * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<<
- * id(self))
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 612, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 612, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 612, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "View.MemoryView":613
- * def __repr__(self):
- * return "" % (self.base.__class__.__name__,
- * id(self)) # <<<<<<<<<<<<<<
- *
- * def __str__(self):
- */
- __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_id, ((PyObject *)__pyx_v_self)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 613, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
-
- /* "View.MemoryView":612
- *
- * def __repr__(self):
- * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<<
- * id(self))
- *
- */
- __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 612, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2);
- __pyx_t_1 = 0;
- __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 612, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":611
- * return 0
- *
- * def __repr__(self): # <<<<<<<<<<<<<<
- * return "" % (self.base.__class__.__name__,
- * id(self))
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__repr__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":615
- * id(self))
- *
- * def __str__(self): # <<<<<<<<<<<<<<
- * return "" % (self.base.__class__.__name__,)
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__str__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__str__", 0);
-
- /* "View.MemoryView":616
- *
- * def __str__(self):
- * return "" % (self.base.__class__.__name__,) # <<<<<<<<<<<<<<
- *
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1);
- __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_object, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":615
- * id(self))
- *
- * def __str__(self): # <<<<<<<<<<<<<<
- * return "" % (self.base.__class__.__name__,)
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__str__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":619
- *
- *
- * def is_c_contig(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("is_c_contig (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self) {
- __Pyx_memviewslice *__pyx_v_mslice;
- __Pyx_memviewslice __pyx_v_tmp;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice *__pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("is_c_contig", 0);
-
- /* "View.MemoryView":622
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<<
- * return slice_is_contig(mslice[0], 'C', self.view.ndim)
- *
- */
- __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 622, __pyx_L1_error)
- __pyx_v_mslice = __pyx_t_1;
-
- /* "View.MemoryView":623
- * cdef __Pyx_memviewslice tmp
- * mslice = get_slice_from_memview(self, &tmp)
- * return slice_is_contig(mslice[0], 'C', self.view.ndim) # <<<<<<<<<<<<<<
- *
- * def is_f_contig(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'C', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 623, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":619
- *
- *
- * def is_c_contig(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.is_c_contig", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":625
- * return slice_is_contig(mslice[0], 'C', self.view.ndim)
- *
- * def is_f_contig(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("is_f_contig (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self) {
- __Pyx_memviewslice *__pyx_v_mslice;
- __Pyx_memviewslice __pyx_v_tmp;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice *__pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("is_f_contig", 0);
-
- /* "View.MemoryView":628
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<<
- * return slice_is_contig(mslice[0], 'F', self.view.ndim)
- *
- */
- __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 628, __pyx_L1_error)
- __pyx_v_mslice = __pyx_t_1;
-
- /* "View.MemoryView":629
- * cdef __Pyx_memviewslice tmp
- * mslice = get_slice_from_memview(self, &tmp)
- * return slice_is_contig(mslice[0], 'F', self.view.ndim) # <<<<<<<<<<<<<<
- *
- * def copy(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'F', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 629, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":625
- * return slice_is_contig(mslice[0], 'C', self.view.ndim)
- *
- * def is_f_contig(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.is_f_contig", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":631
- * return slice_is_contig(mslice[0], 'F', self.view.ndim)
- *
- * def copy(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice mslice
- * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("copy (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self) {
- __Pyx_memviewslice __pyx_v_mslice;
- int __pyx_v_flags;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("copy", 0);
-
- /* "View.MemoryView":633
- * def copy(self):
- * cdef __Pyx_memviewslice mslice
- * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS # <<<<<<<<<<<<<<
- *
- * slice_copy(self, &mslice)
- */
- __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_F_CONTIGUOUS));
-
- /* "View.MemoryView":635
- * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS
- *
- * slice_copy(self, &mslice) # <<<<<<<<<<<<<<
- * mslice = slice_copy_contig(&mslice, "c", self.view.ndim,
- * self.view.itemsize,
- */
- __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_mslice));
-
- /* "View.MemoryView":636
- *
- * slice_copy(self, &mslice)
- * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, # <<<<<<<<<<<<<<
- * self.view.itemsize,
- * flags|PyBUF_C_CONTIGUOUS,
- */
- __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_mslice), ((char *)"c"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_C_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 636, __pyx_L1_error)
- __pyx_v_mslice = __pyx_t_1;
-
- /* "View.MemoryView":641
- * self.dtype_is_object)
- *
- * return memoryview_copy_from_slice(self, &mslice) # <<<<<<<<<<<<<<
- *
- * def copy_fortran(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_mslice)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 641, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":631
- * return slice_is_contig(mslice[0], 'F', self.view.ndim)
- *
- * def copy(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice mslice
- * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.copy", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":643
- * return memoryview_copy_from_slice(self, &mslice)
- *
- * def copy_fortran(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice src, dst
- * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("copy_fortran (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self) {
- __Pyx_memviewslice __pyx_v_src;
- __Pyx_memviewslice __pyx_v_dst;
- int __pyx_v_flags;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("copy_fortran", 0);
-
- /* "View.MemoryView":645
- * def copy_fortran(self):
- * cdef __Pyx_memviewslice src, dst
- * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS # <<<<<<<<<<<<<<
- *
- * slice_copy(self, &src)
- */
- __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_C_CONTIGUOUS));
-
- /* "View.MemoryView":647
- * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS
- *
- * slice_copy(self, &src) # <<<<<<<<<<<<<<
- * dst = slice_copy_contig(&src, "fortran", self.view.ndim,
- * self.view.itemsize,
- */
- __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_src));
-
- /* "View.MemoryView":648
- *
- * slice_copy(self, &src)
- * dst = slice_copy_contig(&src, "fortran", self.view.ndim, # <<<<<<<<<<<<<<
- * self.view.itemsize,
- * flags|PyBUF_F_CONTIGUOUS,
- */
- __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_src), ((char *)"fortran"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_F_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 648, __pyx_L1_error)
- __pyx_v_dst = __pyx_t_1;
-
- /* "View.MemoryView":653
- * self.dtype_is_object)
- *
- * return memoryview_copy_from_slice(self, &dst) # <<<<<<<<<<<<<<
- *
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_dst)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 653, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":643
- * return memoryview_copy_from_slice(self, &mslice)
- *
- * def copy_fortran(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice src, dst
- * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.copy_fortran", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_memoryview___reduce_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__reduce_cython__", 0);
-
- /* "(tree fragment)":2
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 2, __pyx_L1_error)
-
- /* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
-static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_memoryview_2__setstate_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setstate_cython__", 0);
-
- /* "(tree fragment)":4
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 4, __pyx_L1_error)
-
- /* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":657
- *
- * @cname('__pyx_memoryview_new')
- * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<<
- * cdef memoryview result = memoryview(o, flags, dtype_is_object)
- * result.typeinfo = typeinfo
- */
-
-static PyObject *__pyx_memoryview_new(PyObject *__pyx_v_o, int __pyx_v_flags, int __pyx_v_dtype_is_object, __Pyx_TypeInfo *__pyx_v_typeinfo) {
- struct __pyx_memoryview_obj *__pyx_v_result = 0;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("memoryview_cwrapper", 0);
-
- /* "View.MemoryView":658
- * @cname('__pyx_memoryview_new')
- * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo):
- * cdef memoryview result = memoryview(o, flags, dtype_is_object) # <<<<<<<<<<<<<<
- * result.typeinfo = typeinfo
- * return result
- */
- __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 658, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 658, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 658, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_INCREF(__pyx_v_o);
- __Pyx_GIVEREF(__pyx_v_o);
- PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_o);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2);
- __pyx_t_1 = 0;
- __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 658, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_result = ((struct __pyx_memoryview_obj *)__pyx_t_2);
- __pyx_t_2 = 0;
-
- /* "View.MemoryView":659
- * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo):
- * cdef memoryview result = memoryview(o, flags, dtype_is_object)
- * result.typeinfo = typeinfo # <<<<<<<<<<<<<<
- * return result
- *
- */
- __pyx_v_result->typeinfo = __pyx_v_typeinfo;
-
- /* "View.MemoryView":660
- * cdef memoryview result = memoryview(o, flags, dtype_is_object)
- * result.typeinfo = typeinfo
- * return result # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_check')
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(((PyObject *)__pyx_v_result));
- __pyx_r = ((PyObject *)__pyx_v_result);
- goto __pyx_L0;
-
- /* "View.MemoryView":657
- *
- * @cname('__pyx_memoryview_new')
- * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<<
- * cdef memoryview result = memoryview(o, flags, dtype_is_object)
- * result.typeinfo = typeinfo
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_result);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
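-/* memoryview_check (View.MemoryView:663): returns nonzero when `o` is an
- * instance of the Cython memoryview type (an isinstance check). */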
-/* "View.MemoryView":663
- *
- * @cname('__pyx_memoryview_check')
- * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<<
- * return isinstance(o, memoryview)
- *
- */
-
-static CYTHON_INLINE int __pyx_memoryview_check(PyObject *__pyx_v_o) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- __Pyx_RefNannySetupContext("memoryview_check", 0);
-
- /* "View.MemoryView":664
- * @cname('__pyx_memoryview_check')
- * cdef inline bint memoryview_check(object o):
- * return isinstance(o, memoryview) # <<<<<<<<<<<<<<
- *
- * cdef tuple _unellipsify(object index, int ndim):
- */
- __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_o, __pyx_memoryview_type);
- __pyx_r = __pyx_t_1;
- goto __pyx_L0;
-
- /* "View.MemoryView":663
- *
- * @cname('__pyx_memoryview_check')
- * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<<
- * return isinstance(o, memoryview)
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
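-/* _unellipsify (View.MemoryView:666): normalizes an indexing expression.
- * A lone index is wrapped in a 1-tuple; the first Ellipsis expands into
- * enough slice(None) entries to reach `ndim`; items that are neither
- * slices nor integer indices raise TypeError; missing trailing dimensions
- * are padded with slice(None). Returns (have_slices or nslices, tuple). */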
-/* "View.MemoryView":666
- * return isinstance(o, memoryview)
- *
- * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<<
- * """
- * Replace all ellipses with full slices and fill incomplete indices with
- */
-
-static PyObject *_unellipsify(PyObject *__pyx_v_index, int __pyx_v_ndim) {
- PyObject *__pyx_v_tup = NULL;
- PyObject *__pyx_v_result = NULL;
- int __pyx_v_have_slices;
- int __pyx_v_seen_ellipsis;
- CYTHON_UNUSED PyObject *__pyx_v_idx = NULL;
- PyObject *__pyx_v_item = NULL;
- Py_ssize_t __pyx_v_nslices;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- Py_ssize_t __pyx_t_5;
- PyObject *(*__pyx_t_6)(PyObject *);
- PyObject *__pyx_t_7 = NULL;
- Py_ssize_t __pyx_t_8;
- int __pyx_t_9;
- int __pyx_t_10;
- PyObject *__pyx_t_11 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("_unellipsify", 0);
-
- /* "View.MemoryView":671
- * full slices.
- * """
- * if not isinstance(index, tuple): # <<<<<<<<<<<<<<
- * tup = (index,)
- * else:
- */
- __pyx_t_1 = PyTuple_Check(__pyx_v_index);
- __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":672
- * """
- * if not isinstance(index, tuple):
- * tup = (index,) # <<<<<<<<<<<<<<
- * else:
- * tup = index
- */
- __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 672, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_INCREF(__pyx_v_index);
- __Pyx_GIVEREF(__pyx_v_index);
- PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_index);
- __pyx_v_tup = __pyx_t_3;
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":671
- * full slices.
- * """
- * if not isinstance(index, tuple): # <<<<<<<<<<<<<<
- * tup = (index,)
- * else:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":674
- * tup = (index,)
- * else:
- * tup = index # <<<<<<<<<<<<<<
- *
- * result = []
- */
- /*else*/ {
- __Pyx_INCREF(__pyx_v_index);
- __pyx_v_tup = __pyx_v_index;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":676
- * tup = index
- *
- * result = [] # <<<<<<<<<<<<<<
- * have_slices = False
- * seen_ellipsis = False
- */
- __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 676, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_v_result = ((PyObject*)__pyx_t_3);
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":677
- *
- * result = []
- * have_slices = False # <<<<<<<<<<<<<<
- * seen_ellipsis = False
- * for idx, item in enumerate(tup):
- */
- __pyx_v_have_slices = 0;
-
- /* "View.MemoryView":678
- * result = []
- * have_slices = False
- * seen_ellipsis = False # <<<<<<<<<<<<<<
- * for idx, item in enumerate(tup):
- * if item is Ellipsis:
- */
- __pyx_v_seen_ellipsis = 0;
-
- /* "View.MemoryView":679
- * have_slices = False
- * seen_ellipsis = False
- * for idx, item in enumerate(tup): # <<<<<<<<<<<<<<
- * if item is Ellipsis:
- * if not seen_ellipsis:
- */
- __Pyx_INCREF(__pyx_int_0);
- __pyx_t_3 = __pyx_int_0;
- if (likely(PyList_CheckExact(__pyx_v_tup)) || PyTuple_CheckExact(__pyx_v_tup)) {
- __pyx_t_4 = __pyx_v_tup; __Pyx_INCREF(__pyx_t_4); __pyx_t_5 = 0;
- __pyx_t_6 = NULL;
- } else {
- __pyx_t_5 = -1; __pyx_t_4 = PyObject_GetIter(__pyx_v_tup); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 679, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_6 = Py_TYPE(__pyx_t_4)->tp_iternext; if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 679, __pyx_L1_error)
- }
- for (;;) {
- if (likely(!__pyx_t_6)) {
- if (likely(PyList_CheckExact(__pyx_t_4))) {
- if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_4)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_7 = PyList_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 679, __pyx_L1_error)
- #else
- __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- #endif
- } else {
- if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_4)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_7 = PyTuple_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 679, __pyx_L1_error)
- #else
- __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- #endif
- }
- } else {
- __pyx_t_7 = __pyx_t_6(__pyx_t_4);
- if (unlikely(!__pyx_t_7)) {
- PyObject* exc_type = PyErr_Occurred();
- if (exc_type) {
- if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
- else __PYX_ERR(1, 679, __pyx_L1_error)
- }
- break;
- }
- __Pyx_GOTREF(__pyx_t_7);
- }
- __Pyx_XDECREF_SET(__pyx_v_item, __pyx_t_7);
- __pyx_t_7 = 0;
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_3);
- __pyx_t_7 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- __Pyx_DECREF(__pyx_t_3);
- __pyx_t_3 = __pyx_t_7;
- __pyx_t_7 = 0;
-
- /* "View.MemoryView":680
- * seen_ellipsis = False
- * for idx, item in enumerate(tup):
- * if item is Ellipsis: # <<<<<<<<<<<<<<
- * if not seen_ellipsis:
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- */
- __pyx_t_2 = (__pyx_v_item == __pyx_builtin_Ellipsis);
- __pyx_t_1 = (__pyx_t_2 != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":681
- * for idx, item in enumerate(tup):
- * if item is Ellipsis:
- * if not seen_ellipsis: # <<<<<<<<<<<<<<
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- * seen_ellipsis = True
- */
- __pyx_t_1 = ((!(__pyx_v_seen_ellipsis != 0)) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":682
- * if item is Ellipsis:
- * if not seen_ellipsis:
- * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<<
- * seen_ellipsis = True
- * else:
- */
- __pyx_t_8 = PyObject_Length(__pyx_v_tup); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(1, 682, __pyx_L1_error)
- __pyx_t_7 = PyList_New(1 * ((((__pyx_v_ndim - __pyx_t_8) + 1)<0) ? 0:((__pyx_v_ndim - __pyx_t_8) + 1))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 682, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- { Py_ssize_t __pyx_temp;
- for (__pyx_temp=0; __pyx_temp < ((__pyx_v_ndim - __pyx_t_8) + 1); __pyx_temp++) {
- __Pyx_INCREF(__pyx_slice__16);
- __Pyx_GIVEREF(__pyx_slice__16);
- PyList_SET_ITEM(__pyx_t_7, __pyx_temp, __pyx_slice__16);
- }
- }
- __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_7); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 682, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
-
- /* "View.MemoryView":683
- * if not seen_ellipsis:
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- * seen_ellipsis = True # <<<<<<<<<<<<<<
- * else:
- * result.append(slice(None))
- */
- __pyx_v_seen_ellipsis = 1;
-
- /* "View.MemoryView":681
- * for idx, item in enumerate(tup):
- * if item is Ellipsis:
- * if not seen_ellipsis: # <<<<<<<<<<<<<<
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- * seen_ellipsis = True
- */
- goto __pyx_L7;
- }
-
- /* "View.MemoryView":685
- * seen_ellipsis = True
- * else:
- * result.append(slice(None)) # <<<<<<<<<<<<<<
- * have_slices = True
- * else:
- */
- /*else*/ {
- __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_slice__16); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 685, __pyx_L1_error)
- }
- __pyx_L7:;
-
- /* "View.MemoryView":686
- * else:
- * result.append(slice(None))
- * have_slices = True # <<<<<<<<<<<<<<
- * else:
- * if not isinstance(item, slice) and not PyIndex_Check(item):
- */
- __pyx_v_have_slices = 1;
-
- /* "View.MemoryView":680
- * seen_ellipsis = False
- * for idx, item in enumerate(tup):
- * if item is Ellipsis: # <<<<<<<<<<<<<<
- * if not seen_ellipsis:
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- */
- goto __pyx_L6;
- }
-
- /* "View.MemoryView":688
- * have_slices = True
- * else:
- * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<<
- * raise TypeError("Cannot index with type '%s'" % type(item))
- *
- */
- /*else*/ {
- __pyx_t_2 = PySlice_Check(__pyx_v_item);
- __pyx_t_10 = ((!(__pyx_t_2 != 0)) != 0);
- if (__pyx_t_10) {
- } else {
- __pyx_t_1 = __pyx_t_10;
- goto __pyx_L9_bool_binop_done;
- }
- __pyx_t_10 = ((!(PyIndex_Check(__pyx_v_item) != 0)) != 0);
- __pyx_t_1 = __pyx_t_10;
- __pyx_L9_bool_binop_done:;
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":689
- * else:
- * if not isinstance(item, slice) and not PyIndex_Check(item):
- * raise TypeError("Cannot index with type '%s'" % type(item)) # <<<<<<<<<<<<<<
- *
- * have_slices = have_slices or isinstance(item, slice)
- */
- __pyx_t_7 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Cannot_index_with_type_s, ((PyObject *)Py_TYPE(__pyx_v_item))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 689, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_builtin_TypeError, __pyx_t_7); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 689, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_11);
- __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_Raise(__pyx_t_11, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
- __PYX_ERR(1, 689, __pyx_L1_error)
-
- /* "View.MemoryView":688
- * have_slices = True
- * else:
- * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<<
- * raise TypeError("Cannot index with type '%s'" % type(item))
- *
- */
- }
-
- /* "View.MemoryView":691
- * raise TypeError("Cannot index with type '%s'" % type(item))
- *
- * have_slices = have_slices or isinstance(item, slice) # <<<<<<<<<<<<<<
- * result.append(item)
- *
- */
- __pyx_t_10 = (__pyx_v_have_slices != 0);
- if (!__pyx_t_10) {
- } else {
- __pyx_t_1 = __pyx_t_10;
- goto __pyx_L11_bool_binop_done;
- }
- __pyx_t_10 = PySlice_Check(__pyx_v_item);
- __pyx_t_2 = (__pyx_t_10 != 0);
- __pyx_t_1 = __pyx_t_2;
- __pyx_L11_bool_binop_done:;
- __pyx_v_have_slices = __pyx_t_1;
-
- /* "View.MemoryView":692
- *
- * have_slices = have_slices or isinstance(item, slice)
- * result.append(item) # <<<<<<<<<<<<<<
- *
- * nslices = ndim - len(result)
- */
- __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_v_item); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 692, __pyx_L1_error)
- }
- __pyx_L6:;
-
- /* "View.MemoryView":679
- * have_slices = False
- * seen_ellipsis = False
- * for idx, item in enumerate(tup): # <<<<<<<<<<<<<<
- * if item is Ellipsis:
- * if not seen_ellipsis:
- */
- }
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":694
- * result.append(item)
- *
- * nslices = ndim - len(result) # <<<<<<<<<<<<<<
- * if nslices:
- * result.extend([slice(None)] * nslices)
- */
- __pyx_t_5 = PyList_GET_SIZE(__pyx_v_result); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(1, 694, __pyx_L1_error)
- __pyx_v_nslices = (__pyx_v_ndim - __pyx_t_5);
-
- /* "View.MemoryView":695
- *
- * nslices = ndim - len(result)
- * if nslices: # <<<<<<<<<<<<<<
- * result.extend([slice(None)] * nslices)
- *
- */
- __pyx_t_1 = (__pyx_v_nslices != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":696
- * nslices = ndim - len(result)
- * if nslices:
- * result.extend([slice(None)] * nslices) # <<<<<<<<<<<<<<
- *
- * return have_slices or nslices, tuple(result)
- */
- __pyx_t_3 = PyList_New(1 * ((__pyx_v_nslices<0) ? 0:__pyx_v_nslices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 696, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- { Py_ssize_t __pyx_temp;
- for (__pyx_temp=0; __pyx_temp < __pyx_v_nslices; __pyx_temp++) {
- __Pyx_INCREF(__pyx_slice__16);
- __Pyx_GIVEREF(__pyx_slice__16);
- PyList_SET_ITEM(__pyx_t_3, __pyx_temp, __pyx_slice__16);
- }
- }
- __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_3); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 696, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":695
- *
- * nslices = ndim - len(result)
- * if nslices: # <<<<<<<<<<<<<<
- * result.extend([slice(None)] * nslices)
- *
- */
- }
-
- /* "View.MemoryView":698
- * result.extend([slice(None)] * nslices)
- *
- * return have_slices or nslices, tuple(result) # <<<<<<<<<<<<<<
- *
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim):
- */
- __Pyx_XDECREF(__pyx_r);
- if (!__pyx_v_have_slices) {
- } else {
- __pyx_t_4 = __Pyx_PyBool_FromLong(__pyx_v_have_slices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_3 = __pyx_t_4;
- __pyx_t_4 = 0;
- goto __pyx_L14_bool_binop_done;
- }
- __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_nslices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_3 = __pyx_t_4;
- __pyx_t_4 = 0;
- __pyx_L14_bool_binop_done:;
- __pyx_t_4 = PyList_AsTuple(__pyx_v_result); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 698, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_11);
- __Pyx_GIVEREF(__pyx_t_3);
- PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_3);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_t_4);
- __pyx_t_3 = 0;
- __pyx_t_4 = 0;
- __pyx_r = ((PyObject*)__pyx_t_11);
- __pyx_t_11 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":666
- * return isinstance(o, memoryview)
- *
- * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<<
- * """
- * Replace all ellipses with full slices and fill incomplete indices with
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_7);
- __Pyx_XDECREF(__pyx_t_11);
- __Pyx_AddTraceback("View.MemoryView._unellipsify", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_tup);
- __Pyx_XDECREF(__pyx_v_result);
- __Pyx_XDECREF(__pyx_v_idx);
- __Pyx_XDECREF(__pyx_v_item);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
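-/* assert_direct_dimensions (View.MemoryView:700): raises ValueError
- * ("Indirect dimensions not supported") if any of the first `ndim`
- * suboffsets is >= 0, i.e. if the buffer uses indirect dimensions. */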
-/* "View.MemoryView":700
- * return have_slices or nslices, tuple(result)
- *
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<<
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0:
- */
-
-static PyObject *assert_direct_dimensions(Py_ssize_t *__pyx_v_suboffsets, int __pyx_v_ndim) {
- Py_ssize_t __pyx_v_suboffset;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- Py_ssize_t *__pyx_t_1;
- Py_ssize_t *__pyx_t_2;
- Py_ssize_t *__pyx_t_3;
- int __pyx_t_4;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("assert_direct_dimensions", 0);
-
- /* "View.MemoryView":701
- *
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim):
- * for suboffset in suboffsets[:ndim]: # <<<<<<<<<<<<<<
- * if suboffset >= 0:
- * raise ValueError("Indirect dimensions not supported")
- */
- __pyx_t_2 = (__pyx_v_suboffsets + __pyx_v_ndim);
- for (__pyx_t_3 = __pyx_v_suboffsets; __pyx_t_3 < __pyx_t_2; __pyx_t_3++) {
- __pyx_t_1 = __pyx_t_3;
- __pyx_v_suboffset = (__pyx_t_1[0]);
-
- /* "View.MemoryView":702
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim):
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * raise ValueError("Indirect dimensions not supported")
- *
- */
- __pyx_t_4 = ((__pyx_v_suboffset >= 0) != 0);
- if (unlikely(__pyx_t_4)) {
-
- /* "View.MemoryView":703
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0:
- * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__17, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 703, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_Raise(__pyx_t_5, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __PYX_ERR(1, 703, __pyx_L1_error)
-
- /* "View.MemoryView":702
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim):
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * raise ValueError("Indirect dimensions not supported")
- *
- */
- }
- }
-
- /* "View.MemoryView":700
- * return have_slices or nslices, tuple(result)
- *
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<<
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0:
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.assert_direct_dimensions", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
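-/* memview_slice (View.MemoryView:710): applies a tuple of indices to a
- * memoryview. Integer indices collapse a dimension, None inserts a new
- * length-1 dimension, and slice objects are resolved per dimension via
- * slice_memviewslice; the resulting __Pyx_memviewslice is wrapped with
- * memoryview_fromslice, keeping the to_object/to_dtype converters when
- * the source is a _memoryviewslice. */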
-/* "View.MemoryView":710
- *
- * @cname('__pyx_memview_slice')
- * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<<
- * cdef int new_ndim = 0, suboffset_dim = -1, dim
- * cdef bint negative_step
- */
-
-static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *__pyx_v_memview, PyObject *__pyx_v_indices) {
- int __pyx_v_new_ndim;
- int __pyx_v_suboffset_dim;
- int __pyx_v_dim;
- __Pyx_memviewslice __pyx_v_src;
- __Pyx_memviewslice __pyx_v_dst;
- __Pyx_memviewslice *__pyx_v_p_src;
- struct __pyx_memoryviewslice_obj *__pyx_v_memviewsliceobj = 0;
- __Pyx_memviewslice *__pyx_v_p_dst;
- int *__pyx_v_p_suboffset_dim;
- Py_ssize_t __pyx_v_start;
- Py_ssize_t __pyx_v_stop;
- Py_ssize_t __pyx_v_step;
- int __pyx_v_have_start;
- int __pyx_v_have_stop;
- int __pyx_v_have_step;
- PyObject *__pyx_v_index = NULL;
- struct __pyx_memoryview_obj *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- struct __pyx_memoryview_obj *__pyx_t_4;
- char *__pyx_t_5;
- int __pyx_t_6;
- Py_ssize_t __pyx_t_7;
- PyObject *(*__pyx_t_8)(PyObject *);
- PyObject *__pyx_t_9 = NULL;
- Py_ssize_t __pyx_t_10;
- int __pyx_t_11;
- Py_ssize_t __pyx_t_12;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("memview_slice", 0);
-
- /* "View.MemoryView":711
- * @cname('__pyx_memview_slice')
- * cdef memoryview memview_slice(memoryview memview, object indices):
- * cdef int new_ndim = 0, suboffset_dim = -1, dim # <<<<<<<<<<<<<<
- * cdef bint negative_step
- * cdef __Pyx_memviewslice src, dst
- */
- __pyx_v_new_ndim = 0;
- __pyx_v_suboffset_dim = -1;
-
- /* "View.MemoryView":718
- *
- *
- * memset(&dst, 0, sizeof(dst)) # <<<<<<<<<<<<<<
- *
- * cdef _memoryviewslice memviewsliceobj
- */
- (void)(memset((&__pyx_v_dst), 0, (sizeof(__pyx_v_dst))));
-
- /* "View.MemoryView":722
- * cdef _memoryviewslice memviewsliceobj
- *
- * assert memview.view.ndim > 0 # <<<<<<<<<<<<<<
- *
- * if isinstance(memview, _memoryviewslice):
- */
- #ifndef CYTHON_WITHOUT_ASSERTIONS
- if (unlikely(!Py_OptimizeFlag)) {
- if (unlikely(!((__pyx_v_memview->view.ndim > 0) != 0))) {
- PyErr_SetNone(PyExc_AssertionError);
- __PYX_ERR(1, 722, __pyx_L1_error)
- }
- }
- #endif
-
- /* "View.MemoryView":724
- * assert memview.view.ndim > 0
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * memviewsliceobj = memview
- * p_src = &memviewsliceobj.from_slice
- */
- __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":725
- *
- * if isinstance(memview, _memoryviewslice):
- * memviewsliceobj = memview # <<<<<<<<<<<<<<
- * p_src = &memviewsliceobj.from_slice
- * else:
- */
- if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 725, __pyx_L1_error)
- __pyx_t_3 = ((PyObject *)__pyx_v_memview);
- __Pyx_INCREF(__pyx_t_3);
- __pyx_v_memviewsliceobj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3);
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":726
- * if isinstance(memview, _memoryviewslice):
- * memviewsliceobj = memview
- * p_src = &memviewsliceobj.from_slice # <<<<<<<<<<<<<<
- * else:
- * slice_copy(memview, &src)
- */
- __pyx_v_p_src = (&__pyx_v_memviewsliceobj->from_slice);
-
- /* "View.MemoryView":724
- * assert memview.view.ndim > 0
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * memviewsliceobj = memview
- * p_src = &memviewsliceobj.from_slice
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":728
- * p_src = &memviewsliceobj.from_slice
- * else:
- * slice_copy(memview, &src) # <<<<<<<<<<<<<<
- * p_src = &src
- *
- */
- /*else*/ {
- __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_src));
-
- /* "View.MemoryView":729
- * else:
- * slice_copy(memview, &src)
- * p_src = &src # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_p_src = (&__pyx_v_src);
- }
- __pyx_L3:;
-
- /* "View.MemoryView":735
- *
- *
- * dst.memview = p_src.memview # <<<<<<<<<<<<<<
- * dst.data = p_src.data
- *
- */
- __pyx_t_4 = __pyx_v_p_src->memview;
- __pyx_v_dst.memview = __pyx_t_4;
-
- /* "View.MemoryView":736
- *
- * dst.memview = p_src.memview
- * dst.data = p_src.data # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_5 = __pyx_v_p_src->data;
- __pyx_v_dst.data = __pyx_t_5;
-
- /* "View.MemoryView":741
- *
- *
- * cdef __Pyx_memviewslice *p_dst = &dst # <<<<<<<<<<<<<<
- * cdef int *p_suboffset_dim = &suboffset_dim
- * cdef Py_ssize_t start, stop, step
- */
- __pyx_v_p_dst = (&__pyx_v_dst);
-
- /* "View.MemoryView":742
- *
- * cdef __Pyx_memviewslice *p_dst = &dst
- * cdef int *p_suboffset_dim = &suboffset_dim # <<<<<<<<<<<<<<
- * cdef Py_ssize_t start, stop, step
- * cdef bint have_start, have_stop, have_step
- */
- __pyx_v_p_suboffset_dim = (&__pyx_v_suboffset_dim);
-
- /* "View.MemoryView":746
- * cdef bint have_start, have_stop, have_step
- *
- * for dim, index in enumerate(indices): # <<<<<<<<<<<<<<
- * if PyIndex_Check(index):
- * slice_memviewslice(
- */
- __pyx_t_6 = 0;
- if (likely(PyList_CheckExact(__pyx_v_indices)) || PyTuple_CheckExact(__pyx_v_indices)) {
- __pyx_t_3 = __pyx_v_indices; __Pyx_INCREF(__pyx_t_3); __pyx_t_7 = 0;
- __pyx_t_8 = NULL;
- } else {
- __pyx_t_7 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_v_indices); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 746, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_8 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 746, __pyx_L1_error)
- }
- for (;;) {
- if (likely(!__pyx_t_8)) {
- if (likely(PyList_CheckExact(__pyx_t_3))) {
- if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_3)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_9 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 746, __pyx_L1_error)
- #else
- __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 746, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- #endif
- } else {
- if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_3)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_9 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 746, __pyx_L1_error)
- #else
- __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 746, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- #endif
- }
- } else {
- __pyx_t_9 = __pyx_t_8(__pyx_t_3);
- if (unlikely(!__pyx_t_9)) {
- PyObject* exc_type = PyErr_Occurred();
- if (exc_type) {
- if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
- else __PYX_ERR(1, 746, __pyx_L1_error)
- }
- break;
- }
- __Pyx_GOTREF(__pyx_t_9);
- }
- __Pyx_XDECREF_SET(__pyx_v_index, __pyx_t_9);
- __pyx_t_9 = 0;
- __pyx_v_dim = __pyx_t_6;
- __pyx_t_6 = (__pyx_t_6 + 1);
-
- /* "View.MemoryView":747
- *
- * for dim, index in enumerate(indices):
- * if PyIndex_Check(index): # <<<<<<<<<<<<<<
- * slice_memviewslice(
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- */
- __pyx_t_2 = (PyIndex_Check(__pyx_v_index) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":751
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- * dim, new_ndim, p_suboffset_dim,
- * index, 0, 0, # start, stop, step # <<<<<<<<<<<<<<
- * 0, 0, 0, # have_{start,stop,step}
- * False)
- */
- __pyx_t_10 = __Pyx_PyIndex_AsSsize_t(__pyx_v_index); if (unlikely((__pyx_t_10 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 751, __pyx_L1_error)
-
- /* "View.MemoryView":748
- * for dim, index in enumerate(indices):
- * if PyIndex_Check(index):
- * slice_memviewslice( # <<<<<<<<<<<<<<
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- * dim, new_ndim, p_suboffset_dim,
- */
- __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_t_10, 0, 0, 0, 0, 0, 0); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 748, __pyx_L1_error)
-
- /* "View.MemoryView":747
- *
- * for dim, index in enumerate(indices):
- * if PyIndex_Check(index): # <<<<<<<<<<<<<<
- * slice_memviewslice(
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- */
- goto __pyx_L6;
- }
-
- /* "View.MemoryView":754
- * 0, 0, 0, # have_{start,stop,step}
- * False)
- * elif index is None: # <<<<<<<<<<<<<<
- * p_dst.shape[new_ndim] = 1
- * p_dst.strides[new_ndim] = 0
- */
- __pyx_t_2 = (__pyx_v_index == Py_None);
- __pyx_t_1 = (__pyx_t_2 != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":755
- * False)
- * elif index is None:
- * p_dst.shape[new_ndim] = 1 # <<<<<<<<<<<<<<
- * p_dst.strides[new_ndim] = 0
- * p_dst.suboffsets[new_ndim] = -1
- */
- (__pyx_v_p_dst->shape[__pyx_v_new_ndim]) = 1;
-
- /* "View.MemoryView":756
- * elif index is None:
- * p_dst.shape[new_ndim] = 1
- * p_dst.strides[new_ndim] = 0 # <<<<<<<<<<<<<<
- * p_dst.suboffsets[new_ndim] = -1
- * new_ndim += 1
- */
- (__pyx_v_p_dst->strides[__pyx_v_new_ndim]) = 0;
-
- /* "View.MemoryView":757
- * p_dst.shape[new_ndim] = 1
- * p_dst.strides[new_ndim] = 0
- * p_dst.suboffsets[new_ndim] = -1 # <<<<<<<<<<<<<<
- * new_ndim += 1
- * else:
- */
- (__pyx_v_p_dst->suboffsets[__pyx_v_new_ndim]) = -1L;
-
- /* "View.MemoryView":758
- * p_dst.strides[new_ndim] = 0
- * p_dst.suboffsets[new_ndim] = -1
- * new_ndim += 1 # <<<<<<<<<<<<<<
- * else:
- * start = index.start or 0
- */
- __pyx_v_new_ndim = (__pyx_v_new_ndim + 1);
-
- /* "View.MemoryView":754
- * 0, 0, 0, # have_{start,stop,step}
- * False)
- * elif index is None: # <<<<<<<<<<<<<<
- * p_dst.shape[new_ndim] = 1
- * p_dst.strides[new_ndim] = 0
- */
- goto __pyx_L6;
- }
-
- /* "View.MemoryView":760
- * new_ndim += 1
- * else:
- * start = index.start or 0 # <<<<<<<<<<<<<<
- * stop = index.stop or 0
- * step = index.step or 0
- */
- /*else*/ {
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 760, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 760, __pyx_L1_error)
- if (!__pyx_t_1) {
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- } else {
- __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 760, __pyx_L1_error)
- __pyx_t_10 = __pyx_t_12;
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- goto __pyx_L7_bool_binop_done;
- }
- __pyx_t_10 = 0;
- __pyx_L7_bool_binop_done:;
- __pyx_v_start = __pyx_t_10;
-
- /* "View.MemoryView":761
- * else:
- * start = index.start or 0
- * stop = index.stop or 0 # <<<<<<<<<<<<<<
- * step = index.step or 0
- *
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 761, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 761, __pyx_L1_error)
- if (!__pyx_t_1) {
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- } else {
- __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 761, __pyx_L1_error)
- __pyx_t_10 = __pyx_t_12;
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- goto __pyx_L9_bool_binop_done;
- }
- __pyx_t_10 = 0;
- __pyx_L9_bool_binop_done:;
- __pyx_v_stop = __pyx_t_10;
-
- /* "View.MemoryView":762
- * start = index.start or 0
- * stop = index.stop or 0
- * step = index.step or 0 # <<<<<<<<<<<<<<
- *
- * have_start = index.start is not None
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 762, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 762, __pyx_L1_error)
- if (!__pyx_t_1) {
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- } else {
- __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 762, __pyx_L1_error)
- __pyx_t_10 = __pyx_t_12;
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- goto __pyx_L11_bool_binop_done;
- }
- __pyx_t_10 = 0;
- __pyx_L11_bool_binop_done:;
- __pyx_v_step = __pyx_t_10;
-
- /* "View.MemoryView":764
- * step = index.step or 0
- *
- * have_start = index.start is not None # <<<<<<<<<<<<<<
- * have_stop = index.stop is not None
- * have_step = index.step is not None
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 764, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = (__pyx_t_9 != Py_None);
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- __pyx_v_have_start = __pyx_t_1;
-
- /* "View.MemoryView":765
- *
- * have_start = index.start is not None
- * have_stop = index.stop is not None # <<<<<<<<<<<<<<
- * have_step = index.step is not None
- *
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 765, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = (__pyx_t_9 != Py_None);
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- __pyx_v_have_stop = __pyx_t_1;
-
- /* "View.MemoryView":766
- * have_start = index.start is not None
- * have_stop = index.stop is not None
- * have_step = index.step is not None # <<<<<<<<<<<<<<
- *
- * slice_memviewslice(
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 766, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = (__pyx_t_9 != Py_None);
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- __pyx_v_have_step = __pyx_t_1;
-
- /* "View.MemoryView":768
- * have_step = index.step is not None
- *
- * slice_memviewslice( # <<<<<<<<<<<<<<
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- * dim, new_ndim, p_suboffset_dim,
- */
- __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_v_start, __pyx_v_stop, __pyx_v_step, __pyx_v_have_start, __pyx_v_have_stop, __pyx_v_have_step, 1); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 768, __pyx_L1_error)
-
- /* "View.MemoryView":774
- * have_start, have_stop, have_step,
- * True)
- * new_ndim += 1 # <<<<<<<<<<<<<<
- *
- * if isinstance(memview, _memoryviewslice):
- */
- __pyx_v_new_ndim = (__pyx_v_new_ndim + 1);
- }
- __pyx_L6:;
-
- /* "View.MemoryView":746
- * cdef bint have_start, have_stop, have_step
- *
- * for dim, index in enumerate(indices): # <<<<<<<<<<<<<<
- * if PyIndex_Check(index):
- * slice_memviewslice(
- */
- }
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":776
- * new_ndim += 1
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * return memoryview_fromslice(dst, new_ndim,
- * memviewsliceobj.to_object_func,
- */
- __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":777
- *
- * if isinstance(memview, _memoryviewslice):
- * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<<
- * memviewsliceobj.to_object_func,
- * memviewsliceobj.to_dtype_func,
- */
- __Pyx_XDECREF(((PyObject *)__pyx_r));
-
- /* "View.MemoryView":778
- * if isinstance(memview, _memoryviewslice):
- * return memoryview_fromslice(dst, new_ndim,
- * memviewsliceobj.to_object_func, # <<<<<<<<<<<<<<
- * memviewsliceobj.to_dtype_func,
- * memview.dtype_is_object)
- */
- if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 778, __pyx_L1_error) }
-
- /* "View.MemoryView":779
- * return memoryview_fromslice(dst, new_ndim,
- * memviewsliceobj.to_object_func,
- * memviewsliceobj.to_dtype_func, # <<<<<<<<<<<<<<
- * memview.dtype_is_object)
- * else:
- */
- if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 779, __pyx_L1_error) }
-
- /* "View.MemoryView":777
- *
- * if isinstance(memview, _memoryviewslice):
- * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<<
- * memviewsliceobj.to_object_func,
- * memviewsliceobj.to_dtype_func,
- */
- __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, __pyx_v_memviewsliceobj->to_object_func, __pyx_v_memviewsliceobj->to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 777, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 777, __pyx_L1_error)
- __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3);
- __pyx_t_3 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":776
- * new_ndim += 1
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * return memoryview_fromslice(dst, new_ndim,
- * memviewsliceobj.to_object_func,
- */
- }
-
- /* "View.MemoryView":782
- * memview.dtype_is_object)
- * else:
- * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<<
- * memview.dtype_is_object)
- *
- */
- /*else*/ {
- __Pyx_XDECREF(((PyObject *)__pyx_r));
-
- /* "View.MemoryView":783
- * else:
- * return memoryview_fromslice(dst, new_ndim, NULL, NULL,
- * memview.dtype_is_object) # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, NULL, NULL, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 782, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
-
- /* "View.MemoryView":782
- * memview.dtype_is_object)
- * else:
- * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<<
- * memview.dtype_is_object)
- *
- */
- if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 782, __pyx_L1_error)
- __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3);
- __pyx_t_3 = 0;
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":710
- *
- * @cname('__pyx_memview_slice')
- * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<<
- * cdef int new_ndim = 0, suboffset_dim = -1, dim
- * cdef bint negative_step
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_9);
- __Pyx_AddTraceback("View.MemoryView.memview_slice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_memviewsliceobj);
- __Pyx_XDECREF(__pyx_v_index);
- __Pyx_XGIVEREF((PyObject *)__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
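-/* slice_memviewslice (View.MemoryView:807): per-dimension slicing worker.
- * For an integer index it wraps negatives, bounds-checks, and advances
- * dst.data (or the relevant suboffset); for a slice it clamps start/stop
- * Python-style, defaults step to 1, computes the new extent as
- * ceil((stop - start) / step), and writes the new stride/shape/suboffset.
- * Returns 0 on success, -1 after setting a Python exception. */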
-/* "View.MemoryView":807
- *
- * @cname('__pyx_memoryview_slice_memviewslice')
- * cdef int slice_memviewslice( # <<<<<<<<<<<<<<
- * __Pyx_memviewslice *dst,
- * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset,
- */
-
-static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *__pyx_v_dst, Py_ssize_t __pyx_v_shape, Py_ssize_t __pyx_v_stride, Py_ssize_t __pyx_v_suboffset, int __pyx_v_dim, int __pyx_v_new_ndim, int *__pyx_v_suboffset_dim, Py_ssize_t __pyx_v_start, Py_ssize_t __pyx_v_stop, Py_ssize_t __pyx_v_step, int __pyx_v_have_start, int __pyx_v_have_stop, int __pyx_v_have_step, int __pyx_v_is_slice) {
- Py_ssize_t __pyx_v_new_shape;
- int __pyx_v_negative_step;
- int __pyx_r;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
-
- /* "View.MemoryView":827
- * cdef bint negative_step
- *
- * if not is_slice: # <<<<<<<<<<<<<<
- *
- * if start < 0:
- */
- __pyx_t_1 = ((!(__pyx_v_is_slice != 0)) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":829
- * if not is_slice:
- *
- * if start < 0: # <<<<<<<<<<<<<<
- * start += shape
- * if not 0 <= start < shape:
- */
- __pyx_t_1 = ((__pyx_v_start < 0) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":830
- *
- * if start < 0:
- * start += shape # <<<<<<<<<<<<<<
- * if not 0 <= start < shape:
- * _err_dim(IndexError, "Index out of bounds (axis %d)", dim)
- */
- __pyx_v_start = (__pyx_v_start + __pyx_v_shape);
-
- /* "View.MemoryView":829
- * if not is_slice:
- *
- * if start < 0: # <<<<<<<<<<<<<<
- * start += shape
- * if not 0 <= start < shape:
- */
- }
-
- /* "View.MemoryView":831
- * if start < 0:
- * start += shape
- * if not 0 <= start < shape: # <<<<<<<<<<<<<<
- * _err_dim(IndexError, "Index out of bounds (axis %d)", dim)
- * else:
- */
- __pyx_t_1 = (0 <= __pyx_v_start);
- if (__pyx_t_1) {
- __pyx_t_1 = (__pyx_v_start < __pyx_v_shape);
- }
- __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":832
- * start += shape
- * if not 0 <= start < shape:
- * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) # <<<<<<<<<<<<<<
- * else:
- *
- */
- __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"Index out of bounds (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 832, __pyx_L1_error)
-
- /* "View.MemoryView":831
- * if start < 0:
- * start += shape
- * if not 0 <= start < shape: # <<<<<<<<<<<<<<
- * _err_dim(IndexError, "Index out of bounds (axis %d)", dim)
- * else:
- */
- }
-
- /* "View.MemoryView":827
- * cdef bint negative_step
- *
- * if not is_slice: # <<<<<<<<<<<<<<
- *
- * if start < 0:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":835
- * else:
- *
- * negative_step = have_step != 0 and step < 0 # <<<<<<<<<<<<<<
- *
- * if have_step and step == 0:
- */
- /*else*/ {
- __pyx_t_1 = ((__pyx_v_have_step != 0) != 0);
- if (__pyx_t_1) {
- } else {
- __pyx_t_2 = __pyx_t_1;
- goto __pyx_L6_bool_binop_done;
- }
- __pyx_t_1 = ((__pyx_v_step < 0) != 0);
- __pyx_t_2 = __pyx_t_1;
- __pyx_L6_bool_binop_done:;
- __pyx_v_negative_step = __pyx_t_2;
-
- /* "View.MemoryView":837
- * negative_step = have_step != 0 and step < 0
- *
- * if have_step and step == 0: # <<<<<<<<<<<<<<
- * _err_dim(ValueError, "Step may not be zero (axis %d)", dim)
- *
- */
- __pyx_t_1 = (__pyx_v_have_step != 0);
- if (__pyx_t_1) {
- } else {
- __pyx_t_2 = __pyx_t_1;
- goto __pyx_L9_bool_binop_done;
- }
- __pyx_t_1 = ((__pyx_v_step == 0) != 0);
- __pyx_t_2 = __pyx_t_1;
- __pyx_L9_bool_binop_done:;
- if (__pyx_t_2) {
-
- /* "View.MemoryView":838
- *
- * if have_step and step == 0:
- * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Step may not be zero (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 838, __pyx_L1_error)
-
- /* "View.MemoryView":837
- * negative_step = have_step != 0 and step < 0
- *
- * if have_step and step == 0: # <<<<<<<<<<<<<<
- * _err_dim(ValueError, "Step may not be zero (axis %d)", dim)
- *
- */
- }
-
- /* "View.MemoryView":841
- *
- *
- * if have_start: # <<<<<<<<<<<<<<
- * if start < 0:
- * start += shape
- */
- __pyx_t_2 = (__pyx_v_have_start != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":842
- *
- * if have_start:
- * if start < 0: # <<<<<<<<<<<<<<
- * start += shape
- * if start < 0:
- */
- __pyx_t_2 = ((__pyx_v_start < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":843
- * if have_start:
- * if start < 0:
- * start += shape # <<<<<<<<<<<<<<
- * if start < 0:
- * start = 0
- */
- __pyx_v_start = (__pyx_v_start + __pyx_v_shape);
-
- /* "View.MemoryView":844
- * if start < 0:
- * start += shape
- * if start < 0: # <<<<<<<<<<<<<<
- * start = 0
- * elif start >= shape:
- */
- __pyx_t_2 = ((__pyx_v_start < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":845
- * start += shape
- * if start < 0:
- * start = 0 # <<<<<<<<<<<<<<
- * elif start >= shape:
- * if negative_step:
- */
- __pyx_v_start = 0;
-
- /* "View.MemoryView":844
- * if start < 0:
- * start += shape
- * if start < 0: # <<<<<<<<<<<<<<
- * start = 0
- * elif start >= shape:
- */
- }
-
- /* "View.MemoryView":842
- *
- * if have_start:
- * if start < 0: # <<<<<<<<<<<<<<
- * start += shape
- * if start < 0:
- */
- goto __pyx_L12;
- }
-
- /* "View.MemoryView":846
- * if start < 0:
- * start = 0
- * elif start >= shape: # <<<<<<<<<<<<<<
- * if negative_step:
- * start = shape - 1
- */
- __pyx_t_2 = ((__pyx_v_start >= __pyx_v_shape) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":847
- * start = 0
- * elif start >= shape:
- * if negative_step: # <<<<<<<<<<<<<<
- * start = shape - 1
- * else:
- */
- __pyx_t_2 = (__pyx_v_negative_step != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":848
- * elif start >= shape:
- * if negative_step:
- * start = shape - 1 # <<<<<<<<<<<<<<
- * else:
- * start = shape
- */
- __pyx_v_start = (__pyx_v_shape - 1);
-
- /* "View.MemoryView":847
- * start = 0
- * elif start >= shape:
- * if negative_step: # <<<<<<<<<<<<<<
- * start = shape - 1
- * else:
- */
- goto __pyx_L14;
- }
-
- /* "View.MemoryView":850
- * start = shape - 1
- * else:
- * start = shape # <<<<<<<<<<<<<<
- * else:
- * if negative_step:
- */
- /*else*/ {
- __pyx_v_start = __pyx_v_shape;
- }
- __pyx_L14:;
-
- /* "View.MemoryView":846
- * if start < 0:
- * start = 0
- * elif start >= shape: # <<<<<<<<<<<<<<
- * if negative_step:
- * start = shape - 1
- */
- }
- __pyx_L12:;
-
- /* "View.MemoryView":841
- *
- *
- * if have_start: # <<<<<<<<<<<<<<
- * if start < 0:
- * start += shape
- */
- goto __pyx_L11;
- }
-
- /* "View.MemoryView":852
- * start = shape
- * else:
- * if negative_step: # <<<<<<<<<<<<<<
- * start = shape - 1
- * else:
- */
- /*else*/ {
- __pyx_t_2 = (__pyx_v_negative_step != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":853
- * else:
- * if negative_step:
- * start = shape - 1 # <<<<<<<<<<<<<<
- * else:
- * start = 0
- */
- __pyx_v_start = (__pyx_v_shape - 1);
-
- /* "View.MemoryView":852
- * start = shape
- * else:
- * if negative_step: # <<<<<<<<<<<<<<
- * start = shape - 1
- * else:
- */
- goto __pyx_L15;
- }
-
- /* "View.MemoryView":855
- * start = shape - 1
- * else:
- * start = 0 # <<<<<<<<<<<<<<
- *
- * if have_stop:
- */
- /*else*/ {
- __pyx_v_start = 0;
- }
- __pyx_L15:;
- }
- __pyx_L11:;
-
- /* "View.MemoryView":857
- * start = 0
- *
- * if have_stop: # <<<<<<<<<<<<<<
- * if stop < 0:
- * stop += shape
- */
- __pyx_t_2 = (__pyx_v_have_stop != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":858
- *
- * if have_stop:
- * if stop < 0: # <<<<<<<<<<<<<<
- * stop += shape
- * if stop < 0:
- */
- __pyx_t_2 = ((__pyx_v_stop < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":859
- * if have_stop:
- * if stop < 0:
- * stop += shape # <<<<<<<<<<<<<<
- * if stop < 0:
- * stop = 0
- */
- __pyx_v_stop = (__pyx_v_stop + __pyx_v_shape);
-
- /* "View.MemoryView":860
- * if stop < 0:
- * stop += shape
- * if stop < 0: # <<<<<<<<<<<<<<
- * stop = 0
- * elif stop > shape:
- */
- __pyx_t_2 = ((__pyx_v_stop < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":861
- * stop += shape
- * if stop < 0:
- * stop = 0 # <<<<<<<<<<<<<<
- * elif stop > shape:
- * stop = shape
- */
- __pyx_v_stop = 0;
-
- /* "View.MemoryView":860
- * if stop < 0:
- * stop += shape
- * if stop < 0: # <<<<<<<<<<<<<<
- * stop = 0
- * elif stop > shape:
- */
- }
-
- /* "View.MemoryView":858
- *
- * if have_stop:
- * if stop < 0: # <<<<<<<<<<<<<<
- * stop += shape
- * if stop < 0:
- */
- goto __pyx_L17;
- }
-
- /* "View.MemoryView":862
- * if stop < 0:
- * stop = 0
- * elif stop > shape: # <<<<<<<<<<<<<<
- * stop = shape
- * else:
- */
- __pyx_t_2 = ((__pyx_v_stop > __pyx_v_shape) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":863
- * stop = 0
- * elif stop > shape:
- * stop = shape # <<<<<<<<<<<<<<
- * else:
- * if negative_step:
- */
- __pyx_v_stop = __pyx_v_shape;
-
- /* "View.MemoryView":862
- * if stop < 0:
- * stop = 0
- * elif stop > shape: # <<<<<<<<<<<<<<
- * stop = shape
- * else:
- */
- }
- __pyx_L17:;
-
- /* "View.MemoryView":857
- * start = 0
- *
- * if have_stop: # <<<<<<<<<<<<<<
- * if stop < 0:
- * stop += shape
- */
- goto __pyx_L16;
- }
-
- /* "View.MemoryView":865
- * stop = shape
- * else:
- * if negative_step: # <<<<<<<<<<<<<<
- * stop = -1
- * else:
- */
- /*else*/ {
- __pyx_t_2 = (__pyx_v_negative_step != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":866
- * else:
- * if negative_step:
- * stop = -1 # <<<<<<<<<<<<<<
- * else:
- * stop = shape
- */
- __pyx_v_stop = -1L;
-
- /* "View.MemoryView":865
- * stop = shape
- * else:
- * if negative_step: # <<<<<<<<<<<<<<
- * stop = -1
- * else:
- */
- goto __pyx_L19;
- }
-
- /* "View.MemoryView":868
- * stop = -1
- * else:
- * stop = shape # <<<<<<<<<<<<<<
- *
- * if not have_step:
- */
- /*else*/ {
- __pyx_v_stop = __pyx_v_shape;
- }
- __pyx_L19:;
- }
- __pyx_L16:;
-
- /* "View.MemoryView":870
- * stop = shape
- *
- * if not have_step: # <<<<<<<<<<<<<<
- * step = 1
- *
- */
- __pyx_t_2 = ((!(__pyx_v_have_step != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":871
- *
- * if not have_step:
- * step = 1 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_step = 1;
-
- /* "View.MemoryView":870
- * stop = shape
- *
- * if not have_step: # <<<<<<<<<<<<<<
- * step = 1
- *
- */
- }
-
- /* "View.MemoryView":875
- *
- * with cython.cdivision(True):
- * new_shape = (stop - start) // step # <<<<<<<<<<<<<<
- *
- * if (stop - start) - step * new_shape:
- */
- __pyx_v_new_shape = ((__pyx_v_stop - __pyx_v_start) / __pyx_v_step);
-
- /* "View.MemoryView":877
- * new_shape = (stop - start) // step
- *
- * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<<
- * new_shape += 1
- *
- */
- __pyx_t_2 = (((__pyx_v_stop - __pyx_v_start) - (__pyx_v_step * __pyx_v_new_shape)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":878
- *
- * if (stop - start) - step * new_shape:
- * new_shape += 1 # <<<<<<<<<<<<<<
- *
- * if new_shape < 0:
- */
- __pyx_v_new_shape = (__pyx_v_new_shape + 1);
-
- /* "View.MemoryView":877
- * new_shape = (stop - start) // step
- *
- * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<<
- * new_shape += 1
- *
- */
- }
-
- /* "View.MemoryView":880
- * new_shape += 1
- *
- * if new_shape < 0: # <<<<<<<<<<<<<<
- * new_shape = 0
- *
- */
- __pyx_t_2 = ((__pyx_v_new_shape < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":881
- *
- * if new_shape < 0:
- * new_shape = 0 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_new_shape = 0;
-
- /* "View.MemoryView":880
- * new_shape += 1
- *
- * if new_shape < 0: # <<<<<<<<<<<<<<
- * new_shape = 0
- *
- */
- }
-
- /* "View.MemoryView":884
- *
- *
- * dst.strides[new_ndim] = stride * step # <<<<<<<<<<<<<<
- * dst.shape[new_ndim] = new_shape
- * dst.suboffsets[new_ndim] = suboffset
- */
- (__pyx_v_dst->strides[__pyx_v_new_ndim]) = (__pyx_v_stride * __pyx_v_step);
-
- /* "View.MemoryView":885
- *
- * dst.strides[new_ndim] = stride * step
- * dst.shape[new_ndim] = new_shape # <<<<<<<<<<<<<<
- * dst.suboffsets[new_ndim] = suboffset
- *
- */
- (__pyx_v_dst->shape[__pyx_v_new_ndim]) = __pyx_v_new_shape;
-
- /* "View.MemoryView":886
- * dst.strides[new_ndim] = stride * step
- * dst.shape[new_ndim] = new_shape
- * dst.suboffsets[new_ndim] = suboffset # <<<<<<<<<<<<<<
- *
- *
- */
- (__pyx_v_dst->suboffsets[__pyx_v_new_ndim]) = __pyx_v_suboffset;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":889
- *
- *
- * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<<
- * dst.data += start * stride
- * else:
- */
- __pyx_t_2 = (((__pyx_v_suboffset_dim[0]) < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":890
- *
- * if suboffset_dim[0] < 0:
- * dst.data += start * stride # <<<<<<<<<<<<<<
- * else:
- * dst.suboffsets[suboffset_dim[0]] += start * stride
- */
- __pyx_v_dst->data = (__pyx_v_dst->data + (__pyx_v_start * __pyx_v_stride));
-
- /* "View.MemoryView":889
- *
- *
- * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<<
- * dst.data += start * stride
- * else:
- */
- goto __pyx_L23;
- }
-
- /* "View.MemoryView":892
- * dst.data += start * stride
- * else:
- * dst.suboffsets[suboffset_dim[0]] += start * stride # <<<<<<<<<<<<<<
- *
- * if suboffset >= 0:
- */
- /*else*/ {
- __pyx_t_3 = (__pyx_v_suboffset_dim[0]);
- (__pyx_v_dst->suboffsets[__pyx_t_3]) = ((__pyx_v_dst->suboffsets[__pyx_t_3]) + (__pyx_v_start * __pyx_v_stride));
- }
- __pyx_L23:;
-
- /* "View.MemoryView":894
- * dst.suboffsets[suboffset_dim[0]] += start * stride
- *
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * if not is_slice:
- * if new_ndim == 0:
- */
- __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":895
- *
- * if suboffset >= 0:
- * if not is_slice: # <<<<<<<<<<<<<<
- * if new_ndim == 0:
- * dst.data = ( dst.data)[0] + suboffset
- */
- __pyx_t_2 = ((!(__pyx_v_is_slice != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":896
- * if suboffset >= 0:
- * if not is_slice:
- * if new_ndim == 0: # <<<<<<<<<<<<<<
- * dst.data = ( dst.data)[0] + suboffset
- * else:
- */
- __pyx_t_2 = ((__pyx_v_new_ndim == 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":897
- * if not is_slice:
- * if new_ndim == 0:
- * dst.data = ( dst.data)[0] + suboffset # <<<<<<<<<<<<<<
- * else:
- * _err_dim(IndexError, "All dimensions preceding dimension %d "
- */
- __pyx_v_dst->data = ((((char **)__pyx_v_dst->data)[0]) + __pyx_v_suboffset);
-
- /* "View.MemoryView":896
- * if suboffset >= 0:
- * if not is_slice:
- * if new_ndim == 0: # <<<<<<<<<<<<<<
- * dst.data = ( dst.data)[0] + suboffset
- * else:
- */
- goto __pyx_L26;
- }
-
- /* "View.MemoryView":899
- * dst.data = ( dst.data)[0] + suboffset
- * else:
- * _err_dim(IndexError, "All dimensions preceding dimension %d " # <<<<<<<<<<<<<<
- * "must be indexed and not sliced", dim)
- * else:
- */
- /*else*/ {
-
- /* "View.MemoryView":900
- * else:
- * _err_dim(IndexError, "All dimensions preceding dimension %d "
- * "must be indexed and not sliced", dim) # <<<<<<<<<<<<<<
- * else:
- * suboffset_dim[0] = new_ndim
- */
- __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"All dimensions preceding dimension %d must be indexed and not sliced"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 899, __pyx_L1_error)
- }
- __pyx_L26:;
-
- /* "View.MemoryView":895
- *
- * if suboffset >= 0:
- * if not is_slice: # <<<<<<<<<<<<<<
- * if new_ndim == 0:
- * dst.data = ( dst.data)[0] + suboffset
- */
- goto __pyx_L25;
- }
-
- /* "View.MemoryView":902
- * "must be indexed and not sliced", dim)
- * else:
- * suboffset_dim[0] = new_ndim # <<<<<<<<<<<<<<
- *
- * return 0
- */
- /*else*/ {
- (__pyx_v_suboffset_dim[0]) = __pyx_v_new_ndim;
- }
- __pyx_L25:;
-
- /* "View.MemoryView":894
- * dst.suboffsets[suboffset_dim[0]] += start * stride
- *
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * if not is_slice:
- * if new_ndim == 0:
- */
- }
-
- /* "View.MemoryView":904
- * suboffset_dim[0] = new_ndim
- *
- * return 0 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":807
- *
- * @cname('__pyx_memoryview_slice_memviewslice')
- * cdef int slice_memviewslice( # <<<<<<<<<<<<<<
- * __Pyx_memviewslice *dst,
- * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset,
- */
-
- /* function exit code */
- __pyx_L1_error:;
- {
- #ifdef WITH_THREAD
- PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure();
- #endif
- __Pyx_AddTraceback("View.MemoryView.slice_memviewslice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- #ifdef WITH_THREAD
- __Pyx_PyGILState_Release(__pyx_gilstate_save);
- #endif
- }
- __pyx_r = -1;
- __pyx_L0:;
- return __pyx_r;
-}
-
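-/* pybuffer_index (View.MemoryView:910): computes the address of element
- * `index` along dimension `dim` of a Py_buffer, supporting negative
- * indices and raising IndexError on out-of-bounds access; declared
- * `except NULL`, so it returns NULL after setting an exception. */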
-/* "View.MemoryView":910
- *
- * @cname('__pyx_pybuffer_index')
- * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<<
- * Py_ssize_t dim) except NULL:
- * cdef Py_ssize_t shape, stride, suboffset = -1
- */
-
-static char *__pyx_pybuffer_index(Py_buffer *__pyx_v_view, char *__pyx_v_bufp, Py_ssize_t __pyx_v_index, Py_ssize_t __pyx_v_dim) {
- Py_ssize_t __pyx_v_shape;
- Py_ssize_t __pyx_v_stride;
- Py_ssize_t __pyx_v_suboffset;
- Py_ssize_t __pyx_v_itemsize;
- char *__pyx_v_resultp;
- char *__pyx_r;
- __Pyx_RefNannyDeclarations
- Py_ssize_t __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("pybuffer_index", 0);
-
- /* "View.MemoryView":912
- * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index,
- * Py_ssize_t dim) except NULL:
- * cdef Py_ssize_t shape, stride, suboffset = -1 # <<<<<<<<<<<<<<
- * cdef Py_ssize_t itemsize = view.itemsize
- * cdef char *resultp
- */
- __pyx_v_suboffset = -1L;
-
- /* "View.MemoryView":913
- * Py_ssize_t dim) except NULL:
- * cdef Py_ssize_t shape, stride, suboffset = -1
- * cdef Py_ssize_t itemsize = view.itemsize # <<<<<<<<<<<<<<
- * cdef char *resultp
- *
- */
- __pyx_t_1 = __pyx_v_view->itemsize;
- __pyx_v_itemsize = __pyx_t_1;
-
- /* "View.MemoryView":916
- * cdef char *resultp
- *
- * if view.ndim == 0: # <<<<<<<<<<<<<<
- * shape = view.len / itemsize
- * stride = itemsize
- */
- __pyx_t_2 = ((__pyx_v_view->ndim == 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":917
- *
- * if view.ndim == 0:
- * shape = view.len / itemsize # <<<<<<<<<<<<<<
- * stride = itemsize
- * else:
- */
- if (unlikely(__pyx_v_itemsize == 0)) {
- PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero");
- __PYX_ERR(1, 917, __pyx_L1_error)
- }
- else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_view->len))) {
- PyErr_SetString(PyExc_OverflowError, "value too large to perform division");
- __PYX_ERR(1, 917, __pyx_L1_error)
- }
- __pyx_v_shape = __Pyx_div_Py_ssize_t(__pyx_v_view->len, __pyx_v_itemsize);
-
- /* "View.MemoryView":918
- * if view.ndim == 0:
- * shape = view.len / itemsize
- * stride = itemsize # <<<<<<<<<<<<<<
- * else:
- * shape = view.shape[dim]
- */
- __pyx_v_stride = __pyx_v_itemsize;
-
- /* "View.MemoryView":916
- * cdef char *resultp
- *
- * if view.ndim == 0: # <<<<<<<<<<<<<<
- * shape = view.len / itemsize
- * stride = itemsize
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":920
- * stride = itemsize
- * else:
- * shape = view.shape[dim] # <<<<<<<<<<<<<<
- * stride = view.strides[dim]
- * if view.suboffsets != NULL:
- */
- /*else*/ {
- __pyx_v_shape = (__pyx_v_view->shape[__pyx_v_dim]);
-
- /* "View.MemoryView":921
- * else:
- * shape = view.shape[dim]
- * stride = view.strides[dim] # <<<<<<<<<<<<<<
- * if view.suboffsets != NULL:
- * suboffset = view.suboffsets[dim]
- */
- __pyx_v_stride = (__pyx_v_view->strides[__pyx_v_dim]);
-
- /* "View.MemoryView":922
- * shape = view.shape[dim]
- * stride = view.strides[dim]
- * if view.suboffsets != NULL: # <<<<<<<<<<<<<<
- * suboffset = view.suboffsets[dim]
- *
- */
- __pyx_t_2 = ((__pyx_v_view->suboffsets != NULL) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":923
- * stride = view.strides[dim]
- * if view.suboffsets != NULL:
- * suboffset = view.suboffsets[dim] # <<<<<<<<<<<<<<
- *
- * if index < 0:
- */
- __pyx_v_suboffset = (__pyx_v_view->suboffsets[__pyx_v_dim]);
-
- /* "View.MemoryView":922
- * shape = view.shape[dim]
- * stride = view.strides[dim]
- * if view.suboffsets != NULL: # <<<<<<<<<<<<<<
- * suboffset = view.suboffsets[dim]
- *
- */
- }
- }
- __pyx_L3:;
-
- /* "View.MemoryView":925
- * suboffset = view.suboffsets[dim]
- *
- * if index < 0: # <<<<<<<<<<<<<<
- * index += view.shape[dim]
- * if index < 0:
- */
- __pyx_t_2 = ((__pyx_v_index < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":926
- *
- * if index < 0:
- * index += view.shape[dim] # <<<<<<<<<<<<<<
- * if index < 0:
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- */
- __pyx_v_index = (__pyx_v_index + (__pyx_v_view->shape[__pyx_v_dim]));
-
- /* "View.MemoryView":927
- * if index < 0:
- * index += view.shape[dim]
- * if index < 0: # <<<<<<<<<<<<<<
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- */
- __pyx_t_2 = ((__pyx_v_index < 0) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":928
- * index += view.shape[dim]
- * if index < 0:
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<<
- *
- * if index >= shape:
- */
- __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 928, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 928, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 928, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 928, __pyx_L1_error)
-
- /* "View.MemoryView":927
- * if index < 0:
- * index += view.shape[dim]
- * if index < 0: # <<<<<<<<<<<<<<
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- */
- }
-
- /* "View.MemoryView":925
- * suboffset = view.suboffsets[dim]
- *
- * if index < 0: # <<<<<<<<<<<<<<
- * index += view.shape[dim]
- * if index < 0:
- */
- }
-
- /* "View.MemoryView":930
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- * if index >= shape: # <<<<<<<<<<<<<<
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- */
- __pyx_t_2 = ((__pyx_v_index >= __pyx_v_shape) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":931
- *
- * if index >= shape:
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<<
- *
- * resultp = bufp + index * stride
- */
- __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 931, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 931, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 931, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 931, __pyx_L1_error)
-
- /* "View.MemoryView":930
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- * if index >= shape: # <<<<<<<<<<<<<<
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- */
- }
-
- /* "View.MemoryView":933
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- * resultp = bufp + index * stride # <<<<<<<<<<<<<<
- * if suboffset >= 0:
- * resultp = (<char **> resultp)[0] + suboffset
- */
- __pyx_v_resultp = (__pyx_v_bufp + (__pyx_v_index * __pyx_v_stride));
-
- /* "View.MemoryView":934
- *
- * resultp = bufp + index * stride
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * resultp = (<char **> resultp)[0] + suboffset
- *
- */
- __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":935
- * resultp = bufp + index * stride
- * if suboffset >= 0:
- * resultp = (<char **> resultp)[0] + suboffset # <<<<<<<<<<<<<<
- *
- * return resultp
- */
- __pyx_v_resultp = ((((char **)__pyx_v_resultp)[0]) + __pyx_v_suboffset);
-
- /* "View.MemoryView":934
- *
- * resultp = bufp + index * stride
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * resultp = (<char **> resultp)[0] + suboffset
- *
- */
- }
-
- /* "View.MemoryView":937
- * resultp = (<char **> resultp)[0] + suboffset
- *
- * return resultp # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = __pyx_v_resultp;
- goto __pyx_L0;
-
- /* "View.MemoryView":910
- *
- * @cname('__pyx_pybuffer_index')
- * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<<
- * Py_ssize_t dim) except NULL:
- * cdef Py_ssize_t shape, stride, suboffset = -1
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_AddTraceback("View.MemoryView.pybuffer_index", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":943
- *
- * @cname('__pyx_memslice_transpose')
- * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<<
- * cdef int ndim = memslice.memview.view.ndim
- *
- */
-
-static int __pyx_memslice_transpose(__Pyx_memviewslice *__pyx_v_memslice) {
- int __pyx_v_ndim;
- Py_ssize_t *__pyx_v_shape;
- Py_ssize_t *__pyx_v_strides;
- int __pyx_v_i;
- int __pyx_v_j;
- int __pyx_r;
- int __pyx_t_1;
- Py_ssize_t *__pyx_t_2;
- long __pyx_t_3;
- long __pyx_t_4;
- Py_ssize_t __pyx_t_5;
- Py_ssize_t __pyx_t_6;
- int __pyx_t_7;
- int __pyx_t_8;
- int __pyx_t_9;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
-
- /* "View.MemoryView":944
- * @cname('__pyx_memslice_transpose')
- * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0:
- * cdef int ndim = memslice.memview.view.ndim # <<<<<<<<<<<<<<
- *
- * cdef Py_ssize_t *shape = memslice.shape
- */
- __pyx_t_1 = __pyx_v_memslice->memview->view.ndim;
- __pyx_v_ndim = __pyx_t_1;
-
- /* "View.MemoryView":946
- * cdef int ndim = memslice.memview.view.ndim
- *
- * cdef Py_ssize_t *shape = memslice.shape # <<<<<<<<<<<<<<
- * cdef Py_ssize_t *strides = memslice.strides
- *
- */
- __pyx_t_2 = __pyx_v_memslice->shape;
- __pyx_v_shape = __pyx_t_2;
-
- /* "View.MemoryView":947
- *
- * cdef Py_ssize_t *shape = memslice.shape
- * cdef Py_ssize_t *strides = memslice.strides # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_2 = __pyx_v_memslice->strides;
- __pyx_v_strides = __pyx_t_2;
-
- /* "View.MemoryView":951
- *
- * cdef int i, j
- * for i in range(ndim / 2): # <<<<<<<<<<<<<<
- * j = ndim - 1 - i
- * strides[i], strides[j] = strides[j], strides[i]
- */
- __pyx_t_3 = __Pyx_div_long(__pyx_v_ndim, 2);
- __pyx_t_4 = __pyx_t_3;
- for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_4; __pyx_t_1+=1) {
- __pyx_v_i = __pyx_t_1;
-
- /* "View.MemoryView":952
- * cdef int i, j
- * for i in range(ndim / 2):
- * j = ndim - 1 - i # <<<<<<<<<<<<<<
- * strides[i], strides[j] = strides[j], strides[i]
- * shape[i], shape[j] = shape[j], shape[i]
- */
- __pyx_v_j = ((__pyx_v_ndim - 1) - __pyx_v_i);
-
- /* "View.MemoryView":953
- * for i in range(ndim / 2):
- * j = ndim - 1 - i
- * strides[i], strides[j] = strides[j], strides[i] # <<<<<<<<<<<<<<
- * shape[i], shape[j] = shape[j], shape[i]
- *
- */
- __pyx_t_5 = (__pyx_v_strides[__pyx_v_j]);
- __pyx_t_6 = (__pyx_v_strides[__pyx_v_i]);
- (__pyx_v_strides[__pyx_v_i]) = __pyx_t_5;
- (__pyx_v_strides[__pyx_v_j]) = __pyx_t_6;
-
- /* "View.MemoryView":954
- * j = ndim - 1 - i
- * strides[i], strides[j] = strides[j], strides[i]
- * shape[i], shape[j] = shape[j], shape[i] # <<<<<<<<<<<<<<
- *
- * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0:
- */
- __pyx_t_6 = (__pyx_v_shape[__pyx_v_j]);
- __pyx_t_5 = (__pyx_v_shape[__pyx_v_i]);
- (__pyx_v_shape[__pyx_v_i]) = __pyx_t_6;
- (__pyx_v_shape[__pyx_v_j]) = __pyx_t_5;
-
- /* "View.MemoryView":956
- * shape[i], shape[j] = shape[j], shape[i]
- *
- * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<<
- * _err(ValueError, "Cannot transpose memoryview with indirect dimensions")
- *
- */
- __pyx_t_8 = (((__pyx_v_memslice->suboffsets[__pyx_v_i]) >= 0) != 0);
- if (!__pyx_t_8) {
- } else {
- __pyx_t_7 = __pyx_t_8;
- goto __pyx_L6_bool_binop_done;
- }
- __pyx_t_8 = (((__pyx_v_memslice->suboffsets[__pyx_v_j]) >= 0) != 0);
- __pyx_t_7 = __pyx_t_8;
- __pyx_L6_bool_binop_done:;
- if (__pyx_t_7) {
-
- /* "View.MemoryView":957
- *
- * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0:
- * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") # <<<<<<<<<<<<<<
- *
- * return 1
- */
- __pyx_t_9 = __pyx_memoryview_err(__pyx_builtin_ValueError, ((char *)"Cannot transpose memoryview with indirect dimensions")); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 957, __pyx_L1_error)
-
- /* "View.MemoryView":956
- * shape[i], shape[j] = shape[j], shape[i]
- *
- * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<<
- * _err(ValueError, "Cannot transpose memoryview with indirect dimensions")
- *
- */
- }
- }
-
- /* "View.MemoryView":959
- * _err(ValueError, "Cannot transpose memoryview with indirect dimensions")
- *
- * return 1 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = 1;
- goto __pyx_L0;
-
- /* "View.MemoryView":943
- *
- * @cname('__pyx_memslice_transpose')
- * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<<
- * cdef int ndim = memslice.memview.view.ndim
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- {
- #ifdef WITH_THREAD
- PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure();
- #endif
- __Pyx_AddTraceback("View.MemoryView.transpose_memslice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- #ifdef WITH_THREAD
- __Pyx_PyGILState_Release(__pyx_gilstate_save);
- #endif
- }
- __pyx_r = 0;
- __pyx_L0:;
- return __pyx_r;
-}
-
-/* "View.MemoryView":976
- * cdef int (*to_dtype_func)(char *, object) except 0
- *
- * def __dealloc__(self): # <<<<<<<<<<<<<<
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1)
- *
- */
-
-/* Python wrapper */
-static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self); /*proto*/
-static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0);
- __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__dealloc__", 0);
-
- /* "View.MemoryView":977
- *
- * def __dealloc__(self):
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) # <<<<<<<<<<<<<<
- *
- * cdef convert_item_to_object(self, char *itemp):
- */
- __PYX_XDEC_MEMVIEW((&__pyx_v_self->from_slice), 1);
-
- /* "View.MemoryView":976
- * cdef int (*to_dtype_func)(char *, object) except 0
- *
- * def __dealloc__(self): # <<<<<<<<<<<<<<
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1)
- *
- */
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-/* "View.MemoryView":979
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1)
- *
- * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<<
- * if self.to_object_func != NULL:
- * return self.to_object_func(itemp)
- */
-
-static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("convert_item_to_object", 0);
-
- /* "View.MemoryView":980
- *
- * cdef convert_item_to_object(self, char *itemp):
- * if self.to_object_func != NULL: # <<<<<<<<<<<<<<
- * return self.to_object_func(itemp)
- * else:
- */
- __pyx_t_1 = ((__pyx_v_self->to_object_func != NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":981
- * cdef convert_item_to_object(self, char *itemp):
- * if self.to_object_func != NULL:
- * return self.to_object_func(itemp) # <<<<<<<<<<<<<<
- * else:
- * return memoryview.convert_item_to_object(self, itemp)
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __pyx_v_self->to_object_func(__pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 981, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":980
- *
- * cdef convert_item_to_object(self, char *itemp):
- * if self.to_object_func != NULL: # <<<<<<<<<<<<<<
- * return self.to_object_func(itemp)
- * else:
- */
- }
-
- /* "View.MemoryView":983
- * return self.to_object_func(itemp)
- * else:
- * return memoryview.convert_item_to_object(self, itemp) # <<<<<<<<<<<<<<
- *
- * cdef assign_item_from_object(self, char *itemp, object value):
- */
- /*else*/ {
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __pyx_memoryview_convert_item_to_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 983, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":979
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1)
- *
- * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<<
- * if self.to_object_func != NULL:
- * return self.to_object_func(itemp)
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView._memoryviewslice.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":985
- * return memoryview.convert_item_to_object(self, itemp)
- *
- * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<<
- * if self.to_dtype_func != NULL:
- * self.to_dtype_func(itemp, value)
- */
-
-static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("assign_item_from_object", 0);
-
- /* "View.MemoryView":986
- *
- * cdef assign_item_from_object(self, char *itemp, object value):
- * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<<
- * self.to_dtype_func(itemp, value)
- * else:
- */
- __pyx_t_1 = ((__pyx_v_self->to_dtype_func != NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":987
- * cdef assign_item_from_object(self, char *itemp, object value):
- * if self.to_dtype_func != NULL:
- * self.to_dtype_func(itemp, value) # <<<<<<<<<<<<<<
- * else:
- * memoryview.assign_item_from_object(self, itemp, value)
- */
- __pyx_t_2 = __pyx_v_self->to_dtype_func(__pyx_v_itemp, __pyx_v_value); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 987, __pyx_L1_error)
-
- /* "View.MemoryView":986
- *
- * cdef assign_item_from_object(self, char *itemp, object value):
- * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<<
- * self.to_dtype_func(itemp, value)
- * else:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":989
- * self.to_dtype_func(itemp, value)
- * else:
- * memoryview.assign_item_from_object(self, itemp, value) # <<<<<<<<<<<<<<
- *
- * @property
- */
- /*else*/ {
- __pyx_t_3 = __pyx_memoryview_assign_item_from_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 989, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":985
- * return memoryview.convert_item_to_object(self, itemp)
- *
- * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<<
- * if self.to_dtype_func != NULL:
- * self.to_dtype_func(itemp, value)
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView._memoryviewslice.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":992
- *
- * @property
- * def base(self): # <<<<<<<<<<<<<<
- * return self.from_object
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":993
- * @property
- * def base(self):
- * return self.from_object # <<<<<<<<<<<<<<
- *
- * __pyx_getbuffer = capsule(<void *> &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)")
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_self->from_object);
- __pyx_r = __pyx_v_self->from_object;
- goto __pyx_L0;
-
- /* "View.MemoryView":992
- *
- * @property
- * def base(self): # <<<<<<<<<<<<<<
- * return self.from_object
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_memoryviewslice___reduce_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__reduce_cython__", 0);
-
- /* "(tree fragment)":2
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 2, __pyx_L1_error)
-
- /* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
-static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_memoryviewslice_2__setstate_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setstate_cython__", 0);
-
- /* "(tree fragment)":4
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__19, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 4, __pyx_L1_error)
-
- /* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":999
- *
- * @cname('__pyx_memoryview_fromslice')
- * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<<
- * int ndim,
- * object (*to_object_func)(char *),
- */
-
-static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice __pyx_v_memviewslice, int __pyx_v_ndim, PyObject *(*__pyx_v_to_object_func)(char *), int (*__pyx_v_to_dtype_func)(char *, PyObject *), int __pyx_v_dtype_is_object) {
- struct __pyx_memoryviewslice_obj *__pyx_v_result = 0;
- Py_ssize_t __pyx_v_suboffset;
- PyObject *__pyx_v_length = NULL;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- __Pyx_TypeInfo *__pyx_t_4;
- Py_buffer __pyx_t_5;
- Py_ssize_t *__pyx_t_6;
- Py_ssize_t *__pyx_t_7;
- Py_ssize_t *__pyx_t_8;
- Py_ssize_t __pyx_t_9;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("memoryview_fromslice", 0);
-
- /* "View.MemoryView":1007
- * cdef _memoryviewslice result
- *
- * if <PyObject *> memviewslice.memview == Py_None: # <<<<<<<<<<<<<<
- * return None
- *
- */
- __pyx_t_1 = ((((PyObject *)__pyx_v_memviewslice.memview) == Py_None) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1008
- *
- * if <PyObject *> memviewslice.memview == Py_None:
- * return None # <<<<<<<<<<<<<<
- *
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
-
- /* "View.MemoryView":1007
- * cdef _memoryviewslice result
- *
- * if <PyObject *> memviewslice.memview == Py_None: # <<<<<<<<<<<<<<
- * return None
- *
- */
- }
-
- /* "View.MemoryView":1013
- *
- *
- * result = _memoryviewslice(None, 0, dtype_is_object) # <<<<<<<<<<<<<<
- *
- * result.from_slice = memviewslice
- */
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1013, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_INCREF(Py_None);
- __Pyx_GIVEREF(Py_None);
- PyTuple_SET_ITEM(__pyx_t_3, 0, Py_None);
- __Pyx_INCREF(__pyx_int_0);
- __Pyx_GIVEREF(__pyx_int_0);
- PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2);
- __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryviewslice_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2);
- __pyx_t_2 = 0;
-
- /* "View.MemoryView":1015
- * result = _memoryviewslice(None, 0, dtype_is_object)
- *
- * result.from_slice = memviewslice # <<<<<<<<<<<<<<
- * __PYX_INC_MEMVIEW(&memviewslice, 1)
- *
- */
- __pyx_v_result->from_slice = __pyx_v_memviewslice;
-
- /* "View.MemoryView":1016
- *
- * result.from_slice = memviewslice
- * __PYX_INC_MEMVIEW(&memviewslice, 1) # <<<<<<<<<<<<<<
- *
- * result.from_object = (<memoryview> memviewslice.memview).base
- */
- __PYX_INC_MEMVIEW((&__pyx_v_memviewslice), 1);
-
- /* "View.MemoryView":1018
- * __PYX_INC_MEMVIEW(&memviewslice, 1)
- *
- * result.from_object = (<memoryview> memviewslice.memview).base # <<<<<<<<<<<<<<
- * result.typeinfo = memviewslice.memview.typeinfo
- *
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_memviewslice.memview), __pyx_n_s_base); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1018, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_GIVEREF(__pyx_t_2);
- __Pyx_GOTREF(__pyx_v_result->from_object);
- __Pyx_DECREF(__pyx_v_result->from_object);
- __pyx_v_result->from_object = __pyx_t_2;
- __pyx_t_2 = 0;
-
- /* "View.MemoryView":1019
- *
- * result.from_object = (<memoryview> memviewslice.memview).base
- * result.typeinfo = memviewslice.memview.typeinfo # <<<<<<<<<<<<<<
- *
- * result.view = memviewslice.memview.view
- */
- __pyx_t_4 = __pyx_v_memviewslice.memview->typeinfo;
- __pyx_v_result->__pyx_base.typeinfo = __pyx_t_4;
-
- /* "View.MemoryView":1021
- * result.typeinfo = memviewslice.memview.typeinfo
- *
- * result.view = memviewslice.memview.view # <<<<<<<<<<<<<<
- * result.view.buf = <void *> memviewslice.data
- * result.view.ndim = ndim
- */
- __pyx_t_5 = __pyx_v_memviewslice.memview->view;
- __pyx_v_result->__pyx_base.view = __pyx_t_5;
-
- /* "View.MemoryView":1022
- *
- * result.view = memviewslice.memview.view
- * result.view.buf = <void *> memviewslice.data # <<<<<<<<<<<<<<
- * result.view.ndim = ndim
- * (<__pyx_buffer *> &result.view).obj = Py_None
- */
- __pyx_v_result->__pyx_base.view.buf = ((void *)__pyx_v_memviewslice.data);
-
- /* "View.MemoryView":1023
- * result.view = memviewslice.memview.view
- * result.view.buf = <void *> memviewslice.data
- * result.view.ndim = ndim # <<<<<<<<<<<<<<
- * (<__pyx_buffer *> &result.view).obj = Py_None
- * Py_INCREF(Py_None)
- */
- __pyx_v_result->__pyx_base.view.ndim = __pyx_v_ndim;
-
- /* "View.MemoryView":1024
- * result.view.buf = <void *> memviewslice.data
- * result.view.ndim = ndim
- * (<__pyx_buffer *> &result.view).obj = Py_None # <<<<<<<<<<<<<<
- * Py_INCREF(Py_None)
- *
- */
- ((Py_buffer *)(&__pyx_v_result->__pyx_base.view))->obj = Py_None;
-
- /* "View.MemoryView":1025
- * result.view.ndim = ndim
- * (<__pyx_buffer *> &result.view).obj = Py_None
- * Py_INCREF(Py_None) # <<<<<<<<<<<<<<
- *
- * if (<memoryview>memviewslice.memview).flags & PyBUF_WRITABLE:
- */
- Py_INCREF(Py_None);
-
- /* "View.MemoryView":1027
- * Py_INCREF(Py_None)
- *
- * if (<memoryview>memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<<
- * result.flags = PyBUF_RECORDS
- * else:
- */
- __pyx_t_1 = ((((struct __pyx_memoryview_obj *)__pyx_v_memviewslice.memview)->flags & PyBUF_WRITABLE) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1028
- *
- * if (<memoryview>memviewslice.memview).flags & PyBUF_WRITABLE:
- * result.flags = PyBUF_RECORDS # <<<<<<<<<<<<<<
- * else:
- * result.flags = PyBUF_RECORDS_RO
- */
- __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS;
-
- /* "View.MemoryView":1027
- * Py_INCREF(Py_None)
- *
- * if (<memoryview>memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<<
- * result.flags = PyBUF_RECORDS
- * else:
- */
- goto __pyx_L4;
- }
-
- /* "View.MemoryView":1030
- * result.flags = PyBUF_RECORDS
- * else:
- * result.flags = PyBUF_RECORDS_RO # <<<<<<<<<<<<<<
- *
- * result.view.shape = <Py_ssize_t *> result.from_slice.shape
- */
- /*else*/ {
- __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS_RO;
- }
- __pyx_L4:;
-
- /* "View.MemoryView":1032
- * result.flags = PyBUF_RECORDS_RO
- *
- * result.view.shape = <Py_ssize_t *> result.from_slice.shape # <<<<<<<<<<<<<<
- * result.view.strides = <Py_ssize_t *> result.from_slice.strides
- *
- */
- __pyx_v_result->__pyx_base.view.shape = ((Py_ssize_t *)__pyx_v_result->from_slice.shape);
-
- /* "View.MemoryView":1033
- *
- * result.view.shape = <Py_ssize_t *> result.from_slice.shape
- * result.view.strides = <Py_ssize_t *> result.from_slice.strides # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_result->__pyx_base.view.strides = ((Py_ssize_t *)__pyx_v_result->from_slice.strides);
-
- /* "View.MemoryView":1036
- *
- *
- * result.view.suboffsets = NULL # <<<<<<<<<<<<<<
- * for suboffset in result.from_slice.suboffsets[:ndim]:
- * if suboffset >= 0:
- */
- __pyx_v_result->__pyx_base.view.suboffsets = NULL;
-
- /* "View.MemoryView":1037
- *
- * result.view.suboffsets = NULL
- * for suboffset in result.from_slice.suboffsets[:ndim]: # <<<<<<<<<<<<<<
- * if suboffset >= 0:
- * result.view.suboffsets = <Py_ssize_t *> result.from_slice.suboffsets
- */
- __pyx_t_7 = (__pyx_v_result->from_slice.suboffsets + __pyx_v_ndim);
- for (__pyx_t_8 = __pyx_v_result->from_slice.suboffsets; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) {
- __pyx_t_6 = __pyx_t_8;
- __pyx_v_suboffset = (__pyx_t_6[0]);
-
- /* "View.MemoryView":1038
- * result.view.suboffsets = NULL
- * for suboffset in result.from_slice.suboffsets[:ndim]:
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * result.view.suboffsets = <Py_ssize_t *> result.from_slice.suboffsets
- * break
- */
- __pyx_t_1 = ((__pyx_v_suboffset >= 0) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1039
- * for suboffset in result.from_slice.suboffsets[:ndim]:
- * if suboffset >= 0:
- * result.view.suboffsets = <Py_ssize_t *> result.from_slice.suboffsets # <<<<<<<<<<<<<<
- * break
- *
- */
- __pyx_v_result->__pyx_base.view.suboffsets = ((Py_ssize_t *)__pyx_v_result->from_slice.suboffsets);
-
- /* "View.MemoryView":1040
- * if suboffset >= 0:
- * result.view.suboffsets = <Py_ssize_t *> result.from_slice.suboffsets
- * break # <<<<<<<<<<<<<<
- *
- * result.view.len = result.view.itemsize
- */
- goto __pyx_L6_break;
-
- /* "View.MemoryView":1038
- * result.view.suboffsets = NULL
- * for suboffset in result.from_slice.suboffsets[:ndim]:
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * result.view.suboffsets = <Py_ssize_t *> result.from_slice.suboffsets
- * break
- */
- }
- }
- __pyx_L6_break:;
-
- /* "View.MemoryView":1042
- * break
- *
- * result.view.len = result.view.itemsize # <<<<<<<<<<<<<<
- * for length in result.view.shape[:ndim]:
- * result.view.len *= length
- */
- __pyx_t_9 = __pyx_v_result->__pyx_base.view.itemsize;
- __pyx_v_result->__pyx_base.view.len = __pyx_t_9;
-
- /* "View.MemoryView":1043
- *
- * result.view.len = result.view.itemsize
- * for length in result.view.shape[:ndim]: # <<<<<<<<<<<<<<
- * result.view.len *= length
- *
- */
- __pyx_t_7 = (__pyx_v_result->__pyx_base.view.shape + __pyx_v_ndim);
- for (__pyx_t_8 = __pyx_v_result->__pyx_base.view.shape; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) {
- __pyx_t_6 = __pyx_t_8;
- __pyx_t_2 = PyInt_FromSsize_t((__pyx_t_6[0])); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1043, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_2);
- __pyx_t_2 = 0;
-
- /* "View.MemoryView":1044
- * result.view.len = result.view.itemsize
- * for length in result.view.shape[:ndim]:
- * result.view.len *= length # <<<<<<<<<<<<<<
- *
- * result.to_object_func = to_object_func
- */
- __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_result->__pyx_base.view.len); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1044, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyNumber_InPlaceMultiply(__pyx_t_2, __pyx_v_length); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1044, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_3); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 1044, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_result->__pyx_base.view.len = __pyx_t_9;
- }
-
- /* "View.MemoryView":1046
- * result.view.len *= length
- *
- * result.to_object_func = to_object_func # <<<<<<<<<<<<<<
- * result.to_dtype_func = to_dtype_func
- *
- */
- __pyx_v_result->to_object_func = __pyx_v_to_object_func;
-
- /* "View.MemoryView":1047
- *
- * result.to_object_func = to_object_func
- * result.to_dtype_func = to_dtype_func # <<<<<<<<<<<<<<
- *
- * return result
- */
- __pyx_v_result->to_dtype_func = __pyx_v_to_dtype_func;
-
- /* "View.MemoryView":1049
- * result.to_dtype_func = to_dtype_func
- *
- * return result # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_get_slice_from_memoryview')
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(((PyObject *)__pyx_v_result));
- __pyx_r = ((PyObject *)__pyx_v_result);
- goto __pyx_L0;
-
- /* "View.MemoryView":999
- *
- * @cname('__pyx_memoryview_fromslice')
- * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<<
- * int ndim,
- * object (*to_object_func)(char *),
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview_fromslice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_result);
- __Pyx_XDECREF(__pyx_v_length);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":1052
- *
- * @cname('__pyx_memoryview_get_slice_from_memoryview')
- * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<<
- * __Pyx_memviewslice *mslice) except NULL:
- * cdef _memoryviewslice obj
- */
-
-static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_mslice) {
- struct __pyx_memoryviewslice_obj *__pyx_v_obj = 0;
- __Pyx_memviewslice *__pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("get_slice_from_memview", 0);
-
- /* "View.MemoryView":1055
- * __Pyx_memviewslice *mslice) except NULL:
- * cdef _memoryviewslice obj
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * obj = memview
- * return &obj.from_slice
- */
- __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1056
- * cdef _memoryviewslice obj
- * if isinstance(memview, _memoryviewslice):
- * obj = memview # <<<<<<<<<<<<<<
- * return &obj.from_slice
- * else:
- */
- if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 1056, __pyx_L1_error)
- __pyx_t_3 = ((PyObject *)__pyx_v_memview);
- __Pyx_INCREF(__pyx_t_3);
- __pyx_v_obj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3);
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":1057
- * if isinstance(memview, _memoryviewslice):
- * obj = memview
- * return &obj.from_slice # <<<<<<<<<<<<<<
- * else:
- * slice_copy(memview, mslice)
- */
- __pyx_r = (&__pyx_v_obj->from_slice);
- goto __pyx_L0;
-
- /* "View.MemoryView":1055
- * __Pyx_memviewslice *mslice) except NULL:
- * cdef _memoryviewslice obj
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * obj = memview
- * return &obj.from_slice
- */
- }
-
- /* "View.MemoryView":1059
- * return &obj.from_slice
- * else:
- * slice_copy(memview, mslice) # <<<<<<<<<<<<<<
- * return mslice
- *
- */
- /*else*/ {
- __pyx_memoryview_slice_copy(__pyx_v_memview, __pyx_v_mslice);
-
- /* "View.MemoryView":1060
- * else:
- * slice_copy(memview, mslice)
- * return mslice # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_slice_copy')
- */
- __pyx_r = __pyx_v_mslice;
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":1052
- *
- * @cname('__pyx_memoryview_get_slice_from_memoryview')
- * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<<
- * __Pyx_memviewslice *mslice) except NULL:
- * cdef _memoryviewslice obj
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.get_slice_from_memview", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_obj);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":1063
- *
- * @cname('__pyx_memoryview_slice_copy')
- * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<<
- * cdef int dim
- * cdef (Py_ssize_t*) shape, strides, suboffsets
- */
-
-static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_dst) {
- int __pyx_v_dim;
- Py_ssize_t *__pyx_v_shape;
- Py_ssize_t *__pyx_v_strides;
- Py_ssize_t *__pyx_v_suboffsets;
- __Pyx_RefNannyDeclarations
- Py_ssize_t *__pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_t_4;
- Py_ssize_t __pyx_t_5;
- __Pyx_RefNannySetupContext("slice_copy", 0);
-
- /* "View.MemoryView":1067
- * cdef (Py_ssize_t*) shape, strides, suboffsets
- *
- * shape = memview.view.shape # <<<<<<<<<<<<<<
- * strides = memview.view.strides
- * suboffsets = memview.view.suboffsets
- */
- __pyx_t_1 = __pyx_v_memview->view.shape;
- __pyx_v_shape = __pyx_t_1;
-
- /* "View.MemoryView":1068
- *
- * shape = memview.view.shape
- * strides = memview.view.strides # <<<<<<<<<<<<<<
- * suboffsets = memview.view.suboffsets
- *
- */
- __pyx_t_1 = __pyx_v_memview->view.strides;
- __pyx_v_strides = __pyx_t_1;
-
- /* "View.MemoryView":1069
- * shape = memview.view.shape
- * strides = memview.view.strides
- * suboffsets = memview.view.suboffsets # <<<<<<<<<<<<<<
- *
- * dst.memview = <__pyx_memoryview *> memview
- */
- __pyx_t_1 = __pyx_v_memview->view.suboffsets;
- __pyx_v_suboffsets = __pyx_t_1;
-
- /* "View.MemoryView":1071
- * suboffsets = memview.view.suboffsets
- *
- * dst.memview = <__pyx_memoryview *> memview # <<<<<<<<<<<<<<
- * dst.data = <char *> memview.view.buf
- *
- */
- __pyx_v_dst->memview = ((struct __pyx_memoryview_obj *)__pyx_v_memview);
-
- /* "View.MemoryView":1072
- *
- * dst.memview = <__pyx_memoryview *> memview
- * dst.data = <char *> memview.view.buf # <<<<<<<<<<<<<<
- *
- * for dim in range(memview.view.ndim):
- */
- __pyx_v_dst->data = ((char *)__pyx_v_memview->view.buf);
-
- /* "View.MemoryView":1074
- * dst.data = <char *> memview.view.buf
- *
- * for dim in range(memview.view.ndim): # <<<<<<<<<<<<<<
- * dst.shape[dim] = shape[dim]
- * dst.strides[dim] = strides[dim]
- */
- __pyx_t_2 = __pyx_v_memview->view.ndim;
- __pyx_t_3 = __pyx_t_2;
- for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {
- __pyx_v_dim = __pyx_t_4;
-
- /* "View.MemoryView":1075
- *
- * for dim in range(memview.view.ndim):
- * dst.shape[dim] = shape[dim] # <<<<<<<<<<<<<<
- * dst.strides[dim] = strides[dim]
- * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1
- */
- (__pyx_v_dst->shape[__pyx_v_dim]) = (__pyx_v_shape[__pyx_v_dim]);
-
- /* "View.MemoryView":1076
- * for dim in range(memview.view.ndim):
- * dst.shape[dim] = shape[dim]
- * dst.strides[dim] = strides[dim] # <<<<<<<<<<<<<<
- * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1
- *
- */
- (__pyx_v_dst->strides[__pyx_v_dim]) = (__pyx_v_strides[__pyx_v_dim]);
-
- /* "View.MemoryView":1077
- * dst.shape[dim] = shape[dim]
- * dst.strides[dim] = strides[dim]
- * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_copy_object')
- */
- if ((__pyx_v_suboffsets != 0)) {
- __pyx_t_5 = (__pyx_v_suboffsets[__pyx_v_dim]);
- } else {
- __pyx_t_5 = -1L;
- }
- (__pyx_v_dst->suboffsets[__pyx_v_dim]) = __pyx_t_5;
- }
-
- /* "View.MemoryView":1063
- *
- * @cname('__pyx_memoryview_slice_copy')
- * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<<
- * cdef int dim
- * cdef (Py_ssize_t*) shape, strides, suboffsets
- */
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-/* "View.MemoryView":1080
- *
- * @cname('__pyx_memoryview_copy_object')
- * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<<
- * "Create a new memoryview object"
- * cdef __Pyx_memviewslice memviewslice
- */
-
-static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *__pyx_v_memview) {
- __Pyx_memviewslice __pyx_v_memviewslice;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("memoryview_copy", 0);
-
- /* "View.MemoryView":1083
- * "Create a new memoryview object"
- * cdef __Pyx_memviewslice memviewslice
- * slice_copy(memview, &memviewslice) # <<<<<<<<<<<<<<
- * return memoryview_copy_from_slice(memview, &memviewslice)
- *
- */
- __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_memviewslice));
-
- /* "View.MemoryView":1084
- * cdef __Pyx_memviewslice memviewslice
- * slice_copy(memview, &memviewslice)
- * return memoryview_copy_from_slice(memview, &memviewslice) # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_copy_object_from_slice')
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __pyx_memoryview_copy_object_from_slice(__pyx_v_memview, (&__pyx_v_memviewslice)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1084, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":1080
- *
- * @cname('__pyx_memoryview_copy_object')
- * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<<
- * "Create a new memoryview object"
- * cdef __Pyx_memviewslice memviewslice
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview_copy", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":1087
- *
- * @cname('__pyx_memoryview_copy_object_from_slice')
- * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<<
- * """
- * Create a new memoryview object from a given memoryview object and slice.
- */
-
-static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_memviewslice) {
- PyObject *(*__pyx_v_to_object_func)(char *);
- int (*__pyx_v_to_dtype_func)(char *, PyObject *);
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *(*__pyx_t_3)(char *);
- int (*__pyx_t_4)(char *, PyObject *);
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("memoryview_copy_from_slice", 0);
-
- /* "View.MemoryView":1094
- * cdef int (*to_dtype_func)(char *, object) except 0
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * to_object_func = (<_memoryviewslice> memview).to_object_func
- * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func
- */
- __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1095
- *
- * if isinstance(memview, _memoryviewslice):
- * to_object_func = (<_memoryviewslice> memview).to_object_func # <<<<<<<<<<<<<<
- * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func
- * else:
- */
- __pyx_t_3 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_object_func;
- __pyx_v_to_object_func = __pyx_t_3;
-
- /* "View.MemoryView":1096
- * if isinstance(memview, _memoryviewslice):
- * to_object_func = (<_memoryviewslice> memview).to_object_func
- * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func # <<<<<<<<<<<<<<
- * else:
- * to_object_func = NULL
- */
- __pyx_t_4 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_dtype_func;
- __pyx_v_to_dtype_func = __pyx_t_4;
-
- /* "View.MemoryView":1094
- * cdef int (*to_dtype_func)(char *, object) except 0
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * to_object_func = (<_memoryviewslice> memview).to_object_func
- * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":1098
- * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func
- * else:
- * to_object_func = NULL # <<<<<<<<<<<<<<
- * to_dtype_func = NULL
- *
- */
- /*else*/ {
- __pyx_v_to_object_func = NULL;
-
- /* "View.MemoryView":1099
- * else:
- * to_object_func = NULL
- * to_dtype_func = NULL # <<<<<<<<<<<<<<
- *
- * return memoryview_fromslice(memviewslice[0], memview.view.ndim,
- */
- __pyx_v_to_dtype_func = NULL;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":1101
- * to_dtype_func = NULL
- *
- * return memoryview_fromslice(memviewslice[0], memview.view.ndim, # <<<<<<<<<<<<<<
- * to_object_func, to_dtype_func,
- * memview.dtype_is_object)
- */
- __Pyx_XDECREF(__pyx_r);
-
- /* "View.MemoryView":1103
- * return memoryview_fromslice(memviewslice[0], memview.view.ndim,
- * to_object_func, to_dtype_func,
- * memview.dtype_is_object) # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_5 = __pyx_memoryview_fromslice((__pyx_v_memviewslice[0]), __pyx_v_memview->view.ndim, __pyx_v_to_object_func, __pyx_v_to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 1101, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_r = __pyx_t_5;
- __pyx_t_5 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":1087
- *
- * @cname('__pyx_memoryview_copy_object_from_slice')
- * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<<
- * """
- * Create a new memoryview object from a given memoryview object and slice.
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.memoryview_copy_from_slice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":1109
- *
- *
- * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<<
- * if arg < 0:
- * return -arg
- */
-
-static Py_ssize_t abs_py_ssize_t(Py_ssize_t __pyx_v_arg) {
- Py_ssize_t __pyx_r;
- int __pyx_t_1;
-
- /* "View.MemoryView":1110
- *
- * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil:
- * if arg < 0: # <<<<<<<<<<<<<<
- * return -arg
- * else:
- */
- __pyx_t_1 = ((__pyx_v_arg < 0) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1111
- * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil:
- * if arg < 0:
- * return -arg # <<<<<<<<<<<<<<
- * else:
- * return arg
- */
- __pyx_r = (-__pyx_v_arg);
- goto __pyx_L0;
-
- /* "View.MemoryView":1110
- *
- * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil:
- * if arg < 0: # <<<<<<<<<<<<<<
- * return -arg
- * else:
- */
- }
-
- /* "View.MemoryView":1113
- * return -arg
- * else:
- * return arg # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_get_best_slice_order')
- */
- /*else*/ {
- __pyx_r = __pyx_v_arg;
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":1109
- *
- *
- * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<<
- * if arg < 0:
- * return -arg
- */
-
- /* function exit code */
- __pyx_L0:;
- return __pyx_r;
-}
-
-/* "View.MemoryView":1116
- *
- * @cname('__pyx_get_best_slice_order')
- * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<<
- * """
- * Figure out the best memory access order for a given slice.
- */
-
-static char __pyx_get_best_slice_order(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim) {
- int __pyx_v_i;
- Py_ssize_t __pyx_v_c_stride;
- Py_ssize_t __pyx_v_f_stride;
- char __pyx_r;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_t_4;
-
- /* "View.MemoryView":1121
- * """
- * cdef int i
- * cdef Py_ssize_t c_stride = 0 # <<<<<<<<<<<<<<
- * cdef Py_ssize_t f_stride = 0
- *
- */
- __pyx_v_c_stride = 0;
-
- /* "View.MemoryView":1122
- * cdef int i
- * cdef Py_ssize_t c_stride = 0
- * cdef Py_ssize_t f_stride = 0 # <<<<<<<<<<<<<<
- *
- * for i in range(ndim - 1, -1, -1):
- */
- __pyx_v_f_stride = 0;
-
- /* "View.MemoryView":1124
- * cdef Py_ssize_t f_stride = 0
- *
- * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<<
- * if mslice.shape[i] > 1:
- * c_stride = mslice.strides[i]
- */
- for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) {
- __pyx_v_i = __pyx_t_1;
-
- /* "View.MemoryView":1125
- *
- * for i in range(ndim - 1, -1, -1):
- * if mslice.shape[i] > 1: # <<<<<<<<<<<<<<
- * c_stride = mslice.strides[i]
- * break
- */
- __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1126
- * for i in range(ndim - 1, -1, -1):
- * if mslice.shape[i] > 1:
- * c_stride = mslice.strides[i] # <<<<<<<<<<<<<<
- * break
- *
- */
- __pyx_v_c_stride = (__pyx_v_mslice->strides[__pyx_v_i]);
-
- /* "View.MemoryView":1127
- * if mslice.shape[i] > 1:
- * c_stride = mslice.strides[i]
- * break # <<<<<<<<<<<<<<
- *
- * for i in range(ndim):
- */
- goto __pyx_L4_break;
-
- /* "View.MemoryView":1125
- *
- * for i in range(ndim - 1, -1, -1):
- * if mslice.shape[i] > 1: # <<<<<<<<<<<<<<
- * c_stride = mslice.strides[i]
- * break
- */
- }
- }
- __pyx_L4_break:;
-
- /* "View.MemoryView":1129
- * break
- *
- * for i in range(ndim): # <<<<<<<<<<<<<<
- * if mslice.shape[i] > 1:
- * f_stride = mslice.strides[i]
- */
- __pyx_t_1 = __pyx_v_ndim;
- __pyx_t_3 = __pyx_t_1;
- for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {
- __pyx_v_i = __pyx_t_4;
-
- /* "View.MemoryView":1130
- *
- * for i in range(ndim):
- * if mslice.shape[i] > 1: # <<<<<<<<<<<<<<
- * f_stride = mslice.strides[i]
- * break
- */
- __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1131
- * for i in range(ndim):
- * if mslice.shape[i] > 1:
- * f_stride = mslice.strides[i] # <<<<<<<<<<<<<<
- * break
- *
- */
- __pyx_v_f_stride = (__pyx_v_mslice->strides[__pyx_v_i]);
-
- /* "View.MemoryView":1132
- * if mslice.shape[i] > 1:
- * f_stride = mslice.strides[i]
- * break # <<<<<<<<<<<<<<
- *
- * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride):
- */
- goto __pyx_L7_break;
-
- /* "View.MemoryView":1130
- *
- * for i in range(ndim):
- * if mslice.shape[i] > 1: # <<<<<<<<<<<<<<
- * f_stride = mslice.strides[i]
- * break
- */
- }
- }
- __pyx_L7_break:;
-
- /* "View.MemoryView":1134
- * break
- *
- * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<<
- * return 'C'
- * else:
- */
- __pyx_t_2 = ((abs_py_ssize_t(__pyx_v_c_stride) <= abs_py_ssize_t(__pyx_v_f_stride)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1135
- *
- * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride):
- * return 'C' # <<<<<<<<<<<<<<
- * else:
- * return 'F'
- */
- __pyx_r = 'C';
- goto __pyx_L0;
-
- /* "View.MemoryView":1134
- * break
- *
- * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<<
- * return 'C'
- * else:
- */
- }
-
- /* "View.MemoryView":1137
- * return 'C'
- * else:
- * return 'F' # <<<<<<<<<<<<<<
- *
- * @cython.cdivision(True)
- */
- /*else*/ {
- __pyx_r = 'F';
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":1116
- *
- * @cname('__pyx_get_best_slice_order')
- * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<<
- * """
- * Figure out the best memory access order for a given slice.
- */
-
- /* function exit code */
- __pyx_L0:;
- return __pyx_r;
-}
-
-/* "View.MemoryView":1140
- *
- * @cython.cdivision(True)
- * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<<
- * char *dst_data, Py_ssize_t *dst_strides,
- * Py_ssize_t *src_shape, Py_ssize_t *dst_shape,
- */
-
-static void _copy_strided_to_strided(char *__pyx_v_src_data, Py_ssize_t *__pyx_v_src_strides, char *__pyx_v_dst_data, Py_ssize_t *__pyx_v_dst_strides, Py_ssize_t *__pyx_v_src_shape, Py_ssize_t *__pyx_v_dst_shape, int __pyx_v_ndim, size_t __pyx_v_itemsize) {
- CYTHON_UNUSED Py_ssize_t __pyx_v_i;
- CYTHON_UNUSED Py_ssize_t __pyx_v_src_extent;
- Py_ssize_t __pyx_v_dst_extent;
- Py_ssize_t __pyx_v_src_stride;
- Py_ssize_t __pyx_v_dst_stride;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- Py_ssize_t __pyx_t_4;
- Py_ssize_t __pyx_t_5;
- Py_ssize_t __pyx_t_6;
-
- /* "View.MemoryView":1147
- *
- * cdef Py_ssize_t i
- * cdef Py_ssize_t src_extent = src_shape[0] # <<<<<<<<<<<<<<
- * cdef Py_ssize_t dst_extent = dst_shape[0]
- * cdef Py_ssize_t src_stride = src_strides[0]
- */
- __pyx_v_src_extent = (__pyx_v_src_shape[0]);
-
- /* "View.MemoryView":1148
- * cdef Py_ssize_t i
- * cdef Py_ssize_t src_extent = src_shape[0]
- * cdef Py_ssize_t dst_extent = dst_shape[0] # <<<<<<<<<<<<<<
- * cdef Py_ssize_t src_stride = src_strides[0]
- * cdef Py_ssize_t dst_stride = dst_strides[0]
- */
- __pyx_v_dst_extent = (__pyx_v_dst_shape[0]);
-
- /* "View.MemoryView":1149
- * cdef Py_ssize_t src_extent = src_shape[0]
- * cdef Py_ssize_t dst_extent = dst_shape[0]
- * cdef Py_ssize_t src_stride = src_strides[0] # <<<<<<<<<<<<<<
- * cdef Py_ssize_t dst_stride = dst_strides[0]
- *
- */
- __pyx_v_src_stride = (__pyx_v_src_strides[0]);
-
- /* "View.MemoryView":1150
- * cdef Py_ssize_t dst_extent = dst_shape[0]
- * cdef Py_ssize_t src_stride = src_strides[0]
- * cdef Py_ssize_t dst_stride = dst_strides[0] # <<<<<<<<<<<<<<
- *
- * if ndim == 1:
- */
- __pyx_v_dst_stride = (__pyx_v_dst_strides[0]);
-
- /* "View.MemoryView":1152
- * cdef Py_ssize_t dst_stride = dst_strides[0]
- *
- * if ndim == 1: # <<<<<<<<<<<<<<
- * if (src_stride > 0 and dst_stride > 0 and
- * <size_t> src_stride == itemsize == <size_t> dst_stride):
- */
- __pyx_t_1 = ((__pyx_v_ndim == 1) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1153
- *
- * if ndim == 1:
- * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<<
- * <size_t> src_stride == itemsize == <size_t> dst_stride):
- * memcpy(dst_data, src_data, itemsize * dst_extent)
- */
- __pyx_t_2 = ((__pyx_v_src_stride > 0) != 0);
- if (__pyx_t_2) {
- } else {
- __pyx_t_1 = __pyx_t_2;
- goto __pyx_L5_bool_binop_done;
- }
- __pyx_t_2 = ((__pyx_v_dst_stride > 0) != 0);
- if (__pyx_t_2) {
- } else {
- __pyx_t_1 = __pyx_t_2;
- goto __pyx_L5_bool_binop_done;
- }
-
- /* "View.MemoryView":1154
- * if ndim == 1:
- * if (src_stride > 0 and dst_stride > 0 and
- * <size_t> src_stride == itemsize == <size_t> dst_stride): # <<<<<<<<<<<<<<
- * memcpy(dst_data, src_data, itemsize * dst_extent)
- * else:
- */
- __pyx_t_2 = (((size_t)__pyx_v_src_stride) == __pyx_v_itemsize);
- if (__pyx_t_2) {
- __pyx_t_2 = (__pyx_v_itemsize == ((size_t)__pyx_v_dst_stride));
- }
- __pyx_t_3 = (__pyx_t_2 != 0);
- __pyx_t_1 = __pyx_t_3;
- __pyx_L5_bool_binop_done:;
-
- /* "View.MemoryView":1153
- *
- * if ndim == 1:
- * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<<
- * <size_t> src_stride == itemsize == <size_t> dst_stride):
- * memcpy(dst_data, src_data, itemsize * dst_extent)
- */
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1155
- * if (src_stride > 0 and dst_stride > 0 and
- * <size_t> src_stride == itemsize == <size_t> dst_stride):
- * memcpy(dst_data, src_data, itemsize * dst_extent) # <<<<<<<<<<<<<<
- * else:
- * for i in range(dst_extent):
- */
- (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, (__pyx_v_itemsize * __pyx_v_dst_extent)));
-
- /* "View.MemoryView":1153
- *
- * if ndim == 1:
- * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<<
- * <size_t> src_stride == itemsize == <size_t> dst_stride):
- * memcpy(dst_data, src_data, itemsize * dst_extent)
- */
- goto __pyx_L4;
- }
-
- /* "View.MemoryView":1157
- * memcpy(dst_data, src_data, itemsize * dst_extent)
- * else:
- * for i in range(dst_extent): # <<<<<<<<<<<<<<
- * memcpy(dst_data, src_data, itemsize)
- * src_data += src_stride
- */
- /*else*/ {
- __pyx_t_4 = __pyx_v_dst_extent;
- __pyx_t_5 = __pyx_t_4;
- for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) {
- __pyx_v_i = __pyx_t_6;
-
- /* "View.MemoryView":1158
- * else:
- * for i in range(dst_extent):
- * memcpy(dst_data, src_data, itemsize) # <<<<<<<<<<<<<<
- * src_data += src_stride
- * dst_data += dst_stride
- */
- (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, __pyx_v_itemsize));
-
- /* "View.MemoryView":1159
- * for i in range(dst_extent):
- * memcpy(dst_data, src_data, itemsize)
- * src_data += src_stride # <<<<<<<<<<<<<<
- * dst_data += dst_stride
- * else:
- */
- __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride);
-
- /* "View.MemoryView":1160
- * memcpy(dst_data, src_data, itemsize)
- * src_data += src_stride
- * dst_data += dst_stride # <<<<<<<<<<<<<<
- * else:
- * for i in range(dst_extent):
- */
- __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride);
- }
- }
- __pyx_L4:;
-
- /* "View.MemoryView":1152
- * cdef Py_ssize_t dst_stride = dst_strides[0]
- *
- * if ndim == 1: # <<<<<<<<<<<<<<
- * if (src_stride > 0 and dst_stride > 0 and
- * <size_t> src_stride == itemsize == <size_t> dst_stride):
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":1162
- * dst_data += dst_stride
- * else:
- * for i in range(dst_extent): # <<<<<<<<<<<<<<
- * _copy_strided_to_strided(src_data, src_strides + 1,
- * dst_data, dst_strides + 1,
- */
- /*else*/ {
- __pyx_t_4 = __pyx_v_dst_extent;
- __pyx_t_5 = __pyx_t_4;
- for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) {
- __pyx_v_i = __pyx_t_6;
-
- /* "View.MemoryView":1163
- * else:
- * for i in range(dst_extent):
- * _copy_strided_to_strided(src_data, src_strides + 1, # <<<<<<<<<<<<<<
- * dst_data, dst_strides + 1,
- * src_shape + 1, dst_shape + 1,
- */
- _copy_strided_to_strided(__pyx_v_src_data, (__pyx_v_src_strides + 1), __pyx_v_dst_data, (__pyx_v_dst_strides + 1), (__pyx_v_src_shape + 1), (__pyx_v_dst_shape + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize);
-
- /* "View.MemoryView":1167
- * src_shape + 1, dst_shape + 1,
- * ndim - 1, itemsize)
- * src_data += src_stride # <<<<<<<<<<<<<<
- * dst_data += dst_stride
- *
- */
- __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride);
-
- /* "View.MemoryView":1168
- * ndim - 1, itemsize)
- * src_data += src_stride
- * dst_data += dst_stride # <<<<<<<<<<<<<<
- *
- * cdef void copy_strided_to_strided(__Pyx_memviewslice *src,
- */
- __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride);
- }
- }
- __pyx_L3:;
-
- /* "View.MemoryView":1140
- *
- * @cython.cdivision(True)
- * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<<
- * char *dst_data, Py_ssize_t *dst_strides,
- * Py_ssize_t *src_shape, Py_ssize_t *dst_shape,
- */
-
- /* function exit code */
-}
-
-/* "View.MemoryView":1170
- * dst_data += dst_stride
- *
- * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<<
- * __Pyx_memviewslice *dst,
- * int ndim, size_t itemsize) nogil:
- */
-
-static void copy_strided_to_strided(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize) {
-
- /* "View.MemoryView":1173
- * __Pyx_memviewslice *dst,
- * int ndim, size_t itemsize) nogil:
- * _copy_strided_to_strided(src.data, src.strides, dst.data, dst.strides, # <<<<<<<<<<<<<<
- * src.shape, dst.shape, ndim, itemsize)
- *
- */
- _copy_strided_to_strided(__pyx_v_src->data, __pyx_v_src->strides, __pyx_v_dst->data, __pyx_v_dst->strides, __pyx_v_src->shape, __pyx_v_dst->shape, __pyx_v_ndim, __pyx_v_itemsize);
-
- /* "View.MemoryView":1170
- * dst_data += dst_stride
- *
- * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<<
- * __Pyx_memviewslice *dst,
- * int ndim, size_t itemsize) nogil:
- */
-
- /* function exit code */
-}
-
-/* "View.MemoryView":1177
- *
- * @cname('__pyx_memoryview_slice_get_size')
- * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: # <<<<<<<<<<<<<<
- * "Return the size of the memory occupied by the slice in number of bytes"
- * cdef Py_ssize_t shape, size = src.memview.view.itemsize
- */
-
-static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *__pyx_v_src, int __pyx_v_ndim) {
- Py_ssize_t __pyx_v_shape;
- Py_ssize_t __pyx_v_size;
- Py_ssize_t __pyx_r;
- Py_ssize_t __pyx_t_1;
- Py_ssize_t *__pyx_t_2;
- Py_ssize_t *__pyx_t_3;
- Py_ssize_t *__pyx_t_4;
-
- /* "View.MemoryView":1179
- * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil:
- * "Return the size of the memory occupied by the slice in number of bytes"
- * cdef Py_ssize_t shape, size = src.memview.view.itemsize # <<<<<<<<<<<<<<
- *
- * for shape in src.shape[:ndim]:
- */
- __pyx_t_1 = __pyx_v_src->memview->view.itemsize;
- __pyx_v_size = __pyx_t_1;
-
- /* "View.MemoryView":1181
- * cdef Py_ssize_t shape, size = src.memview.view.itemsize
- *
- * for shape in src.shape[:ndim]: # <<<<<<<<<<<<<<
- * size *= shape
- *
- */
- __pyx_t_3 = (__pyx_v_src->shape + __pyx_v_ndim);
- for (__pyx_t_4 = __pyx_v_src->shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) {
- __pyx_t_2 = __pyx_t_4;
- __pyx_v_shape = (__pyx_t_2[0]);
-
- /* "View.MemoryView":1182
- *
- * for shape in src.shape[:ndim]:
- * size *= shape # <<<<<<<<<<<<<<
- *
- * return size
- */
- __pyx_v_size = (__pyx_v_size * __pyx_v_shape);
- }
-
- /* "View.MemoryView":1184
- * size *= shape
- *
- * return size # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_fill_contig_strides_array')
- */
- __pyx_r = __pyx_v_size;
- goto __pyx_L0;
-
- /* "View.MemoryView":1177
- *
- * @cname('__pyx_memoryview_slice_get_size')
- * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: # <<<<<<<<<<<<<<
- * "Return the size of the memory occupied by the slice in number of bytes"
- * cdef Py_ssize_t shape, size = src.memview.view.itemsize
- */
-
- /* function exit code */
- __pyx_L0:;
- return __pyx_r;
-}
-
-/* "View.MemoryView":1187
- *
- * @cname('__pyx_fill_contig_strides_array')
- * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<<
- * Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride,
- * int ndim, char order) nogil:
- */
-
-static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, Py_ssize_t __pyx_v_stride, int __pyx_v_ndim, char __pyx_v_order) {
- int __pyx_v_idx;
- Py_ssize_t __pyx_r;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_t_4;
-
- /* "View.MemoryView":1196
- * cdef int idx
- *
- * if order == 'F': # <<<<<<<<<<<<<<
- * for idx in range(ndim):
- * strides[idx] = stride
- */
- __pyx_t_1 = ((__pyx_v_order == 'F') != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1197
- *
- * if order == 'F':
- * for idx in range(ndim): # <<<<<<<<<<<<<<
- * strides[idx] = stride
- * stride *= shape[idx]
- */
- __pyx_t_2 = __pyx_v_ndim;
- __pyx_t_3 = __pyx_t_2;
- for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {
- __pyx_v_idx = __pyx_t_4;
-
- /* "View.MemoryView":1198
- * if order == 'F':
- * for idx in range(ndim):
- * strides[idx] = stride # <<<<<<<<<<<<<<
- * stride *= shape[idx]
- * else:
- */
- (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride;
-
- /* "View.MemoryView":1199
- * for idx in range(ndim):
- * strides[idx] = stride
- * stride *= shape[idx] # <<<<<<<<<<<<<<
- * else:
- * for idx in range(ndim - 1, -1, -1):
- */
- __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx]));
- }
-
- /* "View.MemoryView":1196
- * cdef int idx
- *
- * if order == 'F': # <<<<<<<<<<<<<<
- * for idx in range(ndim):
- * strides[idx] = stride
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":1201
- * stride *= shape[idx]
- * else:
- * for idx in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<<
- * strides[idx] = stride
- * stride *= shape[idx]
- */
- /*else*/ {
- for (__pyx_t_2 = (__pyx_v_ndim - 1); __pyx_t_2 > -1; __pyx_t_2-=1) {
- __pyx_v_idx = __pyx_t_2;
-
- /* "View.MemoryView":1202
- * else:
- * for idx in range(ndim - 1, -1, -1):
- * strides[idx] = stride # <<<<<<<<<<<<<<
- * stride *= shape[idx]
- *
- */
- (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride;
-
- /* "View.MemoryView":1203
- * for idx in range(ndim - 1, -1, -1):
- * strides[idx] = stride
- * stride *= shape[idx] # <<<<<<<<<<<<<<
- *
- * return stride
- */
- __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx]));
- }
- }
- __pyx_L3:;
-
- /* "View.MemoryView":1205
- * stride *= shape[idx]
- *
- * return stride # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_copy_data_to_temp')
- */
- __pyx_r = __pyx_v_stride;
- goto __pyx_L0;
-
- /* "View.MemoryView":1187
- *
- * @cname('__pyx_fill_contig_strides_array')
- * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<<
- * Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride,
- * int ndim, char order) nogil:
- */
-
- /* function exit code */
- __pyx_L0:;
- return __pyx_r;
-}
-
-/* "View.MemoryView":1208
- *
- * @cname('__pyx_memoryview_copy_data_to_temp')
- * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<<
- * __Pyx_memviewslice *tmpslice,
- * char order,
- */
-
-static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_tmpslice, char __pyx_v_order, int __pyx_v_ndim) {
- int __pyx_v_i;
- void *__pyx_v_result;
- size_t __pyx_v_itemsize;
- size_t __pyx_v_size;
- void *__pyx_r;
- Py_ssize_t __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- struct __pyx_memoryview_obj *__pyx_t_4;
- int __pyx_t_5;
- int __pyx_t_6;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
-
- /* "View.MemoryView":1219
- * cdef void *result
- *
- * cdef size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<<
- * cdef size_t size = slice_get_size(src, ndim)
- *
- */
- __pyx_t_1 = __pyx_v_src->memview->view.itemsize;
- __pyx_v_itemsize = __pyx_t_1;
-
- /* "View.MemoryView":1220
- *
- * cdef size_t itemsize = src.memview.view.itemsize
- * cdef size_t size = slice_get_size(src, ndim) # <<<<<<<<<<<<<<
- *
- * result = malloc(size)
- */
- __pyx_v_size = __pyx_memoryview_slice_get_size(__pyx_v_src, __pyx_v_ndim);
-
- /* "View.MemoryView":1222
- * cdef size_t size = slice_get_size(src, ndim)
- *
- * result = malloc(size) # <<<<<<<<<<<<<<
- * if not result:
- * _err(MemoryError, NULL)
- */
- __pyx_v_result = malloc(__pyx_v_size);
-
- /* "View.MemoryView":1223
- *
- * result = malloc(size)
- * if not result: # <<<<<<<<<<<<<<
- * _err(MemoryError, NULL)
- *
- */
- __pyx_t_2 = ((!(__pyx_v_result != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1224
- * result = malloc(size)
- * if not result:
- * _err(MemoryError, NULL) # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_3 = __pyx_memoryview_err(__pyx_builtin_MemoryError, NULL); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 1224, __pyx_L1_error)
-
- /* "View.MemoryView":1223
- *
- * result = malloc(size)
- * if not result: # <<<<<<<<<<<<<<
- * _err(MemoryError, NULL)
- *
- */
- }
-
- /* "View.MemoryView":1227
- *
- *
- * tmpslice.data = <char *> result # <<<<<<<<<<<<<<
- * tmpslice.memview = src.memview
- * for i in range(ndim):
- */
- __pyx_v_tmpslice->data = ((char *)__pyx_v_result);
-
- /* "View.MemoryView":1228
- *
- * tmpslice.data = <char *> result
- * tmpslice.memview = src.memview # <<<<<<<<<<<<<<
- * for i in range(ndim):
- * tmpslice.shape[i] = src.shape[i]
- */
- __pyx_t_4 = __pyx_v_src->memview;
- __pyx_v_tmpslice->memview = __pyx_t_4;
-
- /* "View.MemoryView":1229
- * tmpslice.data = result
- * tmpslice.memview = src.memview
- * for i in range(ndim): # <<<<<<<<<<<<<<
- * tmpslice.shape[i] = src.shape[i]
- * tmpslice.suboffsets[i] = -1
- */
- __pyx_t_3 = __pyx_v_ndim;
- __pyx_t_5 = __pyx_t_3;
- for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) {
- __pyx_v_i = __pyx_t_6;
-
- /* "View.MemoryView":1230
- * tmpslice.memview = src.memview
- * for i in range(ndim):
- * tmpslice.shape[i] = src.shape[i] # <<<<<<<<<<<<<<
- * tmpslice.suboffsets[i] = -1
- *
- */
- (__pyx_v_tmpslice->shape[__pyx_v_i]) = (__pyx_v_src->shape[__pyx_v_i]);
-
- /* "View.MemoryView":1231
- * for i in range(ndim):
- * tmpslice.shape[i] = src.shape[i]
- * tmpslice.suboffsets[i] = -1 # <<<<<<<<<<<<<<
- *
- * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize,
- */
- (__pyx_v_tmpslice->suboffsets[__pyx_v_i]) = -1L;
- }
-
- /* "View.MemoryView":1233
- * tmpslice.suboffsets[i] = -1
- *
- * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, # <<<<<<<<<<<<<<
- * ndim, order)
- *
- */
- (void)(__pyx_fill_contig_strides_array((&(__pyx_v_tmpslice->shape[0])), (&(__pyx_v_tmpslice->strides[0])), __pyx_v_itemsize, __pyx_v_ndim, __pyx_v_order));
-
- /* "View.MemoryView":1237
- *
- *
- * for i in range(ndim): # <<<<<<<<<<<<<<
- * if tmpslice.shape[i] == 1:
- * tmpslice.strides[i] = 0
- */
- __pyx_t_3 = __pyx_v_ndim;
- __pyx_t_5 = __pyx_t_3;
- for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) {
- __pyx_v_i = __pyx_t_6;
-
- /* "View.MemoryView":1238
- *
- * for i in range(ndim):
- * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<<
- * tmpslice.strides[i] = 0
- *
- */
- __pyx_t_2 = (((__pyx_v_tmpslice->shape[__pyx_v_i]) == 1) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1239
- * for i in range(ndim):
- * if tmpslice.shape[i] == 1:
- * tmpslice.strides[i] = 0 # <<<<<<<<<<<<<<
- *
- * if slice_is_contig(src[0], order, ndim):
- */
- (__pyx_v_tmpslice->strides[__pyx_v_i]) = 0;
-
- /* "View.MemoryView":1238
- *
- * for i in range(ndim):
- * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<<
- * tmpslice.strides[i] = 0
- *
- */
- }
- }
-
- /* "View.MemoryView":1241
- * tmpslice.strides[i] = 0
- *
- * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<<
- * memcpy(result, src.data, size)
- * else:
- */
- __pyx_t_2 = (__pyx_memviewslice_is_contig((__pyx_v_src[0]), __pyx_v_order, __pyx_v_ndim) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1242
- *
- * if slice_is_contig(src[0], order, ndim):
- * memcpy(result, src.data, size) # <<<<<<<<<<<<<<
- * else:
- * copy_strided_to_strided(src, tmpslice, ndim, itemsize)
- */
- (void)(memcpy(__pyx_v_result, __pyx_v_src->data, __pyx_v_size));
-
- /* "View.MemoryView":1241
- * tmpslice.strides[i] = 0
- *
- * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<<
- * memcpy(result, src.data, size)
- * else:
- */
- goto __pyx_L9;
- }
-
- /* "View.MemoryView":1244
- * memcpy(result, src.data, size)
- * else:
- * copy_strided_to_strided(src, tmpslice, ndim, itemsize) # <<<<<<<<<<<<<<
- *
- * return result
- */
- /*else*/ {
- copy_strided_to_strided(__pyx_v_src, __pyx_v_tmpslice, __pyx_v_ndim, __pyx_v_itemsize);
- }
- __pyx_L9:;
-
- /* "View.MemoryView":1246
- * copy_strided_to_strided(src, tmpslice, ndim, itemsize)
- *
- * return result # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = __pyx_v_result;
- goto __pyx_L0;
-
- /* "View.MemoryView":1208
- *
- * @cname('__pyx_memoryview_copy_data_to_temp')
- * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<<
- * __Pyx_memviewslice *tmpslice,
- * char order,
- */
-
- /* function exit code */
- __pyx_L1_error:;
- {
- #ifdef WITH_THREAD
- PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure();
- #endif
- __Pyx_AddTraceback("View.MemoryView.copy_data_to_temp", __pyx_clineno, __pyx_lineno, __pyx_filename);
- #ifdef WITH_THREAD
- __Pyx_PyGILState_Release(__pyx_gilstate_save);
- #endif
- }
- __pyx_r = NULL;
- __pyx_L0:;
- return __pyx_r;
-}
-
-/* "View.MemoryView":1251
- *
- * @cname('__pyx_memoryview_err_extents')
- * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<<
- * Py_ssize_t extent2) except -1 with gil:
- * raise ValueError("got differing extents in dimension %d (got %d and %d)" %
- */
-
-static int __pyx_memoryview_err_extents(int __pyx_v_i, Py_ssize_t __pyx_v_extent1, Py_ssize_t __pyx_v_extent2) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- #ifdef WITH_THREAD
- PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure();
- #endif
- __Pyx_RefNannySetupContext("_err_extents", 0);
-
- /* "View.MemoryView":1254
- * Py_ssize_t extent2) except -1 with gil:
- * raise ValueError("got differing extents in dimension %d (got %d and %d)" %
- * (i, extent1, extent2)) # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_err_dim')
- */
- __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_i); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1254, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_extent1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1254, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_extent2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1254, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1254, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_2);
- __Pyx_GIVEREF(__pyx_t_3);
- PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_3);
- __pyx_t_1 = 0;
- __pyx_t_2 = 0;
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":1253
- * cdef int _err_extents(int i, Py_ssize_t extent1,
- * Py_ssize_t extent2) except -1 with gil:
- * raise ValueError("got differing extents in dimension %d (got %d and %d)" % # <<<<<<<<<<<<<<
- * (i, extent1, extent2))
- *
- */
- __pyx_t_3 = __Pyx_PyString_Format(__pyx_kp_s_got_differing_extents_in_dimensi, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1253, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1253, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_Raise(__pyx_t_4, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __PYX_ERR(1, 1253, __pyx_L1_error)
-
- /* "View.MemoryView":1251
- *
- * @cname('__pyx_memoryview_err_extents')
- * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<<
- * Py_ssize_t extent2) except -1 with gil:
- * raise ValueError("got differing extents in dimension %d (got %d and %d)" %
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_AddTraceback("View.MemoryView._err_extents", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __Pyx_RefNannyFinishContext();
- #ifdef WITH_THREAD
- __Pyx_PyGILState_Release(__pyx_gilstate_save);
- #endif
- return __pyx_r;
-}
-
-/* "View.MemoryView":1257
- *
- * @cname('__pyx_memoryview_err_dim')
- * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: # <<<<<<<<<<<<<<
- * raise error(msg.decode('ascii') % dim)
- *
- */
-
-static int __pyx_memoryview_err_dim(PyObject *__pyx_v_error, char *__pyx_v_msg, int __pyx_v_dim) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- #ifdef WITH_THREAD
- PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure();
- #endif
- __Pyx_RefNannySetupContext("_err_dim", 0);
- __Pyx_INCREF(__pyx_v_error);
-
- /* "View.MemoryView":1258
- * @cname('__pyx_memoryview_err_dim')
- * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil:
- * raise error(msg.decode('ascii') % dim) # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_err')
- */
- __pyx_t_2 = __Pyx_decode_c_string(__pyx_v_msg, 0, strlen(__pyx_v_msg), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1258, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1258, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = PyUnicode_Format(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1258, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_INCREF(__pyx_v_error);
- __pyx_t_3 = __pyx_v_error; __pyx_t_2 = NULL;
- if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {
- __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3);
- if (likely(__pyx_t_2)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
- __Pyx_INCREF(__pyx_t_2);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_3, function);
- }
- }
- __pyx_t_1 = (__pyx_t_2) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_2, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_4);
- __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1258, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 1258, __pyx_L1_error)
-
- /* "View.MemoryView":1257
- *
- * @cname('__pyx_memoryview_err_dim')
- * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: # <<<<<<<<<<<<<<
- * raise error(msg.decode('ascii') % dim)
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_AddTraceback("View.MemoryView._err_dim", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __Pyx_XDECREF(__pyx_v_error);
- __Pyx_RefNannyFinishContext();
- #ifdef WITH_THREAD
- __Pyx_PyGILState_Release(__pyx_gilstate_save);
- #endif
- return __pyx_r;
-}
-
-/* "View.MemoryView":1261
- *
- * @cname('__pyx_memoryview_err')
- * cdef int _err(object error, char *msg) except -1 with gil: # <<<<<<<<<<<<<<
- * if msg != NULL:
- * raise error(msg.decode('ascii'))
- */
-
-static int __pyx_memoryview_err(PyObject *__pyx_v_error, char *__pyx_v_msg) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- #ifdef WITH_THREAD
- PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure();
- #endif
- __Pyx_RefNannySetupContext("_err", 0);
- __Pyx_INCREF(__pyx_v_error);
-
- /* "View.MemoryView":1262
- * @cname('__pyx_memoryview_err')
- * cdef int _err(object error, char *msg) except -1 with gil:
- * if msg != NULL: # <<<<<<<<<<<<<<
- * raise error(msg.decode('ascii'))
- * else:
- */
- __pyx_t_1 = ((__pyx_v_msg != NULL) != 0);
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":1263
- * cdef int _err(object error, char *msg) except -1 with gil:
- * if msg != NULL:
- * raise error(msg.decode('ascii')) # <<<<<<<<<<<<<<
- * else:
- * raise error
- */
- __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_msg, 0, strlen(__pyx_v_msg), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1263, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_INCREF(__pyx_v_error);
- __pyx_t_4 = __pyx_v_error; __pyx_t_5 = NULL;
- if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) {
- __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4);
- if (likely(__pyx_t_5)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
- __Pyx_INCREF(__pyx_t_5);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_4, function);
- }
- }
- __pyx_t_2 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_t_3) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_3);
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1263, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_Raise(__pyx_t_2, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __PYX_ERR(1, 1263, __pyx_L1_error)
-
- /* "View.MemoryView":1262
- * @cname('__pyx_memoryview_err')
- * cdef int _err(object error, char *msg) except -1 with gil:
- * if msg != NULL: # <<<<<<<<<<<<<<
- * raise error(msg.decode('ascii'))
- * else:
- */
- }
-
- /* "View.MemoryView":1265
- * raise error(msg.decode('ascii'))
- * else:
- * raise error # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_copy_contents')
- */
- /*else*/ {
- __Pyx_Raise(__pyx_v_error, 0, 0, 0);
- __PYX_ERR(1, 1265, __pyx_L1_error)
- }
-
- /* "View.MemoryView":1261
- *
- * @cname('__pyx_memoryview_err')
- * cdef int _err(object error, char *msg) except -1 with gil: # <<<<<<<<<<<<<<
- * if msg != NULL:
- * raise error(msg.decode('ascii'))
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView._err", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __Pyx_XDECREF(__pyx_v_error);
- __Pyx_RefNannyFinishContext();
- #ifdef WITH_THREAD
- __Pyx_PyGILState_Release(__pyx_gilstate_save);
- #endif
- return __pyx_r;
-}
-
-/* "View.MemoryView":1268
- *
- * @cname('__pyx_memoryview_copy_contents')
- * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<<
- * __Pyx_memviewslice dst,
- * int src_ndim, int dst_ndim,
- */
-
-static int __pyx_memoryview_copy_contents(__Pyx_memviewslice __pyx_v_src, __Pyx_memviewslice __pyx_v_dst, int __pyx_v_src_ndim, int __pyx_v_dst_ndim, int __pyx_v_dtype_is_object) {
- void *__pyx_v_tmpdata;
- size_t __pyx_v_itemsize;
- int __pyx_v_i;
- char __pyx_v_order;
- int __pyx_v_broadcasting;
- int __pyx_v_direct_copy;
- __Pyx_memviewslice __pyx_v_tmp;
- int __pyx_v_ndim;
- int __pyx_r;
- Py_ssize_t __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_t_4;
- int __pyx_t_5;
- int __pyx_t_6;
- void *__pyx_t_7;
- int __pyx_t_8;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
-
- /* "View.MemoryView":1276
- * Check for overlapping memory and verify the shapes.
- * """
- * cdef void *tmpdata = NULL # <<<<<<<<<<<<<<
- * cdef size_t itemsize = src.memview.view.itemsize
- * cdef int i
- */
- __pyx_v_tmpdata = NULL;
-
- /* "View.MemoryView":1277
- * """
- * cdef void *tmpdata = NULL
- * cdef size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<<
- * cdef int i
- * cdef char order = get_best_order(&src, src_ndim)
- */
- __pyx_t_1 = __pyx_v_src.memview->view.itemsize;
- __pyx_v_itemsize = __pyx_t_1;
-
- /* "View.MemoryView":1279
- * cdef size_t itemsize = src.memview.view.itemsize
- * cdef int i
- * cdef char order = get_best_order(&src, src_ndim) # <<<<<<<<<<<<<<
- * cdef bint broadcasting = False
- * cdef bint direct_copy = False
- */
- __pyx_v_order = __pyx_get_best_slice_order((&__pyx_v_src), __pyx_v_src_ndim);
-
- /* "View.MemoryView":1280
- * cdef int i
- * cdef char order = get_best_order(&src, src_ndim)
- * cdef bint broadcasting = False # <<<<<<<<<<<<<<
- * cdef bint direct_copy = False
- * cdef __Pyx_memviewslice tmp
- */
- __pyx_v_broadcasting = 0;
-
- /* "View.MemoryView":1281
- * cdef char order = get_best_order(&src, src_ndim)
- * cdef bint broadcasting = False
- * cdef bint direct_copy = False # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice tmp
- *
- */
- __pyx_v_direct_copy = 0;
-
- /* "View.MemoryView":1284
- * cdef __Pyx_memviewslice tmp
- *
- * if src_ndim < dst_ndim: # <<<<<<<<<<<<<<
- * broadcast_leading(&src, src_ndim, dst_ndim)
- * elif dst_ndim < src_ndim:
- */
- __pyx_t_2 = ((__pyx_v_src_ndim < __pyx_v_dst_ndim) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1285
- *
- * if src_ndim < dst_ndim:
- * broadcast_leading(&src, src_ndim, dst_ndim) # <<<<<<<<<<<<<<
- * elif dst_ndim < src_ndim:
- * broadcast_leading(&dst, dst_ndim, src_ndim)
- */
- __pyx_memoryview_broadcast_leading((&__pyx_v_src), __pyx_v_src_ndim, __pyx_v_dst_ndim);
-
- /* "View.MemoryView":1284
- * cdef __Pyx_memviewslice tmp
- *
- * if src_ndim < dst_ndim: # <<<<<<<<<<<<<<
- * broadcast_leading(&src, src_ndim, dst_ndim)
- * elif dst_ndim < src_ndim:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":1286
- * if src_ndim < dst_ndim:
- * broadcast_leading(&src, src_ndim, dst_ndim)
- * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<<
- * broadcast_leading(&dst, dst_ndim, src_ndim)
- *
- */
- __pyx_t_2 = ((__pyx_v_dst_ndim < __pyx_v_src_ndim) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1287
- * broadcast_leading(&src, src_ndim, dst_ndim)
- * elif dst_ndim < src_ndim:
- * broadcast_leading(&dst, dst_ndim, src_ndim) # <<<<<<<<<<<<<<
- *
- * cdef int ndim = max(src_ndim, dst_ndim)
- */
- __pyx_memoryview_broadcast_leading((&__pyx_v_dst), __pyx_v_dst_ndim, __pyx_v_src_ndim);
-
- /* "View.MemoryView":1286
- * if src_ndim < dst_ndim:
- * broadcast_leading(&src, src_ndim, dst_ndim)
- * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<<
- * broadcast_leading(&dst, dst_ndim, src_ndim)
- *
- */
- }
- __pyx_L3:;
-
- /* "View.MemoryView":1289
- * broadcast_leading(&dst, dst_ndim, src_ndim)
- *
- * cdef int ndim = max(src_ndim, dst_ndim) # <<<<<<<<<<<<<<
- *
- * for i in range(ndim):
- */
- __pyx_t_3 = __pyx_v_dst_ndim;
- __pyx_t_4 = __pyx_v_src_ndim;
- if (((__pyx_t_3 > __pyx_t_4) != 0)) {
- __pyx_t_5 = __pyx_t_3;
- } else {
- __pyx_t_5 = __pyx_t_4;
- }
- __pyx_v_ndim = __pyx_t_5;
-
- /* "View.MemoryView":1291
- * cdef int ndim = max(src_ndim, dst_ndim)
- *
- * for i in range(ndim): # <<<<<<<<<<<<<<
- * if src.shape[i] != dst.shape[i]:
- * if src.shape[i] == 1:
- */
- __pyx_t_5 = __pyx_v_ndim;
- __pyx_t_3 = __pyx_t_5;
- for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {
- __pyx_v_i = __pyx_t_4;
-
- /* "View.MemoryView":1292
- *
- * for i in range(ndim):
- * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<<
- * if src.shape[i] == 1:
- * broadcasting = True
- */
- __pyx_t_2 = (((__pyx_v_src.shape[__pyx_v_i]) != (__pyx_v_dst.shape[__pyx_v_i])) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1293
- * for i in range(ndim):
- * if src.shape[i] != dst.shape[i]:
- * if src.shape[i] == 1: # <<<<<<<<<<<<<<
- * broadcasting = True
- * src.strides[i] = 0
- */
- __pyx_t_2 = (((__pyx_v_src.shape[__pyx_v_i]) == 1) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1294
- * if src.shape[i] != dst.shape[i]:
- * if src.shape[i] == 1:
- * broadcasting = True # <<<<<<<<<<<<<<
- * src.strides[i] = 0
- * else:
- */
- __pyx_v_broadcasting = 1;
-
- /* "View.MemoryView":1295
- * if src.shape[i] == 1:
- * broadcasting = True
- * src.strides[i] = 0 # <<<<<<<<<<<<<<
- * else:
- * _err_extents(i, dst.shape[i], src.shape[i])
- */
- (__pyx_v_src.strides[__pyx_v_i]) = 0;
-
- /* "View.MemoryView":1293
- * for i in range(ndim):
- * if src.shape[i] != dst.shape[i]:
- * if src.shape[i] == 1: # <<<<<<<<<<<<<<
- * broadcasting = True
- * src.strides[i] = 0
- */
- goto __pyx_L7;
- }
-
- /* "View.MemoryView":1297
- * src.strides[i] = 0
- * else:
- * _err_extents(i, dst.shape[i], src.shape[i]) # <<<<<<<<<<<<<<
- *
- * if src.suboffsets[i] >= 0:
- */
- /*else*/ {
- __pyx_t_6 = __pyx_memoryview_err_extents(__pyx_v_i, (__pyx_v_dst.shape[__pyx_v_i]), (__pyx_v_src.shape[__pyx_v_i])); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1297, __pyx_L1_error)
- }
- __pyx_L7:;
-
- /* "View.MemoryView":1292
- *
- * for i in range(ndim):
- * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<<
- * if src.shape[i] == 1:
- * broadcasting = True
- */
- }
-
- /* "View.MemoryView":1299
- * _err_extents(i, dst.shape[i], src.shape[i])
- *
- * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<<
- * _err_dim(ValueError, "Dimension %d is not direct", i)
- *
- */
- __pyx_t_2 = (((__pyx_v_src.suboffsets[__pyx_v_i]) >= 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1300
- *
- * if src.suboffsets[i] >= 0:
- * _err_dim(ValueError, "Dimension %d is not direct", i) # <<<<<<<<<<<<<<
- *
- * if slices_overlap(&src, &dst, ndim, itemsize):
- */
- __pyx_t_6 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Dimension %d is not direct"), __pyx_v_i); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1300, __pyx_L1_error)
-
- /* "View.MemoryView":1299
- * _err_extents(i, dst.shape[i], src.shape[i])
- *
- * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<<
- * _err_dim(ValueError, "Dimension %d is not direct", i)
- *
- */
- }
- }
-
- /* "View.MemoryView":1302
- * _err_dim(ValueError, "Dimension %d is not direct", i)
- *
- * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<<
- *
- * if not slice_is_contig(src, order, ndim):
- */
- __pyx_t_2 = (__pyx_slices_overlap((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1304
- * if slices_overlap(&src, &dst, ndim, itemsize):
- *
- * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<<
- * order = get_best_order(&dst, ndim)
- *
- */
- __pyx_t_2 = ((!(__pyx_memviewslice_is_contig(__pyx_v_src, __pyx_v_order, __pyx_v_ndim) != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1305
- *
- * if not slice_is_contig(src, order, ndim):
- * order = get_best_order(&dst, ndim) # <<<<<<<<<<<<<<
- *
- * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim)
- */
- __pyx_v_order = __pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim);
-
- /* "View.MemoryView":1304
- * if slices_overlap(&src, &dst, ndim, itemsize):
- *
- * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<<
- * order = get_best_order(&dst, ndim)
- *
- */
- }
-
- /* "View.MemoryView":1307
- * order = get_best_order(&dst, ndim)
- *
- * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) # <<<<<<<<<<<<<<
- * src = tmp
- *
- */
- __pyx_t_7 = __pyx_memoryview_copy_data_to_temp((&__pyx_v_src), (&__pyx_v_tmp), __pyx_v_order, __pyx_v_ndim); if (unlikely(__pyx_t_7 == ((void *)NULL))) __PYX_ERR(1, 1307, __pyx_L1_error)
- __pyx_v_tmpdata = __pyx_t_7;
-
- /* "View.MemoryView":1308
- *
- * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim)
- * src = tmp # <<<<<<<<<<<<<<
- *
- * if not broadcasting:
- */
- __pyx_v_src = __pyx_v_tmp;
-
- /* "View.MemoryView":1302
- * _err_dim(ValueError, "Dimension %d is not direct", i)
- *
- * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<<
- *
- * if not slice_is_contig(src, order, ndim):
- */
- }
-
- /* "View.MemoryView":1310
- * src = tmp
- *
- * if not broadcasting: # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_2 = ((!(__pyx_v_broadcasting != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1313
- *
- *
- * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<<
- * direct_copy = slice_is_contig(dst, 'C', ndim)
- * elif slice_is_contig(src, 'F', ndim):
- */
- __pyx_t_2 = (__pyx_memviewslice_is_contig(__pyx_v_src, 'C', __pyx_v_ndim) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1314
- *
- * if slice_is_contig(src, 'C', ndim):
- * direct_copy = slice_is_contig(dst, 'C', ndim) # <<<<<<<<<<<<<<
- * elif slice_is_contig(src, 'F', ndim):
- * direct_copy = slice_is_contig(dst, 'F', ndim)
- */
- __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'C', __pyx_v_ndim);
-
- /* "View.MemoryView":1313
- *
- *
- * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<<
- * direct_copy = slice_is_contig(dst, 'C', ndim)
- * elif slice_is_contig(src, 'F', ndim):
- */
- goto __pyx_L12;
- }
-
- /* "View.MemoryView":1315
- * if slice_is_contig(src, 'C', ndim):
- * direct_copy = slice_is_contig(dst, 'C', ndim)
- * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<<
- * direct_copy = slice_is_contig(dst, 'F', ndim)
- *
- */
- __pyx_t_2 = (__pyx_memviewslice_is_contig(__pyx_v_src, 'F', __pyx_v_ndim) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1316
- * direct_copy = slice_is_contig(dst, 'C', ndim)
- * elif slice_is_contig(src, 'F', ndim):
- * direct_copy = slice_is_contig(dst, 'F', ndim) # <<<<<<<<<<<<<<
- *
- * if direct_copy:
- */
- __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'F', __pyx_v_ndim);
-
- /* "View.MemoryView":1315
- * if slice_is_contig(src, 'C', ndim):
- * direct_copy = slice_is_contig(dst, 'C', ndim)
- * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<<
- * direct_copy = slice_is_contig(dst, 'F', ndim)
- *
- */
- }
- __pyx_L12:;
-
- /* "View.MemoryView":1318
- * direct_copy = slice_is_contig(dst, 'F', ndim)
- *
- * if direct_copy: # <<<<<<<<<<<<<<
- *
- * refcount_copying(&dst, dtype_is_object, ndim, False)
- */
- __pyx_t_2 = (__pyx_v_direct_copy != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1320
- * if direct_copy:
- *
- * refcount_copying(&dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<<
- * memcpy(dst.data, src.data, slice_get_size(&src, ndim))
- * refcount_copying(&dst, dtype_is_object, ndim, True)
- */
- __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0);
-
- /* "View.MemoryView":1321
- *
- * refcount_copying(&dst, dtype_is_object, ndim, False)
- * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) # <<<<<<<<<<<<<<
- * refcount_copying(&dst, dtype_is_object, ndim, True)
- * free(tmpdata)
- */
- (void)(memcpy(__pyx_v_dst.data, __pyx_v_src.data, __pyx_memoryview_slice_get_size((&__pyx_v_src), __pyx_v_ndim)));
-
- /* "View.MemoryView":1322
- * refcount_copying(&dst, dtype_is_object, ndim, False)
- * memcpy(dst.data, src.data, slice_get_size(&src, ndim))
- * refcount_copying(&dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<<
- * free(tmpdata)
- * return 0
- */
- __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1);
-
- /* "View.MemoryView":1323
- * memcpy(dst.data, src.data, slice_get_size(&src, ndim))
- * refcount_copying(&dst, dtype_is_object, ndim, True)
- * free(tmpdata) # <<<<<<<<<<<<<<
- * return 0
- *
- */
- free(__pyx_v_tmpdata);
-
- /* "View.MemoryView":1324
- * refcount_copying(&dst, dtype_is_object, ndim, True)
- * free(tmpdata)
- * return 0 # <<<<<<<<<<<<<<
- *
- * if order == 'F' == get_best_order(&dst, ndim):
- */
- __pyx_r = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":1318
- * direct_copy = slice_is_contig(dst, 'F', ndim)
- *
- * if direct_copy: # <<<<<<<<<<<<<<
- *
- * refcount_copying(&dst, dtype_is_object, ndim, False)
- */
- }
-
- /* "View.MemoryView":1310
- * src = tmp
- *
- * if not broadcasting: # <<<<<<<<<<<<<<
- *
- *
- */
- }
-
- /* "View.MemoryView":1326
- * return 0
- *
- * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_2 = (__pyx_v_order == 'F');
- if (__pyx_t_2) {
- __pyx_t_2 = ('F' == __pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim));
- }
- __pyx_t_8 = (__pyx_t_2 != 0);
- if (__pyx_t_8) {
-
- /* "View.MemoryView":1329
- *
- *
- * transpose_memslice(&src) # <<<<<<<<<<<<<<
- * transpose_memslice(&dst)
- *
- */
- __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_src)); if (unlikely(__pyx_t_5 == ((int)0))) __PYX_ERR(1, 1329, __pyx_L1_error)
-
- /* "View.MemoryView":1330
- *
- * transpose_memslice(&src)
- * transpose_memslice(&dst) # <<<<<<<<<<<<<<
- *
- * refcount_copying(&dst, dtype_is_object, ndim, False)
- */
- __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_dst)); if (unlikely(__pyx_t_5 == ((int)0))) __PYX_ERR(1, 1330, __pyx_L1_error)
-
- /* "View.MemoryView":1326
- * return 0
- *
- * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<<
- *
- *
- */
- }
-
- /* "View.MemoryView":1332
- * transpose_memslice(&dst)
- *
- * refcount_copying(&dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<<
- * copy_strided_to_strided(&src, &dst, ndim, itemsize)
- * refcount_copying(&dst, dtype_is_object, ndim, True)
- */
- __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0);
-
- /* "View.MemoryView":1333
- *
- * refcount_copying(&dst, dtype_is_object, ndim, False)
- * copy_strided_to_strided(&src, &dst, ndim, itemsize) # <<<<<<<<<<<<<<
- * refcount_copying(&dst, dtype_is_object, ndim, True)
- *
- */
- copy_strided_to_strided((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize);
-
- /* "View.MemoryView":1334
- * refcount_copying(&dst, dtype_is_object, ndim, False)
- * copy_strided_to_strided(&src, &dst, ndim, itemsize)
- * refcount_copying(&dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<<
- *
- * free(tmpdata)
- */
- __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1);
-
- /* "View.MemoryView":1336
- * refcount_copying(&dst, dtype_is_object, ndim, True)
- *
- * free(tmpdata) # <<<<<<<<<<<<<<
- * return 0
- *
- */
- free(__pyx_v_tmpdata);
-
- /* "View.MemoryView":1337
- *
- * free(tmpdata)
- * return 0 # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_broadcast_leading')
- */
- __pyx_r = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":1268
- *
- * @cname('__pyx_memoryview_copy_contents')
- * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<<
- * __Pyx_memviewslice dst,
- * int src_ndim, int dst_ndim,
- */
-
- /* function exit code */
- __pyx_L1_error:;
- {
- #ifdef WITH_THREAD
- PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure();
- #endif
- __Pyx_AddTraceback("View.MemoryView.memoryview_copy_contents", __pyx_clineno, __pyx_lineno, __pyx_filename);
- #ifdef WITH_THREAD
- __Pyx_PyGILState_Release(__pyx_gilstate_save);
- #endif
- }
- __pyx_r = -1;
- __pyx_L0:;
- return __pyx_r;
-}
-
-/* "View.MemoryView":1340
- *
- * @cname('__pyx_memoryview_broadcast_leading')
- * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<<
- * int ndim,
- * int ndim_other) nogil:
- */
-
-static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim, int __pyx_v_ndim_other) {
- int __pyx_v_i;
- int __pyx_v_offset;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
-
- /* "View.MemoryView":1344
- * int ndim_other) nogil:
- * cdef int i
- * cdef int offset = ndim_other - ndim # <<<<<<<<<<<<<<
- *
- * for i in range(ndim - 1, -1, -1):
- */
- __pyx_v_offset = (__pyx_v_ndim_other - __pyx_v_ndim);
-
- /* "View.MemoryView":1346
- * cdef int offset = ndim_other - ndim
- *
- * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<<
- * mslice.shape[i + offset] = mslice.shape[i]
- * mslice.strides[i + offset] = mslice.strides[i]
- */
- for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) {
- __pyx_v_i = __pyx_t_1;
-
- /* "View.MemoryView":1347
- *
- * for i in range(ndim - 1, -1, -1):
- * mslice.shape[i + offset] = mslice.shape[i] # <<<<<<<<<<<<<<
- * mslice.strides[i + offset] = mslice.strides[i]
- * mslice.suboffsets[i + offset] = mslice.suboffsets[i]
- */
- (__pyx_v_mslice->shape[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->shape[__pyx_v_i]);
-
- /* "View.MemoryView":1348
- * for i in range(ndim - 1, -1, -1):
- * mslice.shape[i + offset] = mslice.shape[i]
- * mslice.strides[i + offset] = mslice.strides[i] # <<<<<<<<<<<<<<
- * mslice.suboffsets[i + offset] = mslice.suboffsets[i]
- *
- */
- (__pyx_v_mslice->strides[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->strides[__pyx_v_i]);
-
- /* "View.MemoryView":1349
- * mslice.shape[i + offset] = mslice.shape[i]
- * mslice.strides[i + offset] = mslice.strides[i]
- * mslice.suboffsets[i + offset] = mslice.suboffsets[i] # <<<<<<<<<<<<<<
- *
- * for i in range(offset):
- */
- (__pyx_v_mslice->suboffsets[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->suboffsets[__pyx_v_i]);
- }
-
- /* "View.MemoryView":1351
- * mslice.suboffsets[i + offset] = mslice.suboffsets[i]
- *
- * for i in range(offset): # <<<<<<<<<<<<<<
- * mslice.shape[i] = 1
- * mslice.strides[i] = mslice.strides[0]
- */
- __pyx_t_1 = __pyx_v_offset;
- __pyx_t_2 = __pyx_t_1;
- for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) {
- __pyx_v_i = __pyx_t_3;
-
- /* "View.MemoryView":1352
- *
- * for i in range(offset):
- * mslice.shape[i] = 1 # <<<<<<<<<<<<<<
- * mslice.strides[i] = mslice.strides[0]
- * mslice.suboffsets[i] = -1
- */
- (__pyx_v_mslice->shape[__pyx_v_i]) = 1;
-
- /* "View.MemoryView":1353
- * for i in range(offset):
- * mslice.shape[i] = 1
- * mslice.strides[i] = mslice.strides[0] # <<<<<<<<<<<<<<
- * mslice.suboffsets[i] = -1
- *
- */
- (__pyx_v_mslice->strides[__pyx_v_i]) = (__pyx_v_mslice->strides[0]);
-
- /* "View.MemoryView":1354
- * mslice.shape[i] = 1
- * mslice.strides[i] = mslice.strides[0]
- * mslice.suboffsets[i] = -1 # <<<<<<<<<<<<<<
- *
- *
- */
- (__pyx_v_mslice->suboffsets[__pyx_v_i]) = -1L;
- }
-
- /* "View.MemoryView":1340
- *
- * @cname('__pyx_memoryview_broadcast_leading')
- * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<<
- * int ndim,
- * int ndim_other) nogil:
- */
-
- /* function exit code */
-}
-
-/* "View.MemoryView":1362
- *
- * @cname('__pyx_memoryview_refcount_copying')
- * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, # <<<<<<<<<<<<<<
- * int ndim, bint inc) nogil:
- *
- */
-
-static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_dtype_is_object, int __pyx_v_ndim, int __pyx_v_inc) {
- int __pyx_t_1;
-
- /* "View.MemoryView":1366
- *
- *
- * if dtype_is_object: # <<<<<<<<<<<<<<
- * refcount_objects_in_slice_with_gil(dst.data, dst.shape,
- * dst.strides, ndim, inc)
- */
- __pyx_t_1 = (__pyx_v_dtype_is_object != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1367
- *
- * if dtype_is_object:
- * refcount_objects_in_slice_with_gil(dst.data, dst.shape, # <<<<<<<<<<<<<<
- * dst.strides, ndim, inc)
- *
- */
- __pyx_memoryview_refcount_objects_in_slice_with_gil(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_inc);
-
- /* "View.MemoryView":1366
- *
- *
- * if dtype_is_object: # <<<<<<<<<<<<<<
- * refcount_objects_in_slice_with_gil(dst.data, dst.shape,
- * dst.strides, ndim, inc)
- */
- }
-
- /* "View.MemoryView":1362
- *
- * @cname('__pyx_memoryview_refcount_copying')
- * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, # <<<<<<<<<<<<<<
- * int ndim, bint inc) nogil:
- *
- */
-
- /* function exit code */
-}
-
-/* "View.MemoryView":1371
- *
- * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil')
- * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<<
- * Py_ssize_t *strides, int ndim,
- * bint inc) with gil:
- */
-
-static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) {
- __Pyx_RefNannyDeclarations
- #ifdef WITH_THREAD
- PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure();
- #endif
- __Pyx_RefNannySetupContext("refcount_objects_in_slice_with_gil", 0);
-
- /* "View.MemoryView":1374
- * Py_ssize_t *strides, int ndim,
- * bint inc) with gil:
- * refcount_objects_in_slice(data, shape, strides, ndim, inc) # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_refcount_objects_in_slice')
- */
- __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, __pyx_v_shape, __pyx_v_strides, __pyx_v_ndim, __pyx_v_inc);
-
- /* "View.MemoryView":1371
- *
- * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil')
- * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<<
- * Py_ssize_t *strides, int ndim,
- * bint inc) with gil:
- */
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- #ifdef WITH_THREAD
- __Pyx_PyGILState_Release(__pyx_gilstate_save);
- #endif
-}
-
-/* "View.MemoryView":1377
- *
- * @cname('__pyx_memoryview_refcount_objects_in_slice')
- * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<<
- * Py_ssize_t *strides, int ndim, bint inc):
- * cdef Py_ssize_t i
- */
-
-static void __pyx_memoryview_refcount_objects_in_slice(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) {
- CYTHON_UNUSED Py_ssize_t __pyx_v_i;
- __Pyx_RefNannyDeclarations
- Py_ssize_t __pyx_t_1;
- Py_ssize_t __pyx_t_2;
- Py_ssize_t __pyx_t_3;
- int __pyx_t_4;
- __Pyx_RefNannySetupContext("refcount_objects_in_slice", 0);
-
- /* "View.MemoryView":1381
- * cdef Py_ssize_t i
- *
- * for i in range(shape[0]): # <<<<<<<<<<<<<<
- * if ndim == 1:
- * if inc:
- */
- __pyx_t_1 = (__pyx_v_shape[0]);
- __pyx_t_2 = __pyx_t_1;
- for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) {
- __pyx_v_i = __pyx_t_3;
-
- /* "View.MemoryView":1382
- *
- * for i in range(shape[0]):
- * if ndim == 1: # <<<<<<<<<<<<<<
- * if inc:
- * Py_INCREF((<PyObject **> data)[0])
- */
- __pyx_t_4 = ((__pyx_v_ndim == 1) != 0);
- if (__pyx_t_4) {
-
- /* "View.MemoryView":1383
- * for i in range(shape[0]):
- * if ndim == 1:
- * if inc: # <<<<<<<<<<<<<<
- * Py_INCREF((<PyObject **> data)[0])
- * else:
- */
- __pyx_t_4 = (__pyx_v_inc != 0);
- if (__pyx_t_4) {
-
- /* "View.MemoryView":1384
- * if ndim == 1:
- * if inc:
- * Py_INCREF((<PyObject **> data)[0]) # <<<<<<<<<<<<<<
- * else:
- * Py_DECREF((<PyObject **> data)[0])
- */
- Py_INCREF((((PyObject **)__pyx_v_data)[0]));
-
- /* "View.MemoryView":1383
- * for i in range(shape[0]):
- * if ndim == 1:
- * if inc: # <<<<<<<<<<<<<<
- * Py_INCREF((<PyObject **> data)[0])
- * else:
- */
- goto __pyx_L6;
- }
-
- /* "View.MemoryView":1386
- * Py_INCREF((<PyObject **> data)[0])
- * else:
- * Py_DECREF((<PyObject **> data)[0]) # <<<<<<<<<<<<<<
- * else:
- * refcount_objects_in_slice(data, shape + 1, strides + 1,
- */
- /*else*/ {
- Py_DECREF((((PyObject **)__pyx_v_data)[0]));
- }
- __pyx_L6:;
-
- /* "View.MemoryView":1382
- *
- * for i in range(shape[0]):
- * if ndim == 1: # <<<<<<<<<<<<<<
- * if inc:
- * Py_INCREF((<PyObject **> data)[0])
- */
- goto __pyx_L5;
- }
-
- /* "View.MemoryView":1388
- * Py_DECREF((<PyObject **> data)[0])
- * else:
- * refcount_objects_in_slice(data, shape + 1, strides + 1, # <<<<<<<<<<<<<<
- * ndim - 1, inc)
- *
- */
- /*else*/ {
-
- /* "View.MemoryView":1389
- * else:
- * refcount_objects_in_slice(data, shape + 1, strides + 1,
- * ndim - 1, inc) # <<<<<<<<<<<<<<
- *
- * data += strides[0]
- */
- __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_inc);
- }
- __pyx_L5:;
-
- /* "View.MemoryView":1391
- * ndim - 1, inc)
- *
- * data += strides[0] # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_data = (__pyx_v_data + (__pyx_v_strides[0]));
- }
-
- /* "View.MemoryView":1377
- *
- * @cname('__pyx_memoryview_refcount_objects_in_slice')
- * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<<
- * Py_ssize_t *strides, int ndim, bint inc):
- * cdef Py_ssize_t i
- */
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-/* "View.MemoryView":1397
- *
- * @cname('__pyx_memoryview_slice_assign_scalar')
- * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<<
- * size_t itemsize, void *item,
- * bint dtype_is_object) nogil:
- */
-
-static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item, int __pyx_v_dtype_is_object) {
-
- /* "View.MemoryView":1400
- * size_t itemsize, void *item,
- * bint dtype_is_object) nogil:
- * refcount_copying(dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<<
- * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim,
- * itemsize, item)
- */
- __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 0);
-
- /* "View.MemoryView":1401
- * bint dtype_is_object) nogil:
- * refcount_copying(dst, dtype_is_object, ndim, False)
- * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, # <<<<<<<<<<<<<<
- * itemsize, item)
- * refcount_copying(dst, dtype_is_object, ndim, True)
- */
- __pyx_memoryview__slice_assign_scalar(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_itemsize, __pyx_v_item);
-
- /* "View.MemoryView":1403
- * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim,
- * itemsize, item)
- * refcount_copying(dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 1);
-
- /* "View.MemoryView":1397
- *
- * @cname('__pyx_memoryview_slice_assign_scalar')
- * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<<
- * size_t itemsize, void *item,
- * bint dtype_is_object) nogil:
- */
-
- /* function exit code */
-}
-
-/* "View.MemoryView":1407
- *
- * @cname('__pyx_memoryview__slice_assign_scalar')
- * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<<
- * Py_ssize_t *strides, int ndim,
- * size_t itemsize, void *item) nogil:
- */
-
-static void __pyx_memoryview__slice_assign_scalar(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item) {
- CYTHON_UNUSED Py_ssize_t __pyx_v_i;
- Py_ssize_t __pyx_v_stride;
- Py_ssize_t __pyx_v_extent;
- int __pyx_t_1;
- Py_ssize_t __pyx_t_2;
- Py_ssize_t __pyx_t_3;
- Py_ssize_t __pyx_t_4;
-
- /* "View.MemoryView":1411
- * size_t itemsize, void *item) nogil:
- * cdef Py_ssize_t i
- * cdef Py_ssize_t stride = strides[0] # <<<<<<<<<<<<<<
- * cdef Py_ssize_t extent = shape[0]
- *
- */
- __pyx_v_stride = (__pyx_v_strides[0]);
-
- /* "View.MemoryView":1412
- * cdef Py_ssize_t i
- * cdef Py_ssize_t stride = strides[0]
- * cdef Py_ssize_t extent = shape[0] # <<<<<<<<<<<<<<
- *
- * if ndim == 1:
- */
- __pyx_v_extent = (__pyx_v_shape[0]);
-
- /* "View.MemoryView":1414
- * cdef Py_ssize_t extent = shape[0]
- *
- * if ndim == 1: # <<<<<<<<<<<<<<
- * for i in range(extent):
- * memcpy(data, item, itemsize)
- */
- __pyx_t_1 = ((__pyx_v_ndim == 1) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1415
- *
- * if ndim == 1:
- * for i in range(extent): # <<<<<<<<<<<<<<
- * memcpy(data, item, itemsize)
- * data += stride
- */
- __pyx_t_2 = __pyx_v_extent;
- __pyx_t_3 = __pyx_t_2;
- for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {
- __pyx_v_i = __pyx_t_4;
-
- /* "View.MemoryView":1416
- * if ndim == 1:
- * for i in range(extent):
- * memcpy(data, item, itemsize) # <<<<<<<<<<<<<<
- * data += stride
- * else:
- */
- (void)(memcpy(__pyx_v_data, __pyx_v_item, __pyx_v_itemsize));
-
- /* "View.MemoryView":1417
- * for i in range(extent):
- * memcpy(data, item, itemsize)
- * data += stride # <<<<<<<<<<<<<<
- * else:
- * for i in range(extent):
- */
- __pyx_v_data = (__pyx_v_data + __pyx_v_stride);
- }
-
- /* "View.MemoryView":1414
- * cdef Py_ssize_t extent = shape[0]
- *
- * if ndim == 1: # <<<<<<<<<<<<<<
- * for i in range(extent):
- * memcpy(data, item, itemsize)
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":1419
- * data += stride
- * else:
- * for i in range(extent): # <<<<<<<<<<<<<<
- * _slice_assign_scalar(data, shape + 1, strides + 1,
- * ndim - 1, itemsize, item)
- */
- /*else*/ {
- __pyx_t_2 = __pyx_v_extent;
- __pyx_t_3 = __pyx_t_2;
- for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {
- __pyx_v_i = __pyx_t_4;
-
- /* "View.MemoryView":1420
- * else:
- * for i in range(extent):
- * _slice_assign_scalar(data, shape + 1, strides + 1, # <<<<<<<<<<<<<<
- * ndim - 1, itemsize, item)
- * data += stride
- */
- __pyx_memoryview__slice_assign_scalar(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize, __pyx_v_item);
-
- /* "View.MemoryView":1422
- * _slice_assign_scalar(data, shape + 1, strides + 1,
- * ndim - 1, itemsize, item)
- * data += stride # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_data = (__pyx_v_data + __pyx_v_stride);
- }
- }
- __pyx_L3:;
-
- /* "View.MemoryView":1407
- *
- * @cname('__pyx_memoryview__slice_assign_scalar')
- * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<<
- * Py_ssize_t *strides, int ndim,
- * size_t itemsize, void *item) nogil:
- */
-
- /* function exit code */
-}
-
-/* "(tree fragment)":1
- * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<<
- * cdef object __pyx_PickleError
- * cdef object __pyx_result
- */
-
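-/* Pickle entry point for the Enum sentinel type: the wrapper below unpacks
- * (__pyx_type, __pyx_checksum, __pyx_state) from positional or keyword
- * arguments, then delegates to the implementation function. */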
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static PyMethodDef __pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum = {"__pyx_unpickle_Enum", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum, METH_VARARGS|METH_KEYWORDS, 0};
-static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v___pyx_type = 0;
- long __pyx_v___pyx_checksum;
- PyObject *__pyx_v___pyx_state = 0;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__pyx_unpickle_Enum (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_type,&__pyx_n_s_pyx_checksum,&__pyx_n_s_pyx_state,0};
- PyObject* values[3] = {0,0,0};
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_type)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_checksum)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 1); __PYX_ERR(1, 1, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 2:
- if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_state)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 2); __PYX_ERR(1, 1, __pyx_L3_error)
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__pyx_unpickle_Enum") < 0)) __PYX_ERR(1, 1, __pyx_L3_error)
- }
- } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {
- goto __pyx_L5_argtuple_error;
- } else {
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- }
- __pyx_v___pyx_type = values[0];
- __pyx_v___pyx_checksum = __Pyx_PyInt_As_long(values[1]); if (unlikely((__pyx_v___pyx_checksum == (long)-1) && PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error)
- __pyx_v___pyx_state = values[2];
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 1, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return NULL;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(__pyx_self, __pyx_v___pyx_type, __pyx_v___pyx_checksum, __pyx_v___pyx_state);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
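-/* The unpickler first checks the checksum (0xb068931, presumably a digest of
- * the declared attribute layout) and raises pickle.PickleError on mismatch,
- * then constructs via Enum.__new__ and restores state if one was pickled. */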
-static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_v___pyx_PickleError = 0;
- PyObject *__pyx_v___pyx_result = 0;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_t_6;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__pyx_unpickle_Enum", 0);
-
- /* "(tree fragment)":4
- * cdef object __pyx_PickleError
- * cdef object __pyx_result
- * if __pyx_checksum != 0xb068931: # <<<<<<<<<<<<<<
- * from pickle import PickleError as __pyx_PickleError
- * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum)
- */
- __pyx_t_1 = ((__pyx_v___pyx_checksum != 0xb068931) != 0);
- if (__pyx_t_1) {
-
- /* "(tree fragment)":5
- * cdef object __pyx_result
- * if __pyx_checksum != 0xb068931:
- * from pickle import PickleError as __pyx_PickleError # <<<<<<<<<<<<<<
- * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum)
- * __pyx_result = Enum.__new__(__pyx_type)
- */
- __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_INCREF(__pyx_n_s_PickleError);
- __Pyx_GIVEREF(__pyx_n_s_PickleError);
- PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_PickleError);
- __pyx_t_3 = __Pyx_Import(__pyx_n_s_pickle, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_PickleError); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_INCREF(__pyx_t_2);
- __pyx_v___pyx_PickleError = __pyx_t_2;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "(tree fragment)":6
- * if __pyx_checksum != 0xb068931:
- * from pickle import PickleError as __pyx_PickleError
- * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) # <<<<<<<<<<<<<<
- * __pyx_result = Enum.__new__(__pyx_type)
- * if __pyx_state is not None:
- */
- __pyx_t_2 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 6, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Incompatible_checksums_s_vs_0xb0, __pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 6, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_INCREF(__pyx_v___pyx_PickleError);
- __pyx_t_2 = __pyx_v___pyx_PickleError; __pyx_t_5 = NULL;
- if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
- __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2);
- if (likely(__pyx_t_5)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
- __Pyx_INCREF(__pyx_t_5);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_2, function);
- }
- }
- __pyx_t_3 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_5, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 6, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 6, __pyx_L1_error)
-
- /* "(tree fragment)":4
- * cdef object __pyx_PickleError
- * cdef object __pyx_result
- * if __pyx_checksum != 0xb068931: # <<<<<<<<<<<<<<
- * from pickle import PickleError as __pyx_PickleError
- * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum)
- */
- }
-
- /* "(tree fragment)":7
- * from pickle import PickleError as __pyx_PickleError
- * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum)
- * __pyx_result = Enum.__new__(__pyx_type) # <<<<<<<<<<<<<<
- * if __pyx_state is not None:
- * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state)
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_MemviewEnum_type), __pyx_n_s_new); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 7, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_4 = NULL;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
- __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2);
- if (likely(__pyx_t_4)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
- __Pyx_INCREF(__pyx_t_4);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_2, function);
- }
- }
- __pyx_t_3 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_4, __pyx_v___pyx_type) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v___pyx_type);
- __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
- if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 7, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_v___pyx_result = __pyx_t_3;
- __pyx_t_3 = 0;
-
- /* "(tree fragment)":8
- * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum)
- * __pyx_result = Enum.__new__(__pyx_type)
- * if __pyx_state is not None: # <<<<<<<<<<<<<<
- * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state)
- * return __pyx_result
- */
- __pyx_t_1 = (__pyx_v___pyx_state != Py_None);
- __pyx_t_6 = (__pyx_t_1 != 0);
- if (__pyx_t_6) {
-
- /* "(tree fragment)":9
- * __pyx_result = Enum.__new__(__pyx_type)
- * if __pyx_state is not None:
- * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) # <<<<<<<<<<<<<<
- * return __pyx_result
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state):
- */
- if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 9, __pyx_L1_error)
- __pyx_t_3 = __pyx_unpickle_Enum__set_state(((struct __pyx_MemviewEnum_obj *)__pyx_v___pyx_result), ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 9, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "(tree fragment)":8
- * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum)
- * __pyx_result = Enum.__new__(__pyx_type)
- * if __pyx_state is not None: # <<<<<<<<<<<<<<
- * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state)
- * return __pyx_result
- */
- }
-
- /* "(tree fragment)":10
- * if __pyx_state is not None:
- * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state)
- * return __pyx_result # <<<<<<<<<<<<<<
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state):
- * __pyx_result.name = __pyx_state[0]
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v___pyx_result);
- __pyx_r = __pyx_v___pyx_result;
- goto __pyx_L0;
-
- /* "(tree fragment)":1
- * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<<
- * cdef object __pyx_PickleError
- * cdef object __pyx_result
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v___pyx_PickleError);
- __Pyx_XDECREF(__pyx_v___pyx_result);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":11
- * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state)
- * return __pyx_result
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<<
- * __pyx_result.name = __pyx_state[0]
- * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'):
- */
-
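-/* State restoration after unpickling: slot 0 of the state tuple becomes
- * .name; an optional slot 1 updates the instance __dict__ when present. */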
-static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_t_2;
- Py_ssize_t __pyx_t_3;
- int __pyx_t_4;
- int __pyx_t_5;
- PyObject *__pyx_t_6 = NULL;
- PyObject *__pyx_t_7 = NULL;
- PyObject *__pyx_t_8 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__pyx_unpickle_Enum__set_state", 0);
-
- /* "(tree fragment)":12
- * return __pyx_result
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state):
- * __pyx_result.name = __pyx_state[0] # <<<<<<<<<<<<<<
- * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'):
- * __pyx_result.__dict__.update(__pyx_state[1])
- */
- if (unlikely(__pyx_v___pyx_state == Py_None)) {
- PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
- __PYX_ERR(1, 12, __pyx_L1_error)
- }
- __pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_1);
- __Pyx_GOTREF(__pyx_v___pyx_result->name);
- __Pyx_DECREF(__pyx_v___pyx_result->name);
- __pyx_v___pyx_result->name = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "(tree fragment)":13
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state):
- * __pyx_result.name = __pyx_state[0]
- * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
- * __pyx_result.__dict__.update(__pyx_state[1])
- */
- if (unlikely(__pyx_v___pyx_state == Py_None)) {
- PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()");
- __PYX_ERR(1, 13, __pyx_L1_error)
- }
- __pyx_t_3 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error)
- __pyx_t_4 = ((__pyx_t_3 > 1) != 0);
- if (__pyx_t_4) {
- } else {
- __pyx_t_2 = __pyx_t_4;
- goto __pyx_L4_bool_binop_done;
- }
- __pyx_t_4 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error)
- __pyx_t_5 = (__pyx_t_4 != 0);
- __pyx_t_2 = __pyx_t_5;
- __pyx_L4_bool_binop_done:;
- if (__pyx_t_2) {
-
- /* "(tree fragment)":14
- * __pyx_result.name = __pyx_state[0]
- * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'):
- * __pyx_result.__dict__.update(__pyx_state[1]) # <<<<<<<<<<<<<<
- */
- __pyx_t_6 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_update); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- if (unlikely(__pyx_v___pyx_state == Py_None)) {
- PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
- __PYX_ERR(1, 14, __pyx_L1_error)
- }
- __pyx_t_6 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_8 = NULL;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) {
- __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7);
- if (likely(__pyx_t_8)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);
- __Pyx_INCREF(__pyx_t_8);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_7, function);
- }
- }
- __pyx_t_1 = (__pyx_t_8) ? __Pyx_PyObject_Call2Args(__pyx_t_7, __pyx_t_8, __pyx_t_6) : __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_6);
- __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-
- /* "(tree fragment)":13
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state):
- * __pyx_result.name = __pyx_state[0]
- * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
- * __pyx_result.__dict__.update(__pyx_state[1])
- */
- }
-
- /* "(tree fragment)":11
- * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state)
- * return __pyx_result
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<<
- * __pyx_result.name = __pyx_state[0]
- * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'):
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_XDECREF(__pyx_t_7);
- __Pyx_XDECREF(__pyx_t_8);
- __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-static struct __pyx_vtabstruct_array __pyx_vtable_array;
-
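-/* C-level type slots for cython.array: tp_new allocates the struct, installs
- * the vtable, defaults mode/_format to None and runs __cinit__; tp_dealloc
- * mirrors this, preserving any in-flight exception around __dealloc__. */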
-static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k) {
- struct __pyx_array_obj *p;
- PyObject *o;
- if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) {
- o = (*t->tp_alloc)(t, 0);
- } else {
- o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0);
- }
- if (unlikely(!o)) return 0;
- p = ((struct __pyx_array_obj *)o);
- p->__pyx_vtab = __pyx_vtabptr_array;
- p->mode = ((PyObject*)Py_None); Py_INCREF(Py_None);
- p->_format = ((PyObject*)Py_None); Py_INCREF(Py_None);
- if (unlikely(__pyx_array___cinit__(o, a, k) < 0)) goto bad;
- return o;
- bad:
- Py_DECREF(o); o = 0;
- return NULL;
-}
-
-static void __pyx_tp_dealloc_array(PyObject *o) {
- struct __pyx_array_obj *p = (struct __pyx_array_obj *)o;
- #if CYTHON_USE_TP_FINALIZE
- if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) {
- if (PyObject_CallFinalizerFromDealloc(o)) return;
- }
- #endif
- {
- PyObject *etype, *eval, *etb;
- PyErr_Fetch(&etype, &eval, &etb);
- __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1);
- __pyx_array___dealloc__(o);
- __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1);
- PyErr_Restore(etype, eval, etb);
- }
- Py_CLEAR(p->mode);
- Py_CLEAR(p->_format);
- (*Py_TYPE(o)->tp_free)(o);
-}
-static PyObject *__pyx_sq_item_array(PyObject *o, Py_ssize_t i) {
- PyObject *r;
- PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0;
- r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x);
- Py_DECREF(x);
- return r;
-}
-
-static int __pyx_mp_ass_subscript_array(PyObject *o, PyObject *i, PyObject *v) {
- if (v) {
- return __pyx_array___setitem__(o, i, v);
- }
- else {
- PyErr_Format(PyExc_NotImplementedError,
- "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name);
- return -1;
- }
-}
-
-static PyObject *__pyx_tp_getattro_array(PyObject *o, PyObject *n) {
- PyObject *v = __Pyx_PyObject_GenericGetAttr(o, n);
- if (!v && PyErr_ExceptionMatches(PyExc_AttributeError)) {
- PyErr_Clear();
- v = __pyx_array___getattr__(o, n);
- }
- return v;
-}
-
-static PyObject *__pyx_getprop___pyx_array_memview(PyObject *o, CYTHON_UNUSED void *x) {
- return __pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(o);
-}
-
-static PyMethodDef __pyx_methods_array[] = {
- {"__getattr__", (PyCFunction)__pyx_array___getattr__, METH_O|METH_COEXIST, 0},
- {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_array_1__reduce_cython__, METH_NOARGS, 0},
- {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_array_3__setstate_cython__, METH_O, 0},
- {0, 0, 0, 0}
-};
-
-static struct PyGetSetDef __pyx_getsets_array[] = {
- {(char *)"memview", __pyx_getprop___pyx_array_memview, 0, (char *)0, 0},
- {0, 0, 0, 0, 0}
-};
-
-static PySequenceMethods __pyx_tp_as_sequence_array = {
- __pyx_array___len__, /*sq_length*/
- 0, /*sq_concat*/
- 0, /*sq_repeat*/
- __pyx_sq_item_array, /*sq_item*/
- 0, /*sq_slice*/
- 0, /*sq_ass_item*/
- 0, /*sq_ass_slice*/
- 0, /*sq_contains*/
- 0, /*sq_inplace_concat*/
- 0, /*sq_inplace_repeat*/
-};
-
-static PyMappingMethods __pyx_tp_as_mapping_array = {
- __pyx_array___len__, /*mp_length*/
- __pyx_array___getitem__, /*mp_subscript*/
- __pyx_mp_ass_subscript_array, /*mp_ass_subscript*/
-};
-
-static PyBufferProcs __pyx_tp_as_buffer_array = {
- #if PY_MAJOR_VERSION < 3
- 0, /*bf_getreadbuffer*/
- #endif
- #if PY_MAJOR_VERSION < 3
- 0, /*bf_getwritebuffer*/
- #endif
- #if PY_MAJOR_VERSION < 3
- 0, /*bf_getsegcount*/
- #endif
- #if PY_MAJOR_VERSION < 3
- 0, /*bf_getcharbuffer*/
- #endif
- __pyx_array_getbuffer, /*bf_getbuffer*/
- 0, /*bf_releasebuffer*/
-};
-
-static PyTypeObject __pyx_type___pyx_array = {
- PyVarObject_HEAD_INIT(0, 0)
- "monotonic_align.core.array", /*tp_name*/
- sizeof(struct __pyx_array_obj), /*tp_basicsize*/
- 0, /*tp_itemsize*/
- __pyx_tp_dealloc_array, /*tp_dealloc*/
- #if PY_VERSION_HEX < 0x030800b4
- 0, /*tp_print*/
- #endif
- #if PY_VERSION_HEX >= 0x030800b4
- 0, /*tp_vectorcall_offset*/
- #endif
- 0, /*tp_getattr*/
- 0, /*tp_setattr*/
- #if PY_MAJOR_VERSION < 3
- 0, /*tp_compare*/
- #endif
- #if PY_MAJOR_VERSION >= 3
- 0, /*tp_as_async*/
- #endif
- 0, /*tp_repr*/
- 0, /*tp_as_number*/
- &__pyx_tp_as_sequence_array, /*tp_as_sequence*/
- &__pyx_tp_as_mapping_array, /*tp_as_mapping*/
- 0, /*tp_hash*/
- 0, /*tp_call*/
- 0, /*tp_str*/
- __pyx_tp_getattro_array, /*tp_getattro*/
- 0, /*tp_setattro*/
- &__pyx_tp_as_buffer_array, /*tp_as_buffer*/
- Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/
- 0, /*tp_doc*/
- 0, /*tp_traverse*/
- 0, /*tp_clear*/
- 0, /*tp_richcompare*/
- 0, /*tp_weaklistoffset*/
- 0, /*tp_iter*/
- 0, /*tp_iternext*/
- __pyx_methods_array, /*tp_methods*/
- 0, /*tp_members*/
- __pyx_getsets_array, /*tp_getset*/
- 0, /*tp_base*/
- 0, /*tp_dict*/
- 0, /*tp_descr_get*/
- 0, /*tp_descr_set*/
- 0, /*tp_dictoffset*/
- 0, /*tp_init*/
- 0, /*tp_alloc*/
- __pyx_tp_new_array, /*tp_new*/
- 0, /*tp_free*/
- 0, /*tp_is_gc*/
- 0, /*tp_bases*/
- 0, /*tp_mro*/
- 0, /*tp_cache*/
- 0, /*tp_subclasses*/
- 0, /*tp_weaklist*/
- 0, /*tp_del*/
- 0, /*tp_version_tag*/
- #if PY_VERSION_HEX >= 0x030400a1
- 0, /*tp_finalize*/
- #endif
- #if PY_VERSION_HEX >= 0x030800b1
- 0, /*tp_vectorcall*/
- #endif
- #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
- 0, /*tp_print*/
- #endif
-};
-
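-/* Enum here is Cython's internal memoryview sentinel type (the generic /
- * strided / indirect / contiguous markers built further below), not the
- * stdlib enum; it participates in GC only through its single `name` field. */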
-static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) {
- struct __pyx_MemviewEnum_obj *p;
- PyObject *o;
- if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) {
- o = (*t->tp_alloc)(t, 0);
- } else {
- o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0);
- }
- if (unlikely(!o)) return 0;
- p = ((struct __pyx_MemviewEnum_obj *)o);
- p->name = Py_None; Py_INCREF(Py_None);
- return o;
-}
-
-static void __pyx_tp_dealloc_Enum(PyObject *o) {
- struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o;
- #if CYTHON_USE_TP_FINALIZE
- if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) {
- if (PyObject_CallFinalizerFromDealloc(o)) return;
- }
- #endif
- PyObject_GC_UnTrack(o);
- Py_CLEAR(p->name);
- (*Py_TYPE(o)->tp_free)(o);
-}
-
-static int __pyx_tp_traverse_Enum(PyObject *o, visitproc v, void *a) {
- int e;
- struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o;
- if (p->name) {
- e = (*v)(p->name, a); if (e) return e;
- }
- return 0;
-}
-
-static int __pyx_tp_clear_Enum(PyObject *o) {
- PyObject* tmp;
- struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o;
- tmp = ((PyObject*)p->name);
- p->name = Py_None; Py_INCREF(Py_None);
- Py_XDECREF(tmp);
- return 0;
-}
-
-static PyMethodDef __pyx_methods_Enum[] = {
- {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_MemviewEnum_1__reduce_cython__, METH_NOARGS, 0},
- {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_MemviewEnum_3__setstate_cython__, METH_O, 0},
- {0, 0, 0, 0}
-};
-
-static PyTypeObject __pyx_type___pyx_MemviewEnum = {
- PyVarObject_HEAD_INIT(0, 0)
- "monotonic_align.core.Enum", /*tp_name*/
- sizeof(struct __pyx_MemviewEnum_obj), /*tp_basicsize*/
- 0, /*tp_itemsize*/
- __pyx_tp_dealloc_Enum, /*tp_dealloc*/
- #if PY_VERSION_HEX < 0x030800b4
- 0, /*tp_print*/
- #endif
- #if PY_VERSION_HEX >= 0x030800b4
- 0, /*tp_vectorcall_offset*/
- #endif
- 0, /*tp_getattr*/
- 0, /*tp_setattr*/
- #if PY_MAJOR_VERSION < 3
- 0, /*tp_compare*/
- #endif
- #if PY_MAJOR_VERSION >= 3
- 0, /*tp_as_async*/
- #endif
- __pyx_MemviewEnum___repr__, /*tp_repr*/
- 0, /*tp_as_number*/
- 0, /*tp_as_sequence*/
- 0, /*tp_as_mapping*/
- 0, /*tp_hash*/
- 0, /*tp_call*/
- 0, /*tp_str*/
- 0, /*tp_getattro*/
- 0, /*tp_setattro*/
- 0, /*tp_as_buffer*/
- Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/
- 0, /*tp_doc*/
- __pyx_tp_traverse_Enum, /*tp_traverse*/
- __pyx_tp_clear_Enum, /*tp_clear*/
- 0, /*tp_richcompare*/
- 0, /*tp_weaklistoffset*/
- 0, /*tp_iter*/
- 0, /*tp_iternext*/
- __pyx_methods_Enum, /*tp_methods*/
- 0, /*tp_members*/
- 0, /*tp_getset*/
- 0, /*tp_base*/
- 0, /*tp_dict*/
- 0, /*tp_descr_get*/
- 0, /*tp_descr_set*/
- 0, /*tp_dictoffset*/
- __pyx_MemviewEnum___init__, /*tp_init*/
- 0, /*tp_alloc*/
- __pyx_tp_new_Enum, /*tp_new*/
- 0, /*tp_free*/
- 0, /*tp_is_gc*/
- 0, /*tp_bases*/
- 0, /*tp_mro*/
- 0, /*tp_cache*/
- 0, /*tp_subclasses*/
- 0, /*tp_weaklist*/
- 0, /*tp_del*/
- 0, /*tp_version_tag*/
- #if PY_VERSION_HEX >= 0x030400a1
- 0, /*tp_finalize*/
- #endif
- #if PY_VERSION_HEX >= 0x030800b1
- 0, /*tp_vectorcall*/
- #endif
- #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
- 0, /*tp_print*/
- #endif
-};
-static struct __pyx_vtabstruct_memoryview __pyx_vtable_memoryview;
-
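-/* Allocation for the memoryview type: object fields start as None and
- * view.obj as NULL before __cinit__ acquires the underlying buffer. */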
-static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k) {
- struct __pyx_memoryview_obj *p;
- PyObject *o;
- if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) {
- o = (*t->tp_alloc)(t, 0);
- } else {
- o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0);
- }
- if (unlikely(!o)) return 0;
- p = ((struct __pyx_memoryview_obj *)o);
- p->__pyx_vtab = __pyx_vtabptr_memoryview;
- p->obj = Py_None; Py_INCREF(Py_None);
- p->_size = Py_None; Py_INCREF(Py_None);
- p->_array_interface = Py_None; Py_INCREF(Py_None);
- p->view.obj = NULL;
- if (unlikely(__pyx_memoryview___cinit__(o, a, k) < 0)) goto bad;
- return o;
- bad:
- Py_DECREF(o); o = 0;
- return NULL;
-}
-
-static void __pyx_tp_dealloc_memoryview(PyObject *o) {
- struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o;
- #if CYTHON_USE_TP_FINALIZE
- if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) {
- if (PyObject_CallFinalizerFromDealloc(o)) return;
- }
- #endif
- PyObject_GC_UnTrack(o);
- {
- PyObject *etype, *eval, *etb;
- PyErr_Fetch(&etype, &eval, &etb);
- __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1);
- __pyx_memoryview___dealloc__(o);
- __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1);
- PyErr_Restore(etype, eval, etb);
- }
- Py_CLEAR(p->obj);
- Py_CLEAR(p->_size);
- Py_CLEAR(p->_array_interface);
- (*Py_TYPE(o)->tp_free)(o);
-}
-
-static int __pyx_tp_traverse_memoryview(PyObject *o, visitproc v, void *a) {
- int e;
- struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o;
- if (p->obj) {
- e = (*v)(p->obj, a); if (e) return e;
- }
- if (p->_size) {
- e = (*v)(p->_size, a); if (e) return e;
- }
- if (p->_array_interface) {
- e = (*v)(p->_array_interface, a); if (e) return e;
- }
- if (p->view.obj) {
- e = (*v)(p->view.obj, a); if (e) return e;
- }
- return 0;
-}
-
-static int __pyx_tp_clear_memoryview(PyObject *o) {
- PyObject* tmp;
- struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o;
- tmp = ((PyObject*)p->obj);
- p->obj = Py_None; Py_INCREF(Py_None);
- Py_XDECREF(tmp);
- tmp = ((PyObject*)p->_size);
- p->_size = Py_None; Py_INCREF(Py_None);
- Py_XDECREF(tmp);
- tmp = ((PyObject*)p->_array_interface);
- p->_array_interface = Py_None; Py_INCREF(Py_None);
- Py_XDECREF(tmp);
- Py_CLEAR(p->view.obj);
- return 0;
-}
-static PyObject *__pyx_sq_item_memoryview(PyObject *o, Py_ssize_t i) {
- PyObject *r;
- PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0;
- r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x);
- Py_DECREF(x);
- return r;
-}
-
-static int __pyx_mp_ass_subscript_memoryview(PyObject *o, PyObject *i, PyObject *v) {
- if (v) {
- return __pyx_memoryview___setitem__(o, i, v);
- }
- else {
- PyErr_Format(PyExc_NotImplementedError,
- "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name);
- return -1;
- }
-}
-
-static PyObject *__pyx_getprop___pyx_memoryview_T(PyObject *o, CYTHON_UNUSED void *x) {
- return __pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(o);
-}
-
-static PyObject *__pyx_getprop___pyx_memoryview_base(PyObject *o, CYTHON_UNUSED void *x) {
- return __pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(o);
-}
-
-static PyObject *__pyx_getprop___pyx_memoryview_shape(PyObject *o, CYTHON_UNUSED void *x) {
- return __pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(o);
-}
-
-static PyObject *__pyx_getprop___pyx_memoryview_strides(PyObject *o, CYTHON_UNUSED void *x) {
- return __pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(o);
-}
-
-static PyObject *__pyx_getprop___pyx_memoryview_suboffsets(PyObject *o, CYTHON_UNUSED void *x) {
- return __pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(o);
-}
-
-static PyObject *__pyx_getprop___pyx_memoryview_ndim(PyObject *o, CYTHON_UNUSED void *x) {
- return __pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(o);
-}
-
-static PyObject *__pyx_getprop___pyx_memoryview_itemsize(PyObject *o, CYTHON_UNUSED void *x) {
- return __pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(o);
-}
-
-static PyObject *__pyx_getprop___pyx_memoryview_nbytes(PyObject *o, CYTHON_UNUSED void *x) {
- return __pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(o);
-}
-
-static PyObject *__pyx_getprop___pyx_memoryview_size(PyObject *o, CYTHON_UNUSED void *x) {
- return __pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(o);
-}
-
-static PyMethodDef __pyx_methods_memoryview[] = {
- {"is_c_contig", (PyCFunction)__pyx_memoryview_is_c_contig, METH_NOARGS, 0},
- {"is_f_contig", (PyCFunction)__pyx_memoryview_is_f_contig, METH_NOARGS, 0},
- {"copy", (PyCFunction)__pyx_memoryview_copy, METH_NOARGS, 0},
- {"copy_fortran", (PyCFunction)__pyx_memoryview_copy_fortran, METH_NOARGS, 0},
- {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_memoryview_1__reduce_cython__, METH_NOARGS, 0},
- {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_memoryview_3__setstate_cython__, METH_O, 0},
- {0, 0, 0, 0}
-};
-
-static struct PyGetSetDef __pyx_getsets_memoryview[] = {
- {(char *)"T", __pyx_getprop___pyx_memoryview_T, 0, (char *)0, 0},
- {(char *)"base", __pyx_getprop___pyx_memoryview_base, 0, (char *)0, 0},
- {(char *)"shape", __pyx_getprop___pyx_memoryview_shape, 0, (char *)0, 0},
- {(char *)"strides", __pyx_getprop___pyx_memoryview_strides, 0, (char *)0, 0},
- {(char *)"suboffsets", __pyx_getprop___pyx_memoryview_suboffsets, 0, (char *)0, 0},
- {(char *)"ndim", __pyx_getprop___pyx_memoryview_ndim, 0, (char *)0, 0},
- {(char *)"itemsize", __pyx_getprop___pyx_memoryview_itemsize, 0, (char *)0, 0},
- {(char *)"nbytes", __pyx_getprop___pyx_memoryview_nbytes, 0, (char *)0, 0},
- {(char *)"size", __pyx_getprop___pyx_memoryview_size, 0, (char *)0, 0},
- {0, 0, 0, 0, 0}
-};
-
-static PySequenceMethods __pyx_tp_as_sequence_memoryview = {
- __pyx_memoryview___len__, /*sq_length*/
- 0, /*sq_concat*/
- 0, /*sq_repeat*/
- __pyx_sq_item_memoryview, /*sq_item*/
- 0, /*sq_slice*/
- 0, /*sq_ass_item*/
- 0, /*sq_ass_slice*/
- 0, /*sq_contains*/
- 0, /*sq_inplace_concat*/
- 0, /*sq_inplace_repeat*/
-};
-
-static PyMappingMethods __pyx_tp_as_mapping_memoryview = {
- __pyx_memoryview___len__, /*mp_length*/
- __pyx_memoryview___getitem__, /*mp_subscript*/
- __pyx_mp_ass_subscript_memoryview, /*mp_ass_subscript*/
-};
-
-static PyBufferProcs __pyx_tp_as_buffer_memoryview = {
- #if PY_MAJOR_VERSION < 3
- 0, /*bf_getreadbuffer*/
- #endif
- #if PY_MAJOR_VERSION < 3
- 0, /*bf_getwritebuffer*/
- #endif
- #if PY_MAJOR_VERSION < 3
- 0, /*bf_getsegcount*/
- #endif
- #if PY_MAJOR_VERSION < 3
- 0, /*bf_getcharbuffer*/
- #endif
- __pyx_memoryview_getbuffer, /*bf_getbuffer*/
- 0, /*bf_releasebuffer*/
-};
-
-static PyTypeObject __pyx_type___pyx_memoryview = {
- PyVarObject_HEAD_INIT(0, 0)
- "monotonic_align.core.memoryview", /*tp_name*/
- sizeof(struct __pyx_memoryview_obj), /*tp_basicsize*/
- 0, /*tp_itemsize*/
- __pyx_tp_dealloc_memoryview, /*tp_dealloc*/
- #if PY_VERSION_HEX < 0x030800b4
- 0, /*tp_print*/
- #endif
- #if PY_VERSION_HEX >= 0x030800b4
- 0, /*tp_vectorcall_offset*/
- #endif
- 0, /*tp_getattr*/
- 0, /*tp_setattr*/
- #if PY_MAJOR_VERSION < 3
- 0, /*tp_compare*/
- #endif
- #if PY_MAJOR_VERSION >= 3
- 0, /*tp_as_async*/
- #endif
- __pyx_memoryview___repr__, /*tp_repr*/
- 0, /*tp_as_number*/
- &__pyx_tp_as_sequence_memoryview, /*tp_as_sequence*/
- &__pyx_tp_as_mapping_memoryview, /*tp_as_mapping*/
- 0, /*tp_hash*/
- 0, /*tp_call*/
- __pyx_memoryview___str__, /*tp_str*/
- 0, /*tp_getattro*/
- 0, /*tp_setattro*/
- &__pyx_tp_as_buffer_memoryview, /*tp_as_buffer*/
- Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/
- 0, /*tp_doc*/
- __pyx_tp_traverse_memoryview, /*tp_traverse*/
- __pyx_tp_clear_memoryview, /*tp_clear*/
- 0, /*tp_richcompare*/
- 0, /*tp_weaklistoffset*/
- 0, /*tp_iter*/
- 0, /*tp_iternext*/
- __pyx_methods_memoryview, /*tp_methods*/
- 0, /*tp_members*/
- __pyx_getsets_memoryview, /*tp_getset*/
- 0, /*tp_base*/
- 0, /*tp_dict*/
- 0, /*tp_descr_get*/
- 0, /*tp_descr_set*/
- 0, /*tp_dictoffset*/
- 0, /*tp_init*/
- 0, /*tp_alloc*/
- __pyx_tp_new_memoryview, /*tp_new*/
- 0, /*tp_free*/
- 0, /*tp_is_gc*/
- 0, /*tp_bases*/
- 0, /*tp_mro*/
- 0, /*tp_cache*/
- 0, /*tp_subclasses*/
- 0, /*tp_weaklist*/
- 0, /*tp_del*/
- 0, /*tp_version_tag*/
- #if PY_VERSION_HEX >= 0x030400a1
- 0, /*tp_finalize*/
- #endif
- #if PY_VERSION_HEX >= 0x030800b1
- 0, /*tp_vectorcall*/
- #endif
- #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
- 0, /*tp_print*/
- #endif
-};
-static struct __pyx_vtabstruct__memoryviewslice __pyx_vtable__memoryviewslice;
-
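-/* _memoryviewslice extends memoryview with the originating Python object
- * (from_object) and a reference-counted C slice (from_slice); its traverse
- * and clear handlers chain to the base memoryview implementations. */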
-static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k) {
- struct __pyx_memoryviewslice_obj *p;
- PyObject *o = __pyx_tp_new_memoryview(t, a, k);
- if (unlikely(!o)) return 0;
- p = ((struct __pyx_memoryviewslice_obj *)o);
- p->__pyx_base.__pyx_vtab = (struct __pyx_vtabstruct_memoryview*)__pyx_vtabptr__memoryviewslice;
- p->from_object = Py_None; Py_INCREF(Py_None);
- p->from_slice.memview = NULL;
- return o;
-}
-
-static void __pyx_tp_dealloc__memoryviewslice(PyObject *o) {
- struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o;
- #if CYTHON_USE_TP_FINALIZE
- if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) {
- if (PyObject_CallFinalizerFromDealloc(o)) return;
- }
- #endif
- PyObject_GC_UnTrack(o);
- {
- PyObject *etype, *eval, *etb;
- PyErr_Fetch(&etype, &eval, &etb);
- __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1);
- __pyx_memoryviewslice___dealloc__(o);
- __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1);
- PyErr_Restore(etype, eval, etb);
- }
- Py_CLEAR(p->from_object);
- PyObject_GC_Track(o);
- __pyx_tp_dealloc_memoryview(o);
-}
-
-static int __pyx_tp_traverse__memoryviewslice(PyObject *o, visitproc v, void *a) {
- int e;
- struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o;
- e = __pyx_tp_traverse_memoryview(o, v, a); if (e) return e;
- if (p->from_object) {
- e = (*v)(p->from_object, a); if (e) return e;
- }
- return 0;
-}
-
-static int __pyx_tp_clear__memoryviewslice(PyObject *o) {
- PyObject* tmp;
- struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o;
- __pyx_tp_clear_memoryview(o);
- tmp = ((PyObject*)p->from_object);
- p->from_object = Py_None; Py_INCREF(Py_None);
- Py_XDECREF(tmp);
- __PYX_XDEC_MEMVIEW(&p->from_slice, 1);
- return 0;
-}
-
-static PyObject *__pyx_getprop___pyx_memoryviewslice_base(PyObject *o, CYTHON_UNUSED void *x) {
- return __pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(o);
-}
-
-static PyMethodDef __pyx_methods__memoryviewslice[] = {
- {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_memoryviewslice_1__reduce_cython__, METH_NOARGS, 0},
- {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_memoryviewslice_3__setstate_cython__, METH_O, 0},
- {0, 0, 0, 0}
-};
-
-static struct PyGetSetDef __pyx_getsets__memoryviewslice[] = {
- {(char *)"base", __pyx_getprop___pyx_memoryviewslice_base, 0, (char *)0, 0},
- {0, 0, 0, 0, 0}
-};
-
-static PyTypeObject __pyx_type___pyx_memoryviewslice = {
- PyVarObject_HEAD_INIT(0, 0)
- "monotonic_align.core._memoryviewslice", /*tp_name*/
- sizeof(struct __pyx_memoryviewslice_obj), /*tp_basicsize*/
- 0, /*tp_itemsize*/
- __pyx_tp_dealloc__memoryviewslice, /*tp_dealloc*/
- #if PY_VERSION_HEX < 0x030800b4
- 0, /*tp_print*/
- #endif
- #if PY_VERSION_HEX >= 0x030800b4
- 0, /*tp_vectorcall_offset*/
- #endif
- 0, /*tp_getattr*/
- 0, /*tp_setattr*/
- #if PY_MAJOR_VERSION < 3
- 0, /*tp_compare*/
- #endif
- #if PY_MAJOR_VERSION >= 3
- 0, /*tp_as_async*/
- #endif
- #if CYTHON_COMPILING_IN_PYPY
- __pyx_memoryview___repr__, /*tp_repr*/
- #else
- 0, /*tp_repr*/
- #endif
- 0, /*tp_as_number*/
- 0, /*tp_as_sequence*/
- 0, /*tp_as_mapping*/
- 0, /*tp_hash*/
- 0, /*tp_call*/
- #if CYTHON_COMPILING_IN_PYPY
- __pyx_memoryview___str__, /*tp_str*/
- #else
- 0, /*tp_str*/
- #endif
- 0, /*tp_getattro*/
- 0, /*tp_setattro*/
- 0, /*tp_as_buffer*/
- Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/
- "Internal class for passing memoryview slices to Python", /*tp_doc*/
- __pyx_tp_traverse__memoryviewslice, /*tp_traverse*/
- __pyx_tp_clear__memoryviewslice, /*tp_clear*/
- 0, /*tp_richcompare*/
- 0, /*tp_weaklistoffset*/
- 0, /*tp_iter*/
- 0, /*tp_iternext*/
- __pyx_methods__memoryviewslice, /*tp_methods*/
- 0, /*tp_members*/
- __pyx_getsets__memoryviewslice, /*tp_getset*/
- 0, /*tp_base*/
- 0, /*tp_dict*/
- 0, /*tp_descr_get*/
- 0, /*tp_descr_set*/
- 0, /*tp_dictoffset*/
- 0, /*tp_init*/
- 0, /*tp_alloc*/
- __pyx_tp_new__memoryviewslice, /*tp_new*/
- 0, /*tp_free*/
- 0, /*tp_is_gc*/
- 0, /*tp_bases*/
- 0, /*tp_mro*/
- 0, /*tp_cache*/
- 0, /*tp_subclasses*/
- 0, /*tp_weaklist*/
- 0, /*tp_del*/
- 0, /*tp_version_tag*/
- #if PY_VERSION_HEX >= 0x030400a1
- 0, /*tp_finalize*/
- #endif
- #if PY_VERSION_HEX >= 0x030800b1
- 0, /*tp_vectorcall*/
- #endif
- #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
- 0, /*tp_print*/
- #endif
-};
-
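-/* Module-level method table: maximum_path_c is the only function this
- * module exposes to Python callers. */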
-static PyMethodDef __pyx_methods[] = {
- {"maximum_path_c", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_15monotonic_align_4core_1maximum_path_c, METH_VARARGS|METH_KEYWORDS, 0},
- {0, 0, 0, 0}
-};
-
-#if PY_MAJOR_VERSION >= 3
-#if CYTHON_PEP489_MULTI_PHASE_INIT
-static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/
-static int __pyx_pymod_exec_core(PyObject* module); /*proto*/
-static PyModuleDef_Slot __pyx_moduledef_slots[] = {
- {Py_mod_create, (void*)__pyx_pymod_create},
- {Py_mod_exec, (void*)__pyx_pymod_exec_core},
- {0, NULL}
-};
-#endif
-
-static struct PyModuleDef __pyx_moduledef = {
- PyModuleDef_HEAD_INIT,
- "core",
- 0, /* m_doc */
- #if CYTHON_PEP489_MULTI_PHASE_INIT
- 0, /* m_size */
- #else
- -1, /* m_size */
- #endif
- __pyx_methods /* m_methods */,
- #if CYTHON_PEP489_MULTI_PHASE_INIT
- __pyx_moduledef_slots, /* m_slots */
- #else
- NULL, /* m_reload */
- #endif
- NULL, /* m_traverse */
- NULL, /* m_clear */
- NULL /* m_free */
-};
-#endif
-#ifndef CYTHON_SMALL_CODE
-#if defined(__clang__)
- #define CYTHON_SMALL_CODE
-#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3))
- #define CYTHON_SMALL_CODE __attribute__((cold))
-#else
- #define CYTHON_SMALL_CODE
-#endif
-#endif
-
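-/* Interned-string table: each entry records the C literal, its byte length,
- * and encoding/intern flags so module init can create every string constant
- * exactly once. */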
-static __Pyx_StringTabEntry __pyx_string_tab[] = {
- {&__pyx_n_s_ASCII, __pyx_k_ASCII, sizeof(__pyx_k_ASCII), 0, 0, 1, 1},
- {&__pyx_kp_s_Buffer_view_does_not_expose_stri, __pyx_k_Buffer_view_does_not_expose_stri, sizeof(__pyx_k_Buffer_view_does_not_expose_stri), 0, 0, 1, 0},
- {&__pyx_kp_s_Can_only_create_a_buffer_that_is, __pyx_k_Can_only_create_a_buffer_that_is, sizeof(__pyx_k_Can_only_create_a_buffer_that_is), 0, 0, 1, 0},
- {&__pyx_kp_s_Cannot_assign_to_read_only_memor, __pyx_k_Cannot_assign_to_read_only_memor, sizeof(__pyx_k_Cannot_assign_to_read_only_memor), 0, 0, 1, 0},
- {&__pyx_kp_s_Cannot_create_writable_memory_vi, __pyx_k_Cannot_create_writable_memory_vi, sizeof(__pyx_k_Cannot_create_writable_memory_vi), 0, 0, 1, 0},
- {&__pyx_kp_s_Cannot_index_with_type_s, __pyx_k_Cannot_index_with_type_s, sizeof(__pyx_k_Cannot_index_with_type_s), 0, 0, 1, 0},
- {&__pyx_n_s_Ellipsis, __pyx_k_Ellipsis, sizeof(__pyx_k_Ellipsis), 0, 0, 1, 1},
- {&__pyx_kp_s_Empty_shape_tuple_for_cython_arr, __pyx_k_Empty_shape_tuple_for_cython_arr, sizeof(__pyx_k_Empty_shape_tuple_for_cython_arr), 0, 0, 1, 0},
- {&__pyx_kp_s_Incompatible_checksums_s_vs_0xb0, __pyx_k_Incompatible_checksums_s_vs_0xb0, sizeof(__pyx_k_Incompatible_checksums_s_vs_0xb0), 0, 0, 1, 0},
- {&__pyx_n_s_IndexError, __pyx_k_IndexError, sizeof(__pyx_k_IndexError), 0, 0, 1, 1},
- {&__pyx_kp_s_Indirect_dimensions_not_supporte, __pyx_k_Indirect_dimensions_not_supporte, sizeof(__pyx_k_Indirect_dimensions_not_supporte), 0, 0, 1, 0},
- {&__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_k_Invalid_mode_expected_c_or_fortr, sizeof(__pyx_k_Invalid_mode_expected_c_or_fortr), 0, 0, 1, 0},
- {&__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_k_Invalid_shape_in_axis_d_d, sizeof(__pyx_k_Invalid_shape_in_axis_d_d), 0, 0, 1, 0},
- {&__pyx_n_s_MemoryError, __pyx_k_MemoryError, sizeof(__pyx_k_MemoryError), 0, 0, 1, 1},
- {&__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_k_MemoryView_of_r_at_0x_x, sizeof(__pyx_k_MemoryView_of_r_at_0x_x), 0, 0, 1, 0},
- {&__pyx_kp_s_MemoryView_of_r_object, __pyx_k_MemoryView_of_r_object, sizeof(__pyx_k_MemoryView_of_r_object), 0, 0, 1, 0},
- {&__pyx_n_b_O, __pyx_k_O, sizeof(__pyx_k_O), 0, 0, 0, 1},
- {&__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_k_Out_of_bounds_on_buffer_access_a, sizeof(__pyx_k_Out_of_bounds_on_buffer_access_a), 0, 0, 1, 0},
- {&__pyx_n_s_PickleError, __pyx_k_PickleError, sizeof(__pyx_k_PickleError), 0, 0, 1, 1},
- {&__pyx_n_s_TypeError, __pyx_k_TypeError, sizeof(__pyx_k_TypeError), 0, 0, 1, 1},
- {&__pyx_kp_s_Unable_to_convert_item_to_object, __pyx_k_Unable_to_convert_item_to_object, sizeof(__pyx_k_Unable_to_convert_item_to_object), 0, 0, 1, 0},
- {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1},
- {&__pyx_n_s_View_MemoryView, __pyx_k_View_MemoryView, sizeof(__pyx_k_View_MemoryView), 0, 0, 1, 1},
- {&__pyx_n_s_allocate_buffer, __pyx_k_allocate_buffer, sizeof(__pyx_k_allocate_buffer), 0, 0, 1, 1},
- {&__pyx_n_s_base, __pyx_k_base, sizeof(__pyx_k_base), 0, 0, 1, 1},
- {&__pyx_n_s_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 0, 1, 1},
- {&__pyx_n_u_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 1, 0, 1},
- {&__pyx_n_s_class, __pyx_k_class, sizeof(__pyx_k_class), 0, 0, 1, 1},
- {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1},
- {&__pyx_kp_s_contiguous_and_direct, __pyx_k_contiguous_and_direct, sizeof(__pyx_k_contiguous_and_direct), 0, 0, 1, 0},
- {&__pyx_kp_s_contiguous_and_indirect, __pyx_k_contiguous_and_indirect, sizeof(__pyx_k_contiguous_and_indirect), 0, 0, 1, 0},
- {&__pyx_n_s_dict, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 1, 1},
- {&__pyx_n_s_dtype_is_object, __pyx_k_dtype_is_object, sizeof(__pyx_k_dtype_is_object), 0, 0, 1, 1},
- {&__pyx_n_s_encode, __pyx_k_encode, sizeof(__pyx_k_encode), 0, 0, 1, 1},
- {&__pyx_n_s_enumerate, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1},
- {&__pyx_n_s_error, __pyx_k_error, sizeof(__pyx_k_error), 0, 0, 1, 1},
- {&__pyx_n_s_flags, __pyx_k_flags, sizeof(__pyx_k_flags), 0, 0, 1, 1},
- {&__pyx_n_s_format, __pyx_k_format, sizeof(__pyx_k_format), 0, 0, 1, 1},
- {&__pyx_n_s_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 0, 1, 1},
- {&__pyx_n_u_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 1, 0, 1},
- {&__pyx_n_s_getstate, __pyx_k_getstate, sizeof(__pyx_k_getstate), 0, 0, 1, 1},
- {&__pyx_kp_s_got_differing_extents_in_dimensi, __pyx_k_got_differing_extents_in_dimensi, sizeof(__pyx_k_got_differing_extents_in_dimensi), 0, 0, 1, 0},
- {&__pyx_n_s_id, __pyx_k_id, sizeof(__pyx_k_id), 0, 0, 1, 1},
- {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1},
- {&__pyx_n_s_itemsize, __pyx_k_itemsize, sizeof(__pyx_k_itemsize), 0, 0, 1, 1},
- {&__pyx_kp_s_itemsize_0_for_cython_array, __pyx_k_itemsize_0_for_cython_array, sizeof(__pyx_k_itemsize_0_for_cython_array), 0, 0, 1, 0},
- {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1},
- {&__pyx_n_s_memview, __pyx_k_memview, sizeof(__pyx_k_memview), 0, 0, 1, 1},
- {&__pyx_n_s_mode, __pyx_k_mode, sizeof(__pyx_k_mode), 0, 0, 1, 1},
- {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1},
- {&__pyx_n_s_name_2, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1},
- {&__pyx_n_s_ndim, __pyx_k_ndim, sizeof(__pyx_k_ndim), 0, 0, 1, 1},
- {&__pyx_n_s_new, __pyx_k_new, sizeof(__pyx_k_new), 0, 0, 1, 1},
- {&__pyx_kp_s_no_default___reduce___due_to_non, __pyx_k_no_default___reduce___due_to_non, sizeof(__pyx_k_no_default___reduce___due_to_non), 0, 0, 1, 0},
- {&__pyx_n_s_obj, __pyx_k_obj, sizeof(__pyx_k_obj), 0, 0, 1, 1},
- {&__pyx_n_s_pack, __pyx_k_pack, sizeof(__pyx_k_pack), 0, 0, 1, 1},
- {&__pyx_n_s_paths, __pyx_k_paths, sizeof(__pyx_k_paths), 0, 0, 1, 1},
- {&__pyx_n_s_pickle, __pyx_k_pickle, sizeof(__pyx_k_pickle), 0, 0, 1, 1},
- {&__pyx_n_s_pyx_PickleError, __pyx_k_pyx_PickleError, sizeof(__pyx_k_pyx_PickleError), 0, 0, 1, 1},
- {&__pyx_n_s_pyx_checksum, __pyx_k_pyx_checksum, sizeof(__pyx_k_pyx_checksum), 0, 0, 1, 1},
- {&__pyx_n_s_pyx_getbuffer, __pyx_k_pyx_getbuffer, sizeof(__pyx_k_pyx_getbuffer), 0, 0, 1, 1},
- {&__pyx_n_s_pyx_result, __pyx_k_pyx_result, sizeof(__pyx_k_pyx_result), 0, 0, 1, 1},
- {&__pyx_n_s_pyx_state, __pyx_k_pyx_state, sizeof(__pyx_k_pyx_state), 0, 0, 1, 1},
- {&__pyx_n_s_pyx_type, __pyx_k_pyx_type, sizeof(__pyx_k_pyx_type), 0, 0, 1, 1},
- {&__pyx_n_s_pyx_unpickle_Enum, __pyx_k_pyx_unpickle_Enum, sizeof(__pyx_k_pyx_unpickle_Enum), 0, 0, 1, 1},
- {&__pyx_n_s_pyx_vtable, __pyx_k_pyx_vtable, sizeof(__pyx_k_pyx_vtable), 0, 0, 1, 1},
- {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1},
- {&__pyx_n_s_reduce, __pyx_k_reduce, sizeof(__pyx_k_reduce), 0, 0, 1, 1},
- {&__pyx_n_s_reduce_cython, __pyx_k_reduce_cython, sizeof(__pyx_k_reduce_cython), 0, 0, 1, 1},
- {&__pyx_n_s_reduce_ex, __pyx_k_reduce_ex, sizeof(__pyx_k_reduce_ex), 0, 0, 1, 1},
- {&__pyx_n_s_setstate, __pyx_k_setstate, sizeof(__pyx_k_setstate), 0, 0, 1, 1},
- {&__pyx_n_s_setstate_cython, __pyx_k_setstate_cython, sizeof(__pyx_k_setstate_cython), 0, 0, 1, 1},
- {&__pyx_n_s_shape, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1},
- {&__pyx_n_s_size, __pyx_k_size, sizeof(__pyx_k_size), 0, 0, 1, 1},
- {&__pyx_n_s_start, __pyx_k_start, sizeof(__pyx_k_start), 0, 0, 1, 1},
- {&__pyx_n_s_step, __pyx_k_step, sizeof(__pyx_k_step), 0, 0, 1, 1},
- {&__pyx_n_s_stop, __pyx_k_stop, sizeof(__pyx_k_stop), 0, 0, 1, 1},
- {&__pyx_kp_s_strided_and_direct, __pyx_k_strided_and_direct, sizeof(__pyx_k_strided_and_direct), 0, 0, 1, 0},
- {&__pyx_kp_s_strided_and_direct_or_indirect, __pyx_k_strided_and_direct_or_indirect, sizeof(__pyx_k_strided_and_direct_or_indirect), 0, 0, 1, 0},
- {&__pyx_kp_s_strided_and_indirect, __pyx_k_strided_and_indirect, sizeof(__pyx_k_strided_and_indirect), 0, 0, 1, 0},
- {&__pyx_kp_s_stringsource, __pyx_k_stringsource, sizeof(__pyx_k_stringsource), 0, 0, 1, 0},
- {&__pyx_n_s_struct, __pyx_k_struct, sizeof(__pyx_k_struct), 0, 0, 1, 1},
- {&__pyx_n_s_t_xs, __pyx_k_t_xs, sizeof(__pyx_k_t_xs), 0, 0, 1, 1},
- {&__pyx_n_s_t_ys, __pyx_k_t_ys, sizeof(__pyx_k_t_ys), 0, 0, 1, 1},
- {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1},
- {&__pyx_kp_s_unable_to_allocate_array_data, __pyx_k_unable_to_allocate_array_data, sizeof(__pyx_k_unable_to_allocate_array_data), 0, 0, 1, 0},
- {&__pyx_kp_s_unable_to_allocate_shape_and_str, __pyx_k_unable_to_allocate_shape_and_str, sizeof(__pyx_k_unable_to_allocate_shape_and_str), 0, 0, 1, 0},
- {&__pyx_n_s_unpack, __pyx_k_unpack, sizeof(__pyx_k_unpack), 0, 0, 1, 1},
- {&__pyx_n_s_update, __pyx_k_update, sizeof(__pyx_k_update), 0, 0, 1, 1},
- {&__pyx_n_s_values, __pyx_k_values, sizeof(__pyx_k_values), 0, 0, 1, 1},
- {0, 0, 0, 0, 0, 0, 0}
-};
-static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) {
- __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 15, __pyx_L1_error)
- __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 133, __pyx_L1_error)
- __pyx_builtin_MemoryError = __Pyx_GetBuiltinName(__pyx_n_s_MemoryError); if (!__pyx_builtin_MemoryError) __PYX_ERR(1, 148, __pyx_L1_error)
- __pyx_builtin_enumerate = __Pyx_GetBuiltinName(__pyx_n_s_enumerate); if (!__pyx_builtin_enumerate) __PYX_ERR(1, 151, __pyx_L1_error)
- __pyx_builtin_TypeError = __Pyx_GetBuiltinName(__pyx_n_s_TypeError); if (!__pyx_builtin_TypeError) __PYX_ERR(1, 2, __pyx_L1_error)
- __pyx_builtin_Ellipsis = __Pyx_GetBuiltinName(__pyx_n_s_Ellipsis); if (!__pyx_builtin_Ellipsis) __PYX_ERR(1, 404, __pyx_L1_error)
- __pyx_builtin_id = __Pyx_GetBuiltinName(__pyx_n_s_id); if (!__pyx_builtin_id) __PYX_ERR(1, 613, __pyx_L1_error)
- __pyx_builtin_IndexError = __Pyx_GetBuiltinName(__pyx_n_s_IndexError); if (!__pyx_builtin_IndexError) __PYX_ERR(1, 832, __pyx_L1_error)
- return 0;
- __pyx_L1_error:;
- return -1;
-}
-
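-/* Pre-builds the argument tuples used at the raise sites quoted above (plus
- * the cached slice(None)) once at import time, so the error paths avoid
- * allocating them on every call. */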
-static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0);
-
- /* "View.MemoryView":133
- *
- * if not self.ndim:
- * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<<
- *
- * if itemsize <= 0:
- */
- __pyx_tuple__2 = PyTuple_Pack(1, __pyx_kp_s_Empty_shape_tuple_for_cython_arr); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 133, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__2);
- __Pyx_GIVEREF(__pyx_tuple__2);
-
- /* "View.MemoryView":136
- *
- * if itemsize <= 0:
- * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<<
- *
- * if not isinstance(format, bytes):
- */
- __pyx_tuple__3 = PyTuple_Pack(1, __pyx_kp_s_itemsize_0_for_cython_array); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(1, 136, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__3);
- __Pyx_GIVEREF(__pyx_tuple__3);
-
- /* "View.MemoryView":148
- *
- * if not self._shape:
- * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_s_unable_to_allocate_shape_and_str); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 148, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__4);
- __Pyx_GIVEREF(__pyx_tuple__4);
-
- /* "View.MemoryView":176
- * self.data = malloc(self.len)
- * if not self.data:
- * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<<
- *
- * if self.dtype_is_object:
- */
- __pyx_tuple__5 = PyTuple_Pack(1, __pyx_kp_s_unable_to_allocate_array_data); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(1, 176, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__5);
- __Pyx_GIVEREF(__pyx_tuple__5);
-
- /* "View.MemoryView":192
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode):
- * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<<
- * info.buf = self.data
- * info.len = self.len
- */
- __pyx_tuple__6 = PyTuple_Pack(1, __pyx_kp_s_Can_only_create_a_buffer_that_is); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(1, 192, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__6);
- __Pyx_GIVEREF(__pyx_tuple__6);
-
- /* "(tree fragment)":2
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
- __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 2, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__7);
- __Pyx_GIVEREF(__pyx_tuple__7);
-
- /* "(tree fragment)":4
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- */
- __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 4, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__8);
- __Pyx_GIVEREF(__pyx_tuple__8);
-
- /* "View.MemoryView":418
- * def __setitem__(memoryview self, object index, object value):
- * if self.view.readonly:
- * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<<
- *
- * have_slices, index = _unellipsify(index, self.view.ndim)
- */
- __pyx_tuple__9 = PyTuple_Pack(1, __pyx_kp_s_Cannot_assign_to_read_only_memor); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(1, 418, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__9);
- __Pyx_GIVEREF(__pyx_tuple__9);
-
- /* "View.MemoryView":495
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error:
- * raise ValueError("Unable to convert item to object") # <<<<<<<<<<<<<<
- * else:
- * if len(self.view.format) == 1:
- */
- __pyx_tuple__10 = PyTuple_Pack(1, __pyx_kp_s_Unable_to_convert_item_to_object); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(1, 495, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__10);
- __Pyx_GIVEREF(__pyx_tuple__10);
-
- /* "View.MemoryView":520
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * if flags & PyBUF_WRITABLE and self.view.readonly:
- * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_ND:
- */
- __pyx_tuple__11 = PyTuple_Pack(1, __pyx_kp_s_Cannot_create_writable_memory_vi); if (unlikely(!__pyx_tuple__11)) __PYX_ERR(1, 520, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__11);
- __Pyx_GIVEREF(__pyx_tuple__11);
-
- /* "View.MemoryView":570
- * if self.view.strides == NULL:
- *
- * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<<
- *
- * return tuple([stride for stride in self.view.strides[:self.view.ndim]])
- */
- __pyx_tuple__12 = PyTuple_Pack(1, __pyx_kp_s_Buffer_view_does_not_expose_stri); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(1, 570, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__12);
- __Pyx_GIVEREF(__pyx_tuple__12);
-
- /* "View.MemoryView":577
- * def suboffsets(self):
- * if self.view.suboffsets == NULL:
- * return (-1,) * self.view.ndim # <<<<<<<<<<<<<<
- *
- * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]])
- */
- __pyx_tuple__13 = PyTuple_New(1); if (unlikely(!__pyx_tuple__13)) __PYX_ERR(1, 577, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__13);
- __Pyx_INCREF(__pyx_int_neg_1);
- __Pyx_GIVEREF(__pyx_int_neg_1);
- PyTuple_SET_ITEM(__pyx_tuple__13, 0, __pyx_int_neg_1);
- __Pyx_GIVEREF(__pyx_tuple__13);
-
- /* "(tree fragment)":2
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
- __pyx_tuple__14 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(1, 2, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__14);
- __Pyx_GIVEREF(__pyx_tuple__14);
-
- /* "(tree fragment)":4
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- */
- __pyx_tuple__15 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(1, 4, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__15);
- __Pyx_GIVEREF(__pyx_tuple__15);
-
- /* "View.MemoryView":682
- * if item is Ellipsis:
- * if not seen_ellipsis:
- * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<<
- * seen_ellipsis = True
- * else:
- */
- __pyx_slice__16 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__16)) __PYX_ERR(1, 682, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_slice__16);
- __Pyx_GIVEREF(__pyx_slice__16);
-
- /* "View.MemoryView":703
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0:
- * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_tuple__17 = PyTuple_Pack(1, __pyx_kp_s_Indirect_dimensions_not_supporte); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(1, 703, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__17);
- __Pyx_GIVEREF(__pyx_tuple__17);
-
- /* "(tree fragment)":2
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
- __pyx_tuple__18 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__18)) __PYX_ERR(1, 2, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__18);
- __Pyx_GIVEREF(__pyx_tuple__18);
-
- /* "(tree fragment)":4
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- */
- __pyx_tuple__19 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(1, 4, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__19);
- __Pyx_GIVEREF(__pyx_tuple__19);
-
- /* "View.MemoryView":286
- * return self.name
- *
- * cdef generic = Enum("") # <<<<<<<<<<<<<<
- * cdef strided = Enum("") # default
- * cdef indirect = Enum("")
- */
- __pyx_tuple__20 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct_or_indirect); if (unlikely(!__pyx_tuple__20)) __PYX_ERR(1, 286, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__20);
- __Pyx_GIVEREF(__pyx_tuple__20);
-
- /* "View.MemoryView":287
- *
- * cdef generic = Enum("")
- * cdef strided = Enum("") # default # <<<<<<<<<<<<<<
- * cdef indirect = Enum("")
- *
- */
- __pyx_tuple__21 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(1, 287, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__21);
- __Pyx_GIVEREF(__pyx_tuple__21);
-
- /* "View.MemoryView":288
- * cdef generic = Enum("")
- * cdef strided = Enum("") # default
- * cdef indirect = Enum("") # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_tuple__22 = PyTuple_Pack(1, __pyx_kp_s_strided_and_indirect); if (unlikely(!__pyx_tuple__22)) __PYX_ERR(1, 288, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__22);
- __Pyx_GIVEREF(__pyx_tuple__22);
-
- /* "View.MemoryView":291
- *
- *
- * cdef contiguous = Enum("") # <<<<<<<<<<<<<<
- * cdef indirect_contiguous = Enum("")
- *
- */
- __pyx_tuple__23 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_direct); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(1, 291, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__23);
- __Pyx_GIVEREF(__pyx_tuple__23);
-
- /* "View.MemoryView":292
- *
- * cdef contiguous = Enum("")
- * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_tuple__24 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_indirect); if (unlikely(!__pyx_tuple__24)) __PYX_ERR(1, 292, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__24);
- __Pyx_GIVEREF(__pyx_tuple__24);
-
- /* "(tree fragment)":1
- * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<<
- * cdef object __pyx_PickleError
- * cdef object __pyx_result
- */
- __pyx_tuple__25 = PyTuple_Pack(5, __pyx_n_s_pyx_type, __pyx_n_s_pyx_checksum, __pyx_n_s_pyx_state, __pyx_n_s_pyx_PickleError, __pyx_n_s_pyx_result); if (unlikely(!__pyx_tuple__25)) __PYX_ERR(1, 1, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__25);
- __Pyx_GIVEREF(__pyx_tuple__25);
- __pyx_codeobj__26 = (PyObject*)__Pyx_PyCode_New(3, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__25, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_pyx_unpickle_Enum, 1, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__26)) __PYX_ERR(1, 1, __pyx_L1_error)
- __Pyx_RefNannyFinishContext();
- return 0;
- __pyx_L1_error:;
- __Pyx_RefNannyFinishContext();
- return -1;
-}
-
-static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) {
- /* InitThreads.init */
- #ifdef WITH_THREAD
-PyEval_InitThreads();
-#endif
-
-if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error)
-
- if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error);
- __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error)
- __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error)
- __pyx_int_184977713 = PyInt_FromLong(184977713L); if (unlikely(!__pyx_int_184977713)) __PYX_ERR(0, 1, __pyx_L1_error)
- __pyx_int_neg_1 = PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) __PYX_ERR(0, 1, __pyx_L1_error)
- return 0;
- __pyx_L1_error:;
- return -1;
-}
-
-static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/
-static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/
-static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/
-static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/
-static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/
-static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/
-static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/
-
-static int __Pyx_modinit_global_init_code(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0);
- /*--- Global init code ---*/
- generic = Py_None; Py_INCREF(Py_None);
- strided = Py_None; Py_INCREF(Py_None);
- indirect = Py_None; Py_INCREF(Py_None);
- contiguous = Py_None; Py_INCREF(Py_None);
- indirect_contiguous = Py_None; Py_INCREF(Py_None);
- __Pyx_RefNannyFinishContext();
- return 0;
-}
-
-static int __Pyx_modinit_variable_export_code(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0);
- /*--- Variable export code ---*/
- __Pyx_RefNannyFinishContext();
- return 0;
-}
-
-static int __Pyx_modinit_function_export_code(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0);
- /*--- Function export code ---*/
- __Pyx_RefNannyFinishContext();
- return 0;
-}
-
-static int __Pyx_modinit_type_init_code(void) {
- __Pyx_RefNannyDeclarations
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0);
- /*--- Type init code ---*/
- __pyx_vtabptr_array = &__pyx_vtable_array;
- __pyx_vtable_array.get_memview = (PyObject *(*)(struct __pyx_array_obj *))__pyx_array_get_memview;
- if (PyType_Ready(&__pyx_type___pyx_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error)
- #if PY_VERSION_HEX < 0x030800B1
- __pyx_type___pyx_array.tp_print = 0;
- #endif
- if (__Pyx_SetVtable(__pyx_type___pyx_array.tp_dict, __pyx_vtabptr_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error)
- if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error)
- __pyx_array_type = &__pyx_type___pyx_array;
- if (PyType_Ready(&__pyx_type___pyx_MemviewEnum) < 0) __PYX_ERR(1, 279, __pyx_L1_error)
- #if PY_VERSION_HEX < 0x030800B1
- __pyx_type___pyx_MemviewEnum.tp_print = 0;
- #endif
- if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_MemviewEnum.tp_dictoffset && __pyx_type___pyx_MemviewEnum.tp_getattro == PyObject_GenericGetAttr)) {
- __pyx_type___pyx_MemviewEnum.tp_getattro = __Pyx_PyObject_GenericGetAttr;
- }
- if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_MemviewEnum) < 0) __PYX_ERR(1, 279, __pyx_L1_error)
- __pyx_MemviewEnum_type = &__pyx_type___pyx_MemviewEnum;
- __pyx_vtabptr_memoryview = &__pyx_vtable_memoryview;
- __pyx_vtable_memoryview.get_item_pointer = (char *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_get_item_pointer;
- __pyx_vtable_memoryview.is_slice = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_is_slice;
- __pyx_vtable_memoryview.setitem_slice_assignment = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_slice_assignment;
- __pyx_vtable_memoryview.setitem_slice_assign_scalar = (PyObject *(*)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_setitem_slice_assign_scalar;
- __pyx_vtable_memoryview.setitem_indexed = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_indexed;
- __pyx_vtable_memoryview.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryview_convert_item_to_object;
- __pyx_vtable_memoryview.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryview_assign_item_from_object;
- if (PyType_Ready(&__pyx_type___pyx_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error)
- #if PY_VERSION_HEX < 0x030800B1
- __pyx_type___pyx_memoryview.tp_print = 0;
- #endif
- if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_memoryview.tp_dictoffset && __pyx_type___pyx_memoryview.tp_getattro == PyObject_GenericGetAttr)) {
- __pyx_type___pyx_memoryview.tp_getattro = __Pyx_PyObject_GenericGetAttr;
- }
- if (__Pyx_SetVtable(__pyx_type___pyx_memoryview.tp_dict, __pyx_vtabptr_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error)
- if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error)
- __pyx_memoryview_type = &__pyx_type___pyx_memoryview;
- __pyx_vtabptr__memoryviewslice = &__pyx_vtable__memoryviewslice;
- __pyx_vtable__memoryviewslice.__pyx_base = *__pyx_vtabptr_memoryview;
- __pyx_vtable__memoryviewslice.__pyx_base.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryviewslice_convert_item_to_object;
- __pyx_vtable__memoryviewslice.__pyx_base.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryviewslice_assign_item_from_object;
- __pyx_type___pyx_memoryviewslice.tp_base = __pyx_memoryview_type;
- if (PyType_Ready(&__pyx_type___pyx_memoryviewslice) < 0) __PYX_ERR(1, 965, __pyx_L1_error)
- #if PY_VERSION_HEX < 0x030800B1
- __pyx_type___pyx_memoryviewslice.tp_print = 0;
- #endif
- if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_memoryviewslice.tp_dictoffset && __pyx_type___pyx_memoryviewslice.tp_getattro == PyObject_GenericGetAttr)) {
- __pyx_type___pyx_memoryviewslice.tp_getattro = __Pyx_PyObject_GenericGetAttr;
- }
- if (__Pyx_SetVtable(__pyx_type___pyx_memoryviewslice.tp_dict, __pyx_vtabptr__memoryviewslice) < 0) __PYX_ERR(1, 965, __pyx_L1_error)
- if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_memoryviewslice) < 0) __PYX_ERR(1, 965, __pyx_L1_error)
- __pyx_memoryviewslice_type = &__pyx_type___pyx_memoryviewslice;
- __Pyx_RefNannyFinishContext();
- return 0;
- __pyx_L1_error:;
- __Pyx_RefNannyFinishContext();
- return -1;
-}
-
-static int __Pyx_modinit_type_import_code(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0);
- /*--- Type import code ---*/
- __Pyx_RefNannyFinishContext();
- return 0;
-}
-
-static int __Pyx_modinit_variable_import_code(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0);
- /*--- Variable import code ---*/
- __Pyx_RefNannyFinishContext();
- return 0;
-}
-
-static int __Pyx_modinit_function_import_code(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0);
- /*--- Function import code ---*/
- __Pyx_RefNannyFinishContext();
- return 0;
-}
-
-
-#ifndef CYTHON_NO_PYINIT_EXPORT
-#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC
-#elif PY_MAJOR_VERSION < 3
-#ifdef __cplusplus
-#define __Pyx_PyMODINIT_FUNC extern "C" void
-#else
-#define __Pyx_PyMODINIT_FUNC void
-#endif
-#else
-#ifdef __cplusplus
-#define __Pyx_PyMODINIT_FUNC extern "C" PyObject *
-#else
-#define __Pyx_PyMODINIT_FUNC PyObject *
-#endif
-#endif
-
-
-#if PY_MAJOR_VERSION < 3
-__Pyx_PyMODINIT_FUNC initcore(void) CYTHON_SMALL_CODE; /*proto*/
-__Pyx_PyMODINIT_FUNC initcore(void)
-#else
-__Pyx_PyMODINIT_FUNC PyInit_core(void) CYTHON_SMALL_CODE; /*proto*/
-__Pyx_PyMODINIT_FUNC PyInit_core(void)
-#if CYTHON_PEP489_MULTI_PHASE_INIT
-{
- return PyModuleDef_Init(&__pyx_moduledef);
-}
-static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) {
- #if PY_VERSION_HEX >= 0x030700A1
- static PY_INT64_T main_interpreter_id = -1;
- PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp);
- if (main_interpreter_id == -1) {
- main_interpreter_id = current_id;
- return (unlikely(current_id == -1)) ? -1 : 0;
- } else if (unlikely(main_interpreter_id != current_id))
- #else
- static PyInterpreterState *main_interpreter = NULL;
- PyInterpreterState *current_interpreter = PyThreadState_Get()->interp;
- if (!main_interpreter) {
- main_interpreter = current_interpreter;
- } else if (unlikely(main_interpreter != current_interpreter))
- #endif
- {
- PyErr_SetString(
- PyExc_ImportError,
- "Interpreter change detected - this module can only be loaded into one interpreter per process.");
- return -1;
- }
- return 0;
-}
-static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) {
- PyObject *value = PyObject_GetAttrString(spec, from_name);
- int result = 0;
- if (likely(value)) {
- if (allow_none || value != Py_None) {
- result = PyDict_SetItemString(moddict, to_name, value);
- }
- Py_DECREF(value);
- } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) {
- PyErr_Clear();
- } else {
- result = -1;
- }
- return result;
-}
-static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) {
- PyObject *module = NULL, *moddict, *modname;
- if (__Pyx_check_single_interpreter())
- return NULL;
- if (__pyx_m)
- return __Pyx_NewRef(__pyx_m);
- modname = PyObject_GetAttrString(spec, "name");
- if (unlikely(!modname)) goto bad;
- module = PyModule_NewObject(modname);
- Py_DECREF(modname);
- if (unlikely(!module)) goto bad;
- moddict = PyModule_GetDict(module);
- if (unlikely(!moddict)) goto bad;
- if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad;
- if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad;
- if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad;
- if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad;
- return module;
-bad:
- Py_XDECREF(module);
- return NULL;
-}
-
-
-static CYTHON_SMALL_CODE int __pyx_pymod_exec_core(PyObject *__pyx_pyinit_module)
-#endif
-#endif
-{
- PyObject *__pyx_t_1 = NULL;
- static PyThread_type_lock __pyx_t_2[8];
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannyDeclarations
- #if CYTHON_PEP489_MULTI_PHASE_INIT
- if (__pyx_m) {
- if (__pyx_m == __pyx_pyinit_module) return 0;
- PyErr_SetString(PyExc_RuntimeError, "Module 'core' has already been imported. Re-initialisation is not supported.");
- return -1;
- }
- #elif PY_MAJOR_VERSION >= 3
- if (__pyx_m) return __Pyx_NewRef(__pyx_m);
- #endif
- #if CYTHON_REFNANNY
-__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny");
-if (!__Pyx_RefNanny) {
- PyErr_Clear();
- __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny");
- if (!__Pyx_RefNanny)
- Py_FatalError("failed to import 'refnanny' module");
-}
-#endif
- __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_core(void)", 0);
- if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #ifdef __Pxy_PyFrame_Initialize_Offsets
- __Pxy_PyFrame_Initialize_Offsets();
- #endif
- __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error)
- __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error)
- __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error)
- #ifdef __Pyx_CyFunction_USED
- if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- #ifdef __Pyx_FusedFunction_USED
- if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- #ifdef __Pyx_Coroutine_USED
- if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- #ifdef __Pyx_Generator_USED
- if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- #ifdef __Pyx_AsyncGen_USED
- if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- #ifdef __Pyx_StopAsyncIteration_USED
- if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- /*--- Library function declarations ---*/
- /*--- Threads initialization code ---*/
- #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS
- #ifdef WITH_THREAD /* Python build with threading support? */
- PyEval_InitThreads();
- #endif
- #endif
- /*--- Module creation code ---*/
- #if CYTHON_PEP489_MULTI_PHASE_INIT
- __pyx_m = __pyx_pyinit_module;
- Py_INCREF(__pyx_m);
- #else
- #if PY_MAJOR_VERSION < 3
- __pyx_m = Py_InitModule4("core", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m);
- #else
- __pyx_m = PyModule_Create(&__pyx_moduledef);
- #endif
- if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error)
- Py_INCREF(__pyx_d);
- __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error)
- Py_INCREF(__pyx_b);
- __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error)
- Py_INCREF(__pyx_cython_runtime);
- if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error);
- /*--- Initialize various global constants etc. ---*/
- if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)
- if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- if (__pyx_module_is_main_monotonic_align__core) {
- if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name_2, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- }
- #if PY_MAJOR_VERSION >= 3
- {
- PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error)
- if (!PyDict_GetItemString(modules, "monotonic_align.core")) {
- if (unlikely(PyDict_SetItemString(modules, "monotonic_align.core", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error)
- }
- }
- #endif
- /*--- Builtin init code ---*/
- if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- /*--- Constants init code ---*/
- if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- /*--- Global type/function init code ---*/
- (void)__Pyx_modinit_global_init_code();
- (void)__Pyx_modinit_variable_export_code();
- (void)__Pyx_modinit_function_export_code();
- if (unlikely(__Pyx_modinit_type_init_code() < 0)) __PYX_ERR(0, 1, __pyx_L1_error)
- (void)__Pyx_modinit_type_import_code();
- (void)__Pyx_modinit_variable_import_code();
- (void)__Pyx_modinit_function_import_code();
- /*--- Execution code ---*/
- #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED)
- if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
-
- /* "monotonic_align/core.pyx":7
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<<
- * cdef int x
- * cdef int y
- */
- __pyx_k_ = (-1e9);
-
- /* "monotonic_align/core.pyx":1
- * cimport cython # <<<<<<<<<<<<<<
- * from cython.parallel import prange
- *
- */
- __pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-
- /* "View.MemoryView":209
- * info.obj = self
- *
- * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<<
- *
- * def __dealloc__(array self):
- */
- __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_array_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 209, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (PyDict_SetItem((PyObject *)__pyx_array_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 209, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- PyType_Modified(__pyx_array_type);
-
- /* "View.MemoryView":286
- * return self.name
- *
- * cdef generic = Enum("") # <<<<<<<<<<<<<<
- * cdef strided = Enum("") # default
- * cdef indirect = Enum("")
- */
- __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__20, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 286, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_XGOTREF(generic);
- __Pyx_DECREF_SET(generic, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":287
- *
- * cdef generic = Enum("")
- * cdef strided = Enum("") # default # <<<<<<<<<<<<<<
- * cdef indirect = Enum("")
- *
- */
- __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__21, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 287, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_XGOTREF(strided);
- __Pyx_DECREF_SET(strided, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":288
- * cdef generic = Enum("")
- * cdef strided = Enum("") # default
- * cdef indirect = Enum("") # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__22, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 288, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_XGOTREF(indirect);
- __Pyx_DECREF_SET(indirect, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":291
- *
- *
- * cdef contiguous = Enum("") # <<<<<<<<<<<<<<
- * cdef indirect_contiguous = Enum("")
- *
- */
- __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__23, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 291, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_XGOTREF(contiguous);
- __Pyx_DECREF_SET(contiguous, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":292
- *
- * cdef contiguous = Enum("")
- * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__24, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 292, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_XGOTREF(indirect_contiguous);
- __Pyx_DECREF_SET(indirect_contiguous, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":316
- *
- * DEF THREAD_LOCKS_PREALLOCATED = 8
- * cdef int __pyx_memoryview_thread_locks_used = 0 # <<<<<<<<<<<<<<
- * cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [
- * PyThread_allocate_lock(),
- */
- __pyx_memoryview_thread_locks_used = 0;
-
- /* "View.MemoryView":317
- * DEF THREAD_LOCKS_PREALLOCATED = 8
- * cdef int __pyx_memoryview_thread_locks_used = 0
- * cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [ # <<<<<<<<<<<<<<
- * PyThread_allocate_lock(),
- * PyThread_allocate_lock(),
- */
- __pyx_t_2[0] = PyThread_allocate_lock();
- __pyx_t_2[1] = PyThread_allocate_lock();
- __pyx_t_2[2] = PyThread_allocate_lock();
- __pyx_t_2[3] = PyThread_allocate_lock();
- __pyx_t_2[4] = PyThread_allocate_lock();
- __pyx_t_2[5] = PyThread_allocate_lock();
- __pyx_t_2[6] = PyThread_allocate_lock();
- __pyx_t_2[7] = PyThread_allocate_lock();
- memcpy(&(__pyx_memoryview_thread_locks[0]), __pyx_t_2, sizeof(__pyx_memoryview_thread_locks[0]) * (8));
-
- /* "View.MemoryView":549
- * info.obj = self
- *
- * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_memoryview_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 549, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (PyDict_SetItem((PyObject *)__pyx_memoryview_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 549, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- PyType_Modified(__pyx_memoryview_type);
-
- /* "View.MemoryView":995
- * return self.from_object
- *
- * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_memoryview_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 995, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (PyDict_SetItem((PyObject *)__pyx_memoryviewslice_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 995, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- PyType_Modified(__pyx_memoryviewslice_type);
-
- /* "(tree fragment)":1
- * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<<
- * cdef object __pyx_PickleError
- * cdef object __pyx_result
- */
- __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum, NULL, __pyx_n_s_View_MemoryView); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_Enum, __pyx_t_1) < 0) __PYX_ERR(1, 1, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-
- /* "(tree fragment)":11
- * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state)
- * return __pyx_result
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<<
- * __pyx_result.name = __pyx_state[0]
- * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'):
- */
-
- /*--- Wrapped vars code ---*/
-
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- if (__pyx_m) {
- if (__pyx_d) {
- __Pyx_AddTraceback("init monotonic_align.core", __pyx_clineno, __pyx_lineno, __pyx_filename);
- }
- Py_CLEAR(__pyx_m);
- } else if (!PyErr_Occurred()) {
- PyErr_SetString(PyExc_ImportError, "init monotonic_align.core");
- }
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- #if CYTHON_PEP489_MULTI_PHASE_INIT
- return (__pyx_m != NULL) ? 0 : -1;
- #elif PY_MAJOR_VERSION >= 3
- return __pyx_m;
- #else
- return;
- #endif
-}
-
-/* --- Runtime support code --- */
-/* Refnanny */
-#if CYTHON_REFNANNY
-static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) {
- PyObject *m = NULL, *p = NULL;
- void *r = NULL;
- m = PyImport_ImportModule(modname);
- if (!m) goto end;
- p = PyObject_GetAttrString(m, "RefNannyAPI");
- if (!p) goto end;
- r = PyLong_AsVoidPtr(p);
-end:
- Py_XDECREF(p);
- Py_XDECREF(m);
- return (__Pyx_RefNannyAPIStruct *)r;
-}
-#endif
-
-/* PyObjectGetAttrStr */
-#if CYTHON_USE_TYPE_SLOTS
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) {
- PyTypeObject* tp = Py_TYPE(obj);
- if (likely(tp->tp_getattro))
- return tp->tp_getattro(obj, attr_name);
-#if PY_MAJOR_VERSION < 3
- if (likely(tp->tp_getattr))
- return tp->tp_getattr(obj, PyString_AS_STRING(attr_name));
-#endif
- return PyObject_GetAttr(obj, attr_name);
-}
-#endif
-
-/* GetBuiltinName */
-static PyObject *__Pyx_GetBuiltinName(PyObject *name) {
- PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name);
- if (unlikely(!result)) {
- PyErr_Format(PyExc_NameError,
-#if PY_MAJOR_VERSION >= 3
- "name '%U' is not defined", name);
-#else
- "name '%.200s' is not defined", PyString_AS_STRING(name));
-#endif
- }
- return result;
-}
-
-/* MemviewSliceInit */
-static int
-__Pyx_init_memviewslice(struct __pyx_memoryview_obj *memview,
- int ndim,
- __Pyx_memviewslice *memviewslice,
- int memview_is_new_reference)
-{
- __Pyx_RefNannyDeclarations
- int i, retval=-1;
- Py_buffer *buf = &memview->view;
- __Pyx_RefNannySetupContext("init_memviewslice", 0);
- if (unlikely(memviewslice->memview || memviewslice->data)) {
- PyErr_SetString(PyExc_ValueError,
- "memviewslice is already initialized!");
- goto fail;
- }
- if (buf->strides) {
- for (i = 0; i < ndim; i++) {
- memviewslice->strides[i] = buf->strides[i];
- }
- } else {
- Py_ssize_t stride = buf->itemsize;
- for (i = ndim - 1; i >= 0; i--) {
- memviewslice->strides[i] = stride;
- stride *= buf->shape[i];
- }
- }
- for (i = 0; i < ndim; i++) {
- memviewslice->shape[i] = buf->shape[i];
- if (buf->suboffsets) {
- memviewslice->suboffsets[i] = buf->suboffsets[i];
- } else {
- memviewslice->suboffsets[i] = -1;
- }
- }
- memviewslice->memview = memview;
- memviewslice->data = (char *)buf->buf;
- if (__pyx_add_acquisition_count(memview) == 0 && !memview_is_new_reference) {
- Py_INCREF(memview);
- }
- retval = 0;
- goto no_fail;
-fail:
- memviewslice->memview = 0;
- memviewslice->data = 0;
- retval = -1;
-no_fail:
- __Pyx_RefNannyFinishContext();
- return retval;
-}
-#ifndef Py_NO_RETURN
-#define Py_NO_RETURN
-#endif
-static void __pyx_fatalerror(const char *fmt, ...) Py_NO_RETURN {
- va_list vargs;
- char msg[200];
-#ifdef HAVE_STDARG_PROTOTYPES
- va_start(vargs, fmt);
-#else
- va_start(vargs);
-#endif
- vsnprintf(msg, 200, fmt, vargs);
- va_end(vargs);
- Py_FatalError(msg);
-}
-static CYTHON_INLINE int
-__pyx_add_acquisition_count_locked(__pyx_atomic_int *acquisition_count,
- PyThread_type_lock lock)
-{
- int result;
- PyThread_acquire_lock(lock, 1);
- result = (*acquisition_count)++;
- PyThread_release_lock(lock);
- return result;
-}
-static CYTHON_INLINE int
-__pyx_sub_acquisition_count_locked(__pyx_atomic_int *acquisition_count,
- PyThread_type_lock lock)
-{
- int result;
- PyThread_acquire_lock(lock, 1);
- result = (*acquisition_count)--;
- PyThread_release_lock(lock);
- return result;
-}
-static CYTHON_INLINE void
-__Pyx_INC_MEMVIEW(__Pyx_memviewslice *memslice, int have_gil, int lineno)
-{
- int first_time;
- struct __pyx_memoryview_obj *memview = memslice->memview;
- if (unlikely(!memview || (PyObject *) memview == Py_None))
- return;
- if (unlikely(__pyx_get_slice_count(memview) < 0))
- __pyx_fatalerror("Acquisition count is %d (line %d)",
- __pyx_get_slice_count(memview), lineno);
- first_time = __pyx_add_acquisition_count(memview) == 0;
- if (unlikely(first_time)) {
- if (have_gil) {
- Py_INCREF((PyObject *) memview);
- } else {
- PyGILState_STATE _gilstate = PyGILState_Ensure();
- Py_INCREF((PyObject *) memview);
- PyGILState_Release(_gilstate);
- }
- }
-}
-static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *memslice,
- int have_gil, int lineno) {
- int last_time;
- struct __pyx_memoryview_obj *memview = memslice->memview;
- if (unlikely(!memview || (PyObject *) memview == Py_None)) {
- memslice->memview = NULL;
- return;
- }
- if (unlikely(__pyx_get_slice_count(memview) <= 0))
- __pyx_fatalerror("Acquisition count is %d (line %d)",
- __pyx_get_slice_count(memview), lineno);
- last_time = __pyx_sub_acquisition_count(memview) == 1;
- memslice->data = NULL;
- if (unlikely(last_time)) {
- if (have_gil) {
- Py_CLEAR(memslice->memview);
- } else {
- PyGILState_STATE _gilstate = PyGILState_Ensure();
- Py_CLEAR(memslice->memview);
- PyGILState_Release(_gilstate);
- }
- } else {
- memslice->memview = NULL;
- }
-}
-
-/* RaiseArgTupleInvalid */
-static void __Pyx_RaiseArgtupleInvalid(
- const char* func_name,
- int exact,
- Py_ssize_t num_min,
- Py_ssize_t num_max,
- Py_ssize_t num_found)
-{
- Py_ssize_t num_expected;
- const char *more_or_less;
- if (num_found < num_min) {
- num_expected = num_min;
- more_or_less = "at least";
- } else {
- num_expected = num_max;
- more_or_less = "at most";
- }
- if (exact) {
- more_or_less = "exactly";
- }
- PyErr_Format(PyExc_TypeError,
- "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)",
- func_name, more_or_less, num_expected,
- (num_expected == 1) ? "" : "s", num_found);
-}
-
-/* RaiseDoubleKeywords */
-static void __Pyx_RaiseDoubleKeywordsError(
- const char* func_name,
- PyObject* kw_name)
-{
- PyErr_Format(PyExc_TypeError,
- #if PY_MAJOR_VERSION >= 3
- "%s() got multiple values for keyword argument '%U'", func_name, kw_name);
- #else
- "%s() got multiple values for keyword argument '%s'", func_name,
- PyString_AsString(kw_name));
- #endif
-}
-
-/* ParseKeywords */
-static int __Pyx_ParseOptionalKeywords(
- PyObject *kwds,
- PyObject **argnames[],
- PyObject *kwds2,
- PyObject *values[],
- Py_ssize_t num_pos_args,
- const char* function_name)
-{
- PyObject *key = 0, *value = 0;
- Py_ssize_t pos = 0;
- PyObject*** name;
- PyObject*** first_kw_arg = argnames + num_pos_args;
- while (PyDict_Next(kwds, &pos, &key, &value)) {
- name = first_kw_arg;
- while (*name && (**name != key)) name++;
- if (*name) {
- values[name-argnames] = value;
- continue;
- }
- name = first_kw_arg;
- #if PY_MAJOR_VERSION < 3
- if (likely(PyString_Check(key))) {
- while (*name) {
- if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key))
- && _PyString_Eq(**name, key)) {
- values[name-argnames] = value;
- break;
- }
- name++;
- }
- if (*name) continue;
- else {
- PyObject*** argname = argnames;
- while (argname != first_kw_arg) {
- if ((**argname == key) || (
- (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key))
- && _PyString_Eq(**argname, key))) {
- goto arg_passed_twice;
- }
- argname++;
- }
- }
- } else
- #endif
- if (likely(PyUnicode_Check(key))) {
- while (*name) {
- int cmp = (**name == key) ? 0 :
- #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3
- (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 :
- #endif
- PyUnicode_Compare(**name, key);
- if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad;
- if (cmp == 0) {
- values[name-argnames] = value;
- break;
- }
- name++;
- }
- if (*name) continue;
- else {
- PyObject*** argname = argnames;
- while (argname != first_kw_arg) {
- int cmp = (**argname == key) ? 0 :
- #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3
- (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 :
- #endif
- PyUnicode_Compare(**argname, key);
- if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad;
- if (cmp == 0) goto arg_passed_twice;
- argname++;
- }
- }
- } else
- goto invalid_keyword_type;
- if (kwds2) {
- if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad;
- } else {
- goto invalid_keyword;
- }
- }
- return 0;
-arg_passed_twice:
- __Pyx_RaiseDoubleKeywordsError(function_name, key);
- goto bad;
-invalid_keyword_type:
- PyErr_Format(PyExc_TypeError,
- "%.200s() keywords must be strings", function_name);
- goto bad;
-invalid_keyword:
- PyErr_Format(PyExc_TypeError,
- #if PY_MAJOR_VERSION < 3
- "%.200s() got an unexpected keyword argument '%.200s'",
- function_name, PyString_AsString(key));
- #else
- "%s() got an unexpected keyword argument '%U'",
- function_name, key);
- #endif
-bad:
- return -1;
-}
-
-/* None */
-static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname) {
- PyErr_Format(PyExc_UnboundLocalError, "local variable '%s' referenced before assignment", varname);
-}
-
-/* ArgTypeTest */
-static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact)
-{
- if (unlikely(!type)) {
- PyErr_SetString(PyExc_SystemError, "Missing type object");
- return 0;
- }
- else if (exact) {
- #if PY_MAJOR_VERSION == 2
- if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1;
- #endif
- }
- else {
- if (likely(__Pyx_TypeCheck(obj, type))) return 1;
- }
- PyErr_Format(PyExc_TypeError,
- "Argument '%.200s' has incorrect type (expected %.200s, got %.200s)",
- name, type->tp_name, Py_TYPE(obj)->tp_name);
- return 0;
-}
-
-/* PyObjectCall */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) {
- PyObject *result;
- ternaryfunc call = func->ob_type->tp_call;
- if (unlikely(!call))
- return PyObject_Call(func, arg, kw);
- if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object")))
- return NULL;
- result = (*call)(func, arg, kw);
- Py_LeaveRecursiveCall();
- if (unlikely(!result) && unlikely(!PyErr_Occurred())) {
- PyErr_SetString(
- PyExc_SystemError,
- "NULL result without error in PyObject_Call");
- }
- return result;
-}
-#endif
-
-/* PyErrFetchRestore */
-#if CYTHON_FAST_THREAD_STATE
-static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {
- PyObject *tmp_type, *tmp_value, *tmp_tb;
- tmp_type = tstate->curexc_type;
- tmp_value = tstate->curexc_value;
- tmp_tb = tstate->curexc_traceback;
- tstate->curexc_type = type;
- tstate->curexc_value = value;
- tstate->curexc_traceback = tb;
- Py_XDECREF(tmp_type);
- Py_XDECREF(tmp_value);
- Py_XDECREF(tmp_tb);
-}
-static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {
- *type = tstate->curexc_type;
- *value = tstate->curexc_value;
- *tb = tstate->curexc_traceback;
- tstate->curexc_type = 0;
- tstate->curexc_value = 0;
- tstate->curexc_traceback = 0;
-}
-#endif
-
-/* RaiseException */
-#if PY_MAJOR_VERSION < 3
-static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb,
- CYTHON_UNUSED PyObject *cause) {
- __Pyx_PyThreadState_declare
- Py_XINCREF(type);
- if (!value || value == Py_None)
- value = NULL;
- else
- Py_INCREF(value);
- if (!tb || tb == Py_None)
- tb = NULL;
- else {
- Py_INCREF(tb);
- if (!PyTraceBack_Check(tb)) {
- PyErr_SetString(PyExc_TypeError,
- "raise: arg 3 must be a traceback or None");
- goto raise_error;
- }
- }
- if (PyType_Check(type)) {
-#if CYTHON_COMPILING_IN_PYPY
- if (!value) {
- Py_INCREF(Py_None);
- value = Py_None;
- }
-#endif
- PyErr_NormalizeException(&type, &value, &tb);
- } else {
- if (value) {
- PyErr_SetString(PyExc_TypeError,
- "instance exception may not have a separate value");
- goto raise_error;
- }
- value = type;
- type = (PyObject*) Py_TYPE(type);
- Py_INCREF(type);
- if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) {
- PyErr_SetString(PyExc_TypeError,
- "raise: exception class must be a subclass of BaseException");
- goto raise_error;
- }
- }
- __Pyx_PyThreadState_assign
- __Pyx_ErrRestore(type, value, tb);
- return;
-raise_error:
- Py_XDECREF(value);
- Py_XDECREF(type);
- Py_XDECREF(tb);
- return;
-}
-#else
-static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) {
- PyObject* owned_instance = NULL;
- if (tb == Py_None) {
- tb = 0;
- } else if (tb && !PyTraceBack_Check(tb)) {
- PyErr_SetString(PyExc_TypeError,
- "raise: arg 3 must be a traceback or None");
- goto bad;
- }
- if (value == Py_None)
- value = 0;
- if (PyExceptionInstance_Check(type)) {
- if (value) {
- PyErr_SetString(PyExc_TypeError,
- "instance exception may not have a separate value");
- goto bad;
- }
- value = type;
- type = (PyObject*) Py_TYPE(value);
- } else if (PyExceptionClass_Check(type)) {
- PyObject *instance_class = NULL;
- if (value && PyExceptionInstance_Check(value)) {
- instance_class = (PyObject*) Py_TYPE(value);
- if (instance_class != type) {
- int is_subclass = PyObject_IsSubclass(instance_class, type);
- if (!is_subclass) {
- instance_class = NULL;
- } else if (unlikely(is_subclass == -1)) {
- goto bad;
- } else {
- type = instance_class;
- }
- }
- }
- if (!instance_class) {
- PyObject *args;
- if (!value)
- args = PyTuple_New(0);
- else if (PyTuple_Check(value)) {
- Py_INCREF(value);
- args = value;
- } else
- args = PyTuple_Pack(1, value);
- if (!args)
- goto bad;
- owned_instance = PyObject_Call(type, args, NULL);
- Py_DECREF(args);
- if (!owned_instance)
- goto bad;
- value = owned_instance;
- if (!PyExceptionInstance_Check(value)) {
- PyErr_Format(PyExc_TypeError,
- "calling %R should have returned an instance of "
- "BaseException, not %R",
- type, Py_TYPE(value));
- goto bad;
- }
- }
- } else {
- PyErr_SetString(PyExc_TypeError,
- "raise: exception class must be a subclass of BaseException");
- goto bad;
- }
- if (cause) {
- PyObject *fixed_cause;
- if (cause == Py_None) {
- fixed_cause = NULL;
- } else if (PyExceptionClass_Check(cause)) {
- fixed_cause = PyObject_CallObject(cause, NULL);
- if (fixed_cause == NULL)
- goto bad;
- } else if (PyExceptionInstance_Check(cause)) {
- fixed_cause = cause;
- Py_INCREF(fixed_cause);
- } else {
- PyErr_SetString(PyExc_TypeError,
- "exception causes must derive from "
- "BaseException");
- goto bad;
- }
- PyException_SetCause(value, fixed_cause);
- }
- PyErr_SetObject(type, value);
- if (tb) {
-#if CYTHON_COMPILING_IN_PYPY
- PyObject *tmp_type, *tmp_value, *tmp_tb;
- PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb);
- Py_INCREF(tb);
- PyErr_Restore(tmp_type, tmp_value, tb);
- Py_XDECREF(tmp_tb);
-#else
- PyThreadState *tstate = __Pyx_PyThreadState_Current;
- PyObject* tmp_tb = tstate->curexc_traceback;
- if (tb != tmp_tb) {
- Py_INCREF(tb);
- tstate->curexc_traceback = tb;
- Py_XDECREF(tmp_tb);
- }
-#endif
- }
-bad:
- Py_XDECREF(owned_instance);
- return;
-}
-#endif
-
-/* PyCFunctionFastCall */
-#if CYTHON_FAST_PYCCALL
-static CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) {
- PyCFunctionObject *func = (PyCFunctionObject*)func_obj;
- PyCFunction meth = PyCFunction_GET_FUNCTION(func);
- PyObject *self = PyCFunction_GET_SELF(func);
- int flags = PyCFunction_GET_FLAGS(func);
- assert(PyCFunction_Check(func));
- assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS)));
- assert(nargs >= 0);
- assert(nargs == 0 || args != NULL);
- /* _PyCFunction_FastCallDict() must not be called with an exception set,
- because it may clear it (directly or indirectly) and so the
- caller loses its exception */
- assert(!PyErr_Occurred());
- if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) {
- return (*((__Pyx_PyCFunctionFastWithKeywords)(void*)meth)) (self, args, nargs, NULL);
- } else {
- return (*((__Pyx_PyCFunctionFast)(void*)meth)) (self, args, nargs);
- }
-}
-#endif
-
-/* PyFunctionFastCall */
-#if CYTHON_FAST_PYCALL
-static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na,
- PyObject *globals) {
- PyFrameObject *f;
- PyThreadState *tstate = __Pyx_PyThreadState_Current;
- PyObject **fastlocals;
- Py_ssize_t i;
- PyObject *result;
- assert(globals != NULL);
- /* XXX Perhaps we should create a specialized
- PyFrame_New() that doesn't take locals, but does
- take builtins without sanity checking them.
- */
- assert(tstate != NULL);
- f = PyFrame_New(tstate, co, globals, NULL);
- if (f == NULL) {
- return NULL;
- }
- fastlocals = __Pyx_PyFrame_GetLocalsplus(f);
- for (i = 0; i < na; i++) {
- Py_INCREF(*args);
- fastlocals[i] = *args++;
- }
- result = PyEval_EvalFrameEx(f,0);
- ++tstate->recursion_depth;
- Py_DECREF(f);
- --tstate->recursion_depth;
- return result;
-}
-#if 1 || PY_VERSION_HEX < 0x030600B1
-static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) {
- PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func);
- PyObject *globals = PyFunction_GET_GLOBALS(func);
- PyObject *argdefs = PyFunction_GET_DEFAULTS(func);
- PyObject *closure;
-#if PY_MAJOR_VERSION >= 3
- PyObject *kwdefs;
-#endif
- PyObject *kwtuple, **k;
- PyObject **d;
- Py_ssize_t nd;
- Py_ssize_t nk;
- PyObject *result;
- assert(kwargs == NULL || PyDict_Check(kwargs));
- nk = kwargs ? PyDict_Size(kwargs) : 0;
- if (Py_EnterRecursiveCall((char*)" while calling a Python object")) {
- return NULL;
- }
- if (
-#if PY_MAJOR_VERSION >= 3
- co->co_kwonlyargcount == 0 &&
-#endif
- likely(kwargs == NULL || nk == 0) &&
- co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) {
- if (argdefs == NULL && co->co_argcount == nargs) {
- result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals);
- goto done;
- }
- else if (nargs == 0 && argdefs != NULL
- && co->co_argcount == Py_SIZE(argdefs)) {
- /* function called with no arguments, but all parameters have
- a default value: use default values as arguments .*/
- args = &PyTuple_GET_ITEM(argdefs, 0);
- result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals);
- goto done;
- }
- }
- if (kwargs != NULL) {
- Py_ssize_t pos, i;
- kwtuple = PyTuple_New(2 * nk);
- if (kwtuple == NULL) {
- result = NULL;
- goto done;
- }
- k = &PyTuple_GET_ITEM(kwtuple, 0);
- pos = i = 0;
- while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) {
- Py_INCREF(k[i]);
- Py_INCREF(k[i+1]);
- i += 2;
- }
- nk = i / 2;
- }
- else {
- kwtuple = NULL;
- k = NULL;
- }
- closure = PyFunction_GET_CLOSURE(func);
-#if PY_MAJOR_VERSION >= 3
- kwdefs = PyFunction_GET_KW_DEFAULTS(func);
-#endif
- if (argdefs != NULL) {
- d = &PyTuple_GET_ITEM(argdefs, 0);
- nd = Py_SIZE(argdefs);
- }
- else {
- d = NULL;
- nd = 0;
- }
-#if PY_MAJOR_VERSION >= 3
- result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL,
- args, (int)nargs,
- k, (int)nk,
- d, (int)nd, kwdefs, closure);
-#else
- result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL,
- args, (int)nargs,
- k, (int)nk,
- d, (int)nd, closure);
-#endif
- Py_XDECREF(kwtuple);
-done:
- Py_LeaveRecursiveCall();
- return result;
-}
-#endif
-#endif
-
-/* PyObjectCall2Args */
-static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) {
- PyObject *args, *result = NULL;
- #if CYTHON_FAST_PYCALL
- if (PyFunction_Check(function)) {
- PyObject *args[2] = {arg1, arg2};
- return __Pyx_PyFunction_FastCall(function, args, 2);
- }
- #endif
- #if CYTHON_FAST_PYCCALL
- if (__Pyx_PyFastCFunction_Check(function)) {
- PyObject *args[2] = {arg1, arg2};
- return __Pyx_PyCFunction_FastCall(function, args, 2);
- }
- #endif
- args = PyTuple_New(2);
- if (unlikely(!args)) goto done;
- Py_INCREF(arg1);
- PyTuple_SET_ITEM(args, 0, arg1);
- Py_INCREF(arg2);
- PyTuple_SET_ITEM(args, 1, arg2);
- Py_INCREF(function);
- result = __Pyx_PyObject_Call(function, args, NULL);
- Py_DECREF(args);
- Py_DECREF(function);
-done:
- return result;
-}
-
-/* PyObjectCallMethO */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) {
- PyObject *self, *result;
- PyCFunction cfunc;
- cfunc = PyCFunction_GET_FUNCTION(func);
- self = PyCFunction_GET_SELF(func);
- if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object")))
- return NULL;
- result = cfunc(self, arg);
- Py_LeaveRecursiveCall();
- if (unlikely(!result) && unlikely(!PyErr_Occurred())) {
- PyErr_SetString(
- PyExc_SystemError,
- "NULL result without error in PyObject_Call");
- }
- return result;
-}
-#endif
-
-/* PyObjectCallOneArg */
-#if CYTHON_COMPILING_IN_CPYTHON
-static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) {
- PyObject *result;
- PyObject *args = PyTuple_New(1);
- if (unlikely(!args)) return NULL;
- Py_INCREF(arg);
- PyTuple_SET_ITEM(args, 0, arg);
- result = __Pyx_PyObject_Call(func, args, NULL);
- Py_DECREF(args);
- return result;
-}
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) {
-#if CYTHON_FAST_PYCALL
- if (PyFunction_Check(func)) {
- return __Pyx_PyFunction_FastCall(func, &arg, 1);
- }
-#endif
- if (likely(PyCFunction_Check(func))) {
- if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) {
- return __Pyx_PyObject_CallMethO(func, arg);
-#if CYTHON_FAST_PYCCALL
- } else if (PyCFunction_GET_FLAGS(func) & METH_FASTCALL) {
- return __Pyx_PyCFunction_FastCall(func, &arg, 1);
-#endif
- }
- }
- return __Pyx__PyObject_CallOneArg(func, arg);
-}
-#else
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) {
- PyObject *result;
- PyObject *args = PyTuple_Pack(1, arg);
- if (unlikely(!args)) return NULL;
- result = __Pyx_PyObject_Call(func, args, NULL);
- Py_DECREF(args);
- return result;
-}
-#endif
-
-/* BytesEquals */
-static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) {
-#if CYTHON_COMPILING_IN_PYPY
- return PyObject_RichCompareBool(s1, s2, equals);
-#else
- if (s1 == s2) {
- return (equals == Py_EQ);
- } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) {
- const char *ps1, *ps2;
- Py_ssize_t length = PyBytes_GET_SIZE(s1);
- if (length != PyBytes_GET_SIZE(s2))
- return (equals == Py_NE);
- ps1 = PyBytes_AS_STRING(s1);
- ps2 = PyBytes_AS_STRING(s2);
- if (ps1[0] != ps2[0]) {
- return (equals == Py_NE);
- } else if (length == 1) {
- return (equals == Py_EQ);
- } else {
- int result;
-#if CYTHON_USE_UNICODE_INTERNALS
- Py_hash_t hash1, hash2;
- hash1 = ((PyBytesObject*)s1)->ob_shash;
- hash2 = ((PyBytesObject*)s2)->ob_shash;
- if (hash1 != hash2 && hash1 != -1 && hash2 != -1) {
- return (equals == Py_NE);
- }
-#endif
- result = memcmp(ps1, ps2, (size_t)length);
- return (equals == Py_EQ) ? (result == 0) : (result != 0);
- }
- } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) {
- return (equals == Py_NE);
- } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) {
- return (equals == Py_NE);
- } else {
- int result;
- PyObject* py_result = PyObject_RichCompare(s1, s2, equals);
- if (!py_result)
- return -1;
- result = __Pyx_PyObject_IsTrue(py_result);
- Py_DECREF(py_result);
- return result;
- }
-#endif
-}
-
-/* UnicodeEquals */
-static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) {
-#if CYTHON_COMPILING_IN_PYPY
- return PyObject_RichCompareBool(s1, s2, equals);
-#else
-#if PY_MAJOR_VERSION < 3
- PyObject* owned_ref = NULL;
-#endif
- int s1_is_unicode, s2_is_unicode;
- if (s1 == s2) {
- goto return_eq;
- }
- s1_is_unicode = PyUnicode_CheckExact(s1);
- s2_is_unicode = PyUnicode_CheckExact(s2);
-#if PY_MAJOR_VERSION < 3
- if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) {
- owned_ref = PyUnicode_FromObject(s2);
- if (unlikely(!owned_ref))
- return -1;
- s2 = owned_ref;
- s2_is_unicode = 1;
- } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) {
- owned_ref = PyUnicode_FromObject(s1);
- if (unlikely(!owned_ref))
- return -1;
- s1 = owned_ref;
- s1_is_unicode = 1;
- } else if (((!s2_is_unicode) & (!s1_is_unicode))) {
- return __Pyx_PyBytes_Equals(s1, s2, equals);
- }
-#endif
- if (s1_is_unicode & s2_is_unicode) {
- Py_ssize_t length;
- int kind;
- void *data1, *data2;
- if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0))
- return -1;
- length = __Pyx_PyUnicode_GET_LENGTH(s1);
- if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) {
- goto return_ne;
- }
-#if CYTHON_USE_UNICODE_INTERNALS
- {
- Py_hash_t hash1, hash2;
- #if CYTHON_PEP393_ENABLED
- hash1 = ((PyASCIIObject*)s1)->hash;
- hash2 = ((PyASCIIObject*)s2)->hash;
- #else
- hash1 = ((PyUnicodeObject*)s1)->hash;
- hash2 = ((PyUnicodeObject*)s2)->hash;
- #endif
- if (hash1 != hash2 && hash1 != -1 && hash2 != -1) {
- goto return_ne;
- }
- }
-#endif
- kind = __Pyx_PyUnicode_KIND(s1);
- if (kind != __Pyx_PyUnicode_KIND(s2)) {
- goto return_ne;
- }
- data1 = __Pyx_PyUnicode_DATA(s1);
- data2 = __Pyx_PyUnicode_DATA(s2);
- if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) {
- goto return_ne;
- } else if (length == 1) {
- goto return_eq;
- } else {
- int result = memcmp(data1, data2, (size_t)(length * kind));
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(owned_ref);
- #endif
- return (equals == Py_EQ) ? (result == 0) : (result != 0);
- }
- } else if ((s1 == Py_None) & s2_is_unicode) {
- goto return_ne;
- } else if ((s2 == Py_None) & s1_is_unicode) {
- goto return_ne;
- } else {
- int result;
- PyObject* py_result = PyObject_RichCompare(s1, s2, equals);
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(owned_ref);
- #endif
- if (!py_result)
- return -1;
- result = __Pyx_PyObject_IsTrue(py_result);
- Py_DECREF(py_result);
- return result;
- }
-return_eq:
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(owned_ref);
- #endif
- return (equals == Py_EQ);
-return_ne:
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(owned_ref);
- #endif
- return (equals == Py_NE);
-#endif
-}
-
-/* None */
-static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t a, Py_ssize_t b) {
- Py_ssize_t q = a / b;
- Py_ssize_t r = a - q*b;
- q -= ((r != 0) & ((r ^ b) < 0));
- return q;
-}
-
-/* GetAttr */
-static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) {
-#if CYTHON_USE_TYPE_SLOTS
-#if PY_MAJOR_VERSION >= 3
- if (likely(PyUnicode_Check(n)))
-#else
- if (likely(PyString_Check(n)))
-#endif
- return __Pyx_PyObject_GetAttrStr(o, n);
-#endif
- return PyObject_GetAttr(o, n);
-}
-
-/* GetItemInt */
-static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) {
- PyObject *r;
- if (!j) return NULL;
- r = PyObject_GetItem(o, j);
- Py_DECREF(j);
- return r;
-}
-static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i,
- CYTHON_NCP_UNUSED int wraparound,
- CYTHON_NCP_UNUSED int boundscheck) {
-#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- Py_ssize_t wrapped_i = i;
- if (wraparound & unlikely(i < 0)) {
- wrapped_i += PyList_GET_SIZE(o);
- }
- if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) {
- PyObject *r = PyList_GET_ITEM(o, wrapped_i);
- Py_INCREF(r);
- return r;
- }
- return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i));
-#else
- return PySequence_GetItem(o, i);
-#endif
-}
-static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i,
- CYTHON_NCP_UNUSED int wraparound,
- CYTHON_NCP_UNUSED int boundscheck) {
-#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- Py_ssize_t wrapped_i = i;
- if (wraparound & unlikely(i < 0)) {
- wrapped_i += PyTuple_GET_SIZE(o);
- }
- if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) {
- PyObject *r = PyTuple_GET_ITEM(o, wrapped_i);
- Py_INCREF(r);
- return r;
- }
- return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i));
-#else
- return PySequence_GetItem(o, i);
-#endif
-}
-static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list,
- CYTHON_NCP_UNUSED int wraparound,
- CYTHON_NCP_UNUSED int boundscheck) {
-#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS
- if (is_list || PyList_CheckExact(o)) {
- Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o);
- if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) {
- PyObject *r = PyList_GET_ITEM(o, n);
- Py_INCREF(r);
- return r;
- }
- }
- else if (PyTuple_CheckExact(o)) {
- Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyTuple_GET_SIZE(o);
- if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) {
- PyObject *r = PyTuple_GET_ITEM(o, n);
- Py_INCREF(r);
- return r;
- }
- } else {
- PySequenceMethods *m = Py_TYPE(o)->tp_as_sequence;
- if (likely(m && m->sq_item)) {
- if (wraparound && unlikely(i < 0) && likely(m->sq_length)) {
- Py_ssize_t l = m->sq_length(o);
- if (likely(l >= 0)) {
- i += l;
- } else {
- if (!PyErr_ExceptionMatches(PyExc_OverflowError))
- return NULL;
- PyErr_Clear();
- }
- }
- return m->sq_item(o, i);
- }
- }
-#else
- if (is_list || PySequence_Check(o)) {
- return PySequence_GetItem(o, i);
- }
-#endif
- return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i));
-}
-
-/* ObjectGetItem */
-#if CYTHON_USE_TYPE_SLOTS
-static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject* index) {
- PyObject *runerr;
- Py_ssize_t key_value;
- PySequenceMethods *m = Py_TYPE(obj)->tp_as_sequence;
- if (unlikely(!(m && m->sq_item))) {
- PyErr_Format(PyExc_TypeError, "'%.200s' object is not subscriptable", Py_TYPE(obj)->tp_name);
- return NULL;
- }
- key_value = __Pyx_PyIndex_AsSsize_t(index);
- if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) {
- return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1);
- }
- if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) {
- PyErr_Clear();
- PyErr_Format(PyExc_IndexError, "cannot fit '%.200s' into an index-sized integer", Py_TYPE(index)->tp_name);
- }
- return NULL;
-}
-static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key) {
- PyMappingMethods *m = Py_TYPE(obj)->tp_as_mapping;
- if (likely(m && m->mp_subscript)) {
- return m->mp_subscript(obj, key);
- }
- return __Pyx_PyObject_GetIndex(obj, key);
-}
-#endif
-
-/* decode_c_string */
-static CYTHON_INLINE PyObject* __Pyx_decode_c_string(
- const char* cstring, Py_ssize_t start, Py_ssize_t stop,
- const char* encoding, const char* errors,
- PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) {
- Py_ssize_t length;
- if (unlikely((start < 0) | (stop < 0))) {
- size_t slen = strlen(cstring);
- if (unlikely(slen > (size_t) PY_SSIZE_T_MAX)) {
- PyErr_SetString(PyExc_OverflowError,
- "c-string too long to convert to Python");
- return NULL;
- }
- length = (Py_ssize_t) slen;
- if (start < 0) {
- start += length;
- if (start < 0)
- start = 0;
- }
- if (stop < 0)
- stop += length;
- }
- if (unlikely(stop <= start))
- return __Pyx_NewRef(__pyx_empty_unicode);
- length = stop - start;
- cstring += start;
- if (decode_func) {
- return decode_func(cstring, length, errors);
- } else {
- return PyUnicode_Decode(cstring, length, encoding, errors);
- }
-}
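-/* decode_c_string mirrors Python slice semantics for plain C strings:
- when start or stop is negative, the string is measured once with
- strlen(), negative offsets have the length added, start is clamped at
- zero, empty slices return the shared empty unicode object, and only
- then is the (optionally specialised) decode function invoked. */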
-
-/* PyErrExceptionMatches */
-#if CYTHON_FAST_THREAD_STATE
-static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {
- Py_ssize_t i, n;
- n = PyTuple_GET_SIZE(tuple);
-#if PY_MAJOR_VERSION >= 3
- for (i=0; i<n; i++) {
- if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;
- }
-#endif
- for (i=0; i<n; i++) {
- if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1;
- }
- return 0;
-}
-static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) {
- PyObject *exc_type = tstate->curexc_type;
- if (exc_type == err) return 1;
- if (unlikely(!exc_type)) return 0;
- if (unlikely(PyTuple_Check(err)))
- return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err);
- return __Pyx_PyErr_GivenExceptionMatches(exc_type, err);
-}
-#endif
-
-/* GetAttr3 */
-static PyObject *__Pyx_GetAttr3Default(PyObject *d) {
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- if (unlikely(!__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError)))
- return NULL;
- __Pyx_PyErr_Clear();
- Py_INCREF(d);
- return d;
-}
-static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, PyObject *d) {
- PyObject *r = __Pyx_GetAttr(o, n);
- return (likely(r)) ? r : __Pyx_GetAttr3Default(d);
-}
-
-/* PyDictVersioning */
-#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS
-static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) {
- PyObject *dict = Py_TYPE(obj)->tp_dict;
- return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0;
-}
-static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) {
- PyObject **dictptr = NULL;
- Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset;
- if (offset) {
-#if CYTHON_COMPILING_IN_CPYTHON
- dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj);
-#else
- dictptr = _PyObject_GetDictPtr(obj);
-#endif
- }
- return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0;
-}
-static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) {
- PyObject *dict = Py_TYPE(obj)->tp_dict;
- if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict)))
- return 0;
- return obj_dict_version == __Pyx_get_object_dict_version(obj);
-}
-#endif
-
-/* GetModuleGlobalName */
-#if CYTHON_USE_DICT_VERSIONS
-static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value)
-#else
-static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name)
-#endif
-{
- PyObject *result;
-#if !CYTHON_AVOID_BORROWED_REFS
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1
- result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash);
- __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version)
- if (likely(result)) {
- return __Pyx_NewRef(result);
- } else if (unlikely(PyErr_Occurred())) {
- return NULL;
- }
-#else
- result = PyDict_GetItem(__pyx_d, name);
- __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version)
- if (likely(result)) {
- return __Pyx_NewRef(result);
- }
-#endif
-#else
- result = PyObject_GetItem(__pyx_d, name);
- __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version)
- if (likely(result)) {
- return __Pyx_NewRef(result);
- }
- PyErr_Clear();
-#endif
- return __Pyx_GetBuiltinName(name);
-}
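-/* Module-global lookup with dict-version caching: the module dict is
- probed first (using the known-hash fast path on CPython 3.5+), and
- __PYX_UPDATE_DICT_CACHE memoises the result together with the dict's
- version tag so unchanged dicts can skip the lookup next time; a miss
- falls through to __Pyx_GetBuiltinName. */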
-
-/* RaiseTooManyValuesToUnpack */
-static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) {
- PyErr_Format(PyExc_ValueError,
- "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected);
-}
-
-/* RaiseNeedMoreValuesToUnpack */
-static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) {
- PyErr_Format(PyExc_ValueError,
- "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack",
- index, (index == 1) ? "" : "s");
-}
-
-/* RaiseNoneIterError */
-static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) {
- PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable");
-}
-
-/* ExtTypeTest */
-static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) {
- if (unlikely(!type)) {
- PyErr_SetString(PyExc_SystemError, "Missing type object");
- return 0;
- }
- if (likely(__Pyx_TypeCheck(obj, type)))
- return 1;
- PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s",
- Py_TYPE(obj)->tp_name, type->tp_name);
- return 0;
-}
-
-/* GetTopmostException */
-#if CYTHON_USE_EXC_INFO_STACK
-static _PyErr_StackItem *
-__Pyx_PyErr_GetTopmostException(PyThreadState *tstate)
-{
- _PyErr_StackItem *exc_info = tstate->exc_info;
- while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) &&
- exc_info->previous_item != NULL)
- {
- exc_info = exc_info->previous_item;
- }
- return exc_info;
-}
-#endif
-
-/* SaveResetException */
-#if CYTHON_FAST_THREAD_STATE
-static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {
- #if CYTHON_USE_EXC_INFO_STACK
- _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate);
- *type = exc_info->exc_type;
- *value = exc_info->exc_value;
- *tb = exc_info->exc_traceback;
- #else
- *type = tstate->exc_type;
- *value = tstate->exc_value;
- *tb = tstate->exc_traceback;
- #endif
- Py_XINCREF(*type);
- Py_XINCREF(*value);
- Py_XINCREF(*tb);
-}
-static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {
- PyObject *tmp_type, *tmp_value, *tmp_tb;
- #if CYTHON_USE_EXC_INFO_STACK
- _PyErr_StackItem *exc_info = tstate->exc_info;
- tmp_type = exc_info->exc_type;
- tmp_value = exc_info->exc_value;
- tmp_tb = exc_info->exc_traceback;
- exc_info->exc_type = type;
- exc_info->exc_value = value;
- exc_info->exc_traceback = tb;
- #else
- tmp_type = tstate->exc_type;
- tmp_value = tstate->exc_value;
- tmp_tb = tstate->exc_traceback;
- tstate->exc_type = type;
- tstate->exc_value = value;
- tstate->exc_traceback = tb;
- #endif
- Py_XDECREF(tmp_type);
- Py_XDECREF(tmp_value);
- Py_XDECREF(tmp_tb);
-}
-#endif
-
-/* GetException */
-#if CYTHON_FAST_THREAD_STATE
-static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb)
-#else
-static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb)
-#endif
-{
- PyObject *local_type, *local_value, *local_tb;
-#if CYTHON_FAST_THREAD_STATE
- PyObject *tmp_type, *tmp_value, *tmp_tb;
- local_type = tstate->curexc_type;
- local_value = tstate->curexc_value;
- local_tb = tstate->curexc_traceback;
- tstate->curexc_type = 0;
- tstate->curexc_value = 0;
- tstate->curexc_traceback = 0;
-#else
- PyErr_Fetch(&local_type, &local_value, &local_tb);
-#endif
- PyErr_NormalizeException(&local_type, &local_value, &local_tb);
-#if CYTHON_FAST_THREAD_STATE
- if (unlikely(tstate->curexc_type))
-#else
- if (unlikely(PyErr_Occurred()))
-#endif
- goto bad;
- #if PY_MAJOR_VERSION >= 3
- if (local_tb) {
- if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0))
- goto bad;
- }
- #endif
- Py_XINCREF(local_tb);
- Py_XINCREF(local_type);
- Py_XINCREF(local_value);
- *type = local_type;
- *value = local_value;
- *tb = local_tb;
-#if CYTHON_FAST_THREAD_STATE
- #if CYTHON_USE_EXC_INFO_STACK
- {
- _PyErr_StackItem *exc_info = tstate->exc_info;
- tmp_type = exc_info->exc_type;
- tmp_value = exc_info->exc_value;
- tmp_tb = exc_info->exc_traceback;
- exc_info->exc_type = local_type;
- exc_info->exc_value = local_value;
- exc_info->exc_traceback = local_tb;
- }
- #else
- tmp_type = tstate->exc_type;
- tmp_value = tstate->exc_value;
- tmp_tb = tstate->exc_traceback;
- tstate->exc_type = local_type;
- tstate->exc_value = local_value;
- tstate->exc_traceback = local_tb;
- #endif
- Py_XDECREF(tmp_type);
- Py_XDECREF(tmp_value);
- Py_XDECREF(tmp_tb);
-#else
- PyErr_SetExcInfo(local_type, local_value, local_tb);
-#endif
- return 0;
-bad:
- *type = 0;
- *value = 0;
- *tb = 0;
- Py_XDECREF(local_type);
- Py_XDECREF(local_value);
- Py_XDECREF(local_tb);
- return -1;
-}
-
-/* SwapException */
-#if CYTHON_FAST_THREAD_STATE
-static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {
- PyObject *tmp_type, *tmp_value, *tmp_tb;
- #if CYTHON_USE_EXC_INFO_STACK
- _PyErr_StackItem *exc_info = tstate->exc_info;
- tmp_type = exc_info->exc_type;
- tmp_value = exc_info->exc_value;
- tmp_tb = exc_info->exc_traceback;
- exc_info->exc_type = *type;
- exc_info->exc_value = *value;
- exc_info->exc_traceback = *tb;
- #else
- tmp_type = tstate->exc_type;
- tmp_value = tstate->exc_value;
- tmp_tb = tstate->exc_traceback;
- tstate->exc_type = *type;
- tstate->exc_value = *value;
- tstate->exc_traceback = *tb;
- #endif
- *type = tmp_type;
- *value = tmp_value;
- *tb = tmp_tb;
-}
-#else
-static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) {
- PyObject *tmp_type, *tmp_value, *tmp_tb;
- PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb);
- PyErr_SetExcInfo(*type, *value, *tb);
- *type = tmp_type;
- *value = tmp_value;
- *tb = tmp_tb;
-}
-#endif
-
-/* Import */
-static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) {
- PyObject *empty_list = 0;
- PyObject *module = 0;
- PyObject *global_dict = 0;
- PyObject *empty_dict = 0;
- PyObject *list;
- #if PY_MAJOR_VERSION < 3
- PyObject *py_import;
- py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import);
- if (!py_import)
- goto bad;
- #endif
- if (from_list)
- list = from_list;
- else {
- empty_list = PyList_New(0);
- if (!empty_list)
- goto bad;
- list = empty_list;
- }
- global_dict = PyModule_GetDict(__pyx_m);
- if (!global_dict)
- goto bad;
- empty_dict = PyDict_New();
- if (!empty_dict)
- goto bad;
- {
- #if PY_MAJOR_VERSION >= 3
- if (level == -1) {
- if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) {
- module = PyImport_ImportModuleLevelObject(
- name, global_dict, empty_dict, list, 1);
- if (!module) {
- if (!PyErr_ExceptionMatches(PyExc_ImportError))
- goto bad;
- PyErr_Clear();
- }
- }
- level = 0;
- }
- #endif
- if (!module) {
- #if PY_MAJOR_VERSION < 3
- PyObject *py_level = PyInt_FromLong(level);
- if (!py_level)
- goto bad;
- module = PyObject_CallFunctionObjArgs(py_import,
- name, global_dict, empty_dict, list, py_level, (PyObject *)NULL);
- Py_DECREF(py_level);
- #else
- module = PyImport_ImportModuleLevelObject(
- name, global_dict, empty_dict, list, level);
- #endif
- }
- }
-bad:
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(py_import);
- #endif
- Py_XDECREF(empty_list);
- Py_XDECREF(empty_dict);
- return module;
-}
-
-/* FastTypeChecks */
-#if CYTHON_COMPILING_IN_CPYTHON
-static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) {
- while (a) {
- a = a->tp_base;
- if (a == b)
- return 1;
- }
- return b == &PyBaseObject_Type;
-}
-static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) {
- PyObject *mro;
- if (a == b) return 1;
- mro = a->tp_mro;
- if (likely(mro)) {
- Py_ssize_t i, n;
- n = PyTuple_GET_SIZE(mro);
- for (i = 0; i < n; i++) {
- if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b)
- return 1;
- }
- return 0;
- }
- return __Pyx_InBases(a, b);
-}
-#if PY_MAJOR_VERSION == 2
-static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) {
- PyObject *exception, *value, *tb;
- int res;
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- __Pyx_ErrFetch(&exception, &value, &tb);
- res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0;
- if (unlikely(res == -1)) {
- PyErr_WriteUnraisable(err);
- res = 0;
- }
- if (!res) {
- res = PyObject_IsSubclass(err, exc_type2);
- if (unlikely(res == -1)) {
- PyErr_WriteUnraisable(err);
- res = 0;
- }
- }
- __Pyx_ErrRestore(exception, value, tb);
- return res;
-}
-#else
-static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) {
- int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0;
- if (!res) {
- res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2);
- }
- return res;
-}
-#endif
-static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {
- Py_ssize_t i, n;
- assert(PyExceptionClass_Check(exc_type));
- n = PyTuple_GET_SIZE(tuple);
-#if PY_MAJOR_VERSION >= 3
- for (i=0; i<n; i++) {
- if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;
- }
-#endif
- for (i=0; i<n; i++) {
- PyObject *t = PyTuple_GET_ITEM(tuple, i);
- #if PY_MAJOR_VERSION < 3
- if (likely(exc_type == t)) return 1;
- #endif
- if (likely(PyExceptionClass_Check(t))) {
- if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1;
- } else {
- }
- }
- return 0;
-}
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) {
- if (likely(err == exc_type)) return 1;
- if (likely(PyExceptionClass_Check(err))) {
- if (likely(PyExceptionClass_Check(exc_type))) {
- return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type);
- } else if (likely(PyTuple_Check(exc_type))) {
- return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type);
- } else {
- }
- }
- return PyObject_IsSubclass(err, exc_type);
-}
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) {
- assert(PyExceptionClass_Check(exc_type1));
- assert(PyExceptionClass_Check(exc_type2));
- if (likely(err == exc_type1 || err == exc_type2)) return 1;
- if (likely(PyExceptionClass_Check(err))) {
- return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2);
- }
- return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2));
-}
-#endif
-
-/* PyIntBinop */
-#if !CYTHON_COMPILING_IN_PYPY
-static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, int inplace, int zerodivision_check) {
- (void)inplace;
- (void)zerodivision_check;
- #if PY_MAJOR_VERSION < 3
- if (likely(PyInt_CheckExact(op1))) {
- const long b = intval;
- long x;
- long a = PyInt_AS_LONG(op1);
- x = (long)((unsigned long)a + b);
- if (likely((x^a) >= 0 || (x^b) >= 0))
- return PyInt_FromLong(x);
- return PyLong_Type.tp_as_number->nb_add(op1, op2);
- }
- #endif
- #if CYTHON_USE_PYLONG_INTERNALS
- if (likely(PyLong_CheckExact(op1))) {
- const long b = intval;
- long a, x;
-#ifdef HAVE_LONG_LONG
- const PY_LONG_LONG llb = intval;
- PY_LONG_LONG lla, llx;
-#endif
- const digit* digits = ((PyLongObject*)op1)->ob_digit;
- const Py_ssize_t size = Py_SIZE(op1);
- if (likely(__Pyx_sst_abs(size) <= 1)) {
- a = likely(size) ? digits[0] : 0;
- if (size == -1) a = -a;
- } else {
- switch (size) {
- case -2:
- if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {
- a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));
- break;
-#ifdef HAVE_LONG_LONG
- } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) {
- lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));
- goto long_long;
-#endif
- }
- CYTHON_FALLTHROUGH;
- case 2:
- if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {
- a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));
- break;
-#ifdef HAVE_LONG_LONG
- } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) {
- lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));
- goto long_long;
-#endif
- }
- CYTHON_FALLTHROUGH;
- case -3:
- if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {
- a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));
- break;
-#ifdef HAVE_LONG_LONG
- } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) {
- lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));
- goto long_long;
-#endif
- }
- CYTHON_FALLTHROUGH;
- case 3:
- if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {
- a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));
- break;
-#ifdef HAVE_LONG_LONG
- } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) {
- lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));
- goto long_long;
-#endif
- }
- CYTHON_FALLTHROUGH;
- case -4:
- if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {
- a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));
- break;
-#ifdef HAVE_LONG_LONG
- } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) {
- lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));
- goto long_long;
-#endif
- }
- CYTHON_FALLTHROUGH;
- case 4:
- if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {
- a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));
- break;
-#ifdef HAVE_LONG_LONG
- } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) {
- lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));
- goto long_long;
-#endif
- }
- CYTHON_FALLTHROUGH;
- default: return PyLong_Type.tp_as_number->nb_add(op1, op2);
- }
- }
- x = a + b;
- return PyLong_FromLong(x);
-#ifdef HAVE_LONG_LONG
- long_long:
- llx = lla + llb;
- return PyLong_FromLongLong(llx);
-#endif
-
-
- }
- #endif
- if (PyFloat_CheckExact(op1)) {
- const long b = intval;
- double a = PyFloat_AS_DOUBLE(op1);
- double result;
- PyFPE_START_PROTECT("add", return NULL)
- result = ((double)a) + (double)b;
- PyFPE_END_PROTECT(result)
- return PyFloat_FromDouble(result);
- }
- return (inplace ? PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2);
-}
-#endif
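-/* The helper above adds a compile-time constant to op1 while avoiding a
- generic PyNumber_Add call where it can: Py2 small ints use the
- (x^a) >= 0 || (x^b) >= 0 overflow test (the sum is safe if it kept the
- sign of either operand), PyLongs of up to four digits are unpacked into
- a native long or long long, floats are added as doubles, and anything
- else falls through to PyNumber_Add/PyNumber_InPlaceAdd. */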
-
-/* None */
-static CYTHON_INLINE long __Pyx_div_long(long a, long b) {
- long q = a / b;
- long r = a - q*b;
- q -= ((r != 0) & ((r ^ b) < 0));
- return q;
-}
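-/* __Pyx_div_long turns C's truncating division into Python's floor
- division: if the remainder is non-zero and its sign differs from the
- divisor's ((r ^ b) < 0), the truncated quotient is one too large and
- is decremented. Example: -7 / 2 truncates to -3 with remainder -1;
- the signs differ, so the helper returns -4, i.e. floor(-3.5). */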
-
-/* ImportFrom */
-static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) {
- PyObject* value = __Pyx_PyObject_GetAttrStr(module, name);
- if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) {
- PyErr_Format(PyExc_ImportError,
- #if PY_MAJOR_VERSION < 3
- "cannot import name %.230s", PyString_AS_STRING(name));
- #else
- "cannot import name %S", name);
- #endif
- }
- return value;
-}
-
-/* HasAttr */
-static CYTHON_INLINE int __Pyx_HasAttr(PyObject *o, PyObject *n) {
- PyObject *r;
- if (unlikely(!__Pyx_PyBaseString_Check(n))) {
- PyErr_SetString(PyExc_TypeError,
- "hasattr(): attribute name must be string");
- return -1;
- }
- r = __Pyx_GetAttr(o, n);
- if (unlikely(!r)) {
- PyErr_Clear();
- return 0;
- } else {
- Py_DECREF(r);
- return 1;
- }
-}
-
-/* PyObject_GenericGetAttrNoDict */
-#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
-static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) {
- PyErr_Format(PyExc_AttributeError,
-#if PY_MAJOR_VERSION >= 3
- "'%.50s' object has no attribute '%U'",
- tp->tp_name, attr_name);
-#else
- "'%.50s' object has no attribute '%.400s'",
- tp->tp_name, PyString_AS_STRING(attr_name));
-#endif
- return NULL;
-}
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) {
- PyObject *descr;
- PyTypeObject *tp = Py_TYPE(obj);
- if (unlikely(!PyString_Check(attr_name))) {
- return PyObject_GenericGetAttr(obj, attr_name);
- }
- assert(!tp->tp_dictoffset);
- descr = _PyType_Lookup(tp, attr_name);
- if (unlikely(!descr)) {
- return __Pyx_RaiseGenericGetAttributeError(tp, attr_name);
- }
- Py_INCREF(descr);
- #if PY_MAJOR_VERSION < 3
- if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS)))
- #endif
- {
- descrgetfunc f = Py_TYPE(descr)->tp_descr_get;
- if (unlikely(f)) {
- PyObject *res = f(descr, obj, (PyObject *)tp);
- Py_DECREF(descr);
- return res;
- }
- }
- return descr;
-}
-#endif
-
-/* PyObject_GenericGetAttr */
-#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
-static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name) {
- if (unlikely(Py_TYPE(obj)->tp_dictoffset)) {
- return PyObject_GenericGetAttr(obj, attr_name);
- }
- return __Pyx_PyObject_GenericGetAttrNoDict(obj, attr_name);
-}
-#endif
-
-/* SetVTable */
-static int __Pyx_SetVtable(PyObject *dict, void *vtable) {
-#if PY_VERSION_HEX >= 0x02070000
- PyObject *ob = PyCapsule_New(vtable, 0, 0);
-#else
- PyObject *ob = PyCObject_FromVoidPtr(vtable, 0);
-#endif
- if (!ob)
- goto bad;
- if (PyDict_SetItem(dict, __pyx_n_s_pyx_vtable, ob) < 0)
- goto bad;
- Py_DECREF(ob);
- return 0;
-bad:
- Py_XDECREF(ob);
- return -1;
-}
-
-/* PyObjectGetAttrStrNoError */
-static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) {
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- if (likely(__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError)))
- __Pyx_PyErr_Clear();
-}
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name) {
- PyObject *result;
-#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_TYPE_SLOTS && PY_VERSION_HEX >= 0x030700B1
- PyTypeObject* tp = Py_TYPE(obj);
- if (likely(tp->tp_getattro == PyObject_GenericGetAttr)) {
- return _PyObject_GenericGetAttrWithDict(obj, attr_name, NULL, 1);
- }
-#endif
- result = __Pyx_PyObject_GetAttrStr(obj, attr_name);
- if (unlikely(!result)) {
- __Pyx_PyObject_GetAttrStr_ClearAttributeError();
- }
- return result;
-}
-
-/* SetupReduce */
-static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) {
- int ret;
- PyObject *name_attr;
- name_attr = __Pyx_PyObject_GetAttrStr(meth, __pyx_n_s_name_2);
- if (likely(name_attr)) {
- ret = PyObject_RichCompareBool(name_attr, name, Py_EQ);
- } else {
- ret = -1;
- }
- if (unlikely(ret < 0)) {
- PyErr_Clear();
- ret = 0;
- }
- Py_XDECREF(name_attr);
- return ret;
-}
-static int __Pyx_setup_reduce(PyObject* type_obj) {
- int ret = 0;
- PyObject *object_reduce = NULL;
- PyObject *object_reduce_ex = NULL;
- PyObject *reduce = NULL;
- PyObject *reduce_ex = NULL;
- PyObject *reduce_cython = NULL;
- PyObject *setstate = NULL;
- PyObject *setstate_cython = NULL;
-#if CYTHON_USE_PYTYPE_LOOKUP
- if (_PyType_Lookup((PyTypeObject*)type_obj, __pyx_n_s_getstate)) goto __PYX_GOOD;
-#else
- if (PyObject_HasAttr(type_obj, __pyx_n_s_getstate)) goto __PYX_GOOD;
-#endif
-#if CYTHON_USE_PYTYPE_LOOKUP
- object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD;
-#else
- object_reduce_ex = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD;
-#endif
- reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_ex); if (unlikely(!reduce_ex)) goto __PYX_BAD;
- if (reduce_ex == object_reduce_ex) {
-#if CYTHON_USE_PYTYPE_LOOKUP
- object_reduce = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD;
-#else
- object_reduce = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD;
-#endif
- reduce = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce); if (unlikely(!reduce)) goto __PYX_BAD;
- if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, __pyx_n_s_reduce_cython)) {
- reduce_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_reduce_cython);
- if (likely(reduce_cython)) {
- ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce, reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD;
- ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD;
- } else if (reduce == object_reduce || PyErr_Occurred()) {
- goto __PYX_BAD;
- }
- setstate = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate);
- if (!setstate) PyErr_Clear();
- if (!setstate || __Pyx_setup_reduce_is_named(setstate, __pyx_n_s_setstate_cython)) {
- setstate_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_setstate_cython);
- if (likely(setstate_cython)) {
- ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate, setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD;
- ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD;
- } else if (!setstate || PyErr_Occurred()) {
- goto __PYX_BAD;
- }
- }
- PyType_Modified((PyTypeObject*)type_obj);
- }
- }
- goto __PYX_GOOD;
-__PYX_BAD:
- if (!PyErr_Occurred())
- PyErr_Format(PyExc_RuntimeError, "Unable to initialize pickling for %s", ((PyTypeObject*)type_obj)->tp_name);
- ret = -1;
-__PYX_GOOD:
-#if !CYTHON_USE_PYTYPE_LOOKUP
- Py_XDECREF(object_reduce);
- Py_XDECREF(object_reduce_ex);
-#endif
- Py_XDECREF(reduce);
- Py_XDECREF(reduce_ex);
- Py_XDECREF(reduce_cython);
- Py_XDECREF(setstate);
- Py_XDECREF(setstate_cython);
- return ret;
-}
-
-/* CLineInTraceback */
-#ifndef CYTHON_CLINE_IN_TRACEBACK
-static int __Pyx_CLineForTraceback(CYTHON_NCP_UNUSED PyThreadState *tstate, int c_line) {
- PyObject *use_cline;
- PyObject *ptype, *pvalue, *ptraceback;
-#if CYTHON_COMPILING_IN_CPYTHON
- PyObject **cython_runtime_dict;
-#endif
- if (unlikely(!__pyx_cython_runtime)) {
- return c_line;
- }
- __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback);
-#if CYTHON_COMPILING_IN_CPYTHON
- cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime);
- if (likely(cython_runtime_dict)) {
- __PYX_PY_DICT_LOOKUP_IF_MODIFIED(
- use_cline, *cython_runtime_dict,
- __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback))
- } else
-#endif
- {
- PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback);
- if (use_cline_obj) {
- use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True;
- Py_DECREF(use_cline_obj);
- } else {
- PyErr_Clear();
- use_cline = NULL;
- }
- }
- if (!use_cline) {
- c_line = 0;
- PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False);
- }
- else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) {
- c_line = 0;
- }
- __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback);
- return c_line;
-}
-#endif
-
-/* CodeObjectCache */
-static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) {
- int start = 0, mid = 0, end = count - 1;
- if (end >= 0 && code_line > entries[end].code_line) {
- return count;
- }
- while (start < end) {
- mid = start + (end - start) / 2;
- if (code_line < entries[mid].code_line) {
- end = mid;
- } else if (code_line > entries[mid].code_line) {
- start = mid + 1;
- } else {
- return mid;
- }
- }
- if (code_line <= entries[mid].code_line) {
- return mid;
- } else {
- return mid + 1;
- }
-}
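-/* Plain binary search over the sorted code-object cache: an exact
- code_line hit returns its index, otherwise the insertion position that
- keeps the array sorted is returned (count when code_line exceeds every
- cached entry). */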
-static PyCodeObject *__pyx_find_code_object(int code_line) {
- PyCodeObject* code_object;
- int pos;
- if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) {
- return NULL;
- }
- pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);
- if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) {
- return NULL;
- }
- code_object = __pyx_code_cache.entries[pos].code_object;
- Py_INCREF(code_object);
- return code_object;
-}
-static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) {
- int pos, i;
- __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries;
- if (unlikely(!code_line)) {
- return;
- }
- if (unlikely(!entries)) {
- entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry));
- if (likely(entries)) {
- __pyx_code_cache.entries = entries;
- __pyx_code_cache.max_count = 64;
- __pyx_code_cache.count = 1;
- entries[0].code_line = code_line;
- entries[0].code_object = code_object;
- Py_INCREF(code_object);
- }
- return;
- }
- pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);
- if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) {
- PyCodeObject* tmp = entries[pos].code_object;
- entries[pos].code_object = code_object;
- Py_DECREF(tmp);
- return;
- }
- if (__pyx_code_cache.count == __pyx_code_cache.max_count) {
- int new_max = __pyx_code_cache.max_count + 64;
- entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc(
- __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry));
- if (unlikely(!entries)) {
- return;
- }
- __pyx_code_cache.entries = entries;
- __pyx_code_cache.max_count = new_max;
- }
- for (i=__pyx_code_cache.count; i>pos; i--) {
- entries[i] = entries[i-1];
- }
- entries[pos].code_line = code_line;
- entries[pos].code_object = code_object;
- __pyx_code_cache.count++;
- Py_INCREF(code_object);
-}
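-/* The insert above keeps the cache sorted: the first insertion
- allocates 64 slots, a duplicate code_line swaps in the new code object,
- a full cache is grown by another 64 entries via PyMem_Realloc, and
- otherwise the tail is shifted up one position to open the bisected
- index. */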
-
-/* AddTraceback */
-#include "compile.h"
-#include "frameobject.h"
-#include "traceback.h"
-static PyCodeObject* __Pyx_CreateCodeObjectForTraceback(
- const char *funcname, int c_line,
- int py_line, const char *filename) {
- PyCodeObject *py_code = 0;
- PyObject *py_srcfile = 0;
- PyObject *py_funcname = 0;
- #if PY_MAJOR_VERSION < 3
- py_srcfile = PyString_FromString(filename);
- #else
- py_srcfile = PyUnicode_FromString(filename);
- #endif
- if (!py_srcfile) goto bad;
- if (c_line) {
- #if PY_MAJOR_VERSION < 3
- py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line);
- #else
- py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line);
- #endif
- }
- else {
- #if PY_MAJOR_VERSION < 3
- py_funcname = PyString_FromString(funcname);
- #else
- py_funcname = PyUnicode_FromString(funcname);
- #endif
- }
- if (!py_funcname) goto bad;
- py_code = __Pyx_PyCode_New(
- 0,
- 0,
- 0,
- 0,
- 0,
- __pyx_empty_bytes, /*PyObject *code,*/
- __pyx_empty_tuple, /*PyObject *consts,*/
- __pyx_empty_tuple, /*PyObject *names,*/
- __pyx_empty_tuple, /*PyObject *varnames,*/
- __pyx_empty_tuple, /*PyObject *freevars,*/
- __pyx_empty_tuple, /*PyObject *cellvars,*/
- py_srcfile, /*PyObject *filename,*/
- py_funcname, /*PyObject *name,*/
- py_line,
- __pyx_empty_bytes /*PyObject *lnotab*/
- );
- Py_DECREF(py_srcfile);
- Py_DECREF(py_funcname);
- return py_code;
-bad:
- Py_XDECREF(py_srcfile);
- Py_XDECREF(py_funcname);
- return NULL;
-}
-static void __Pyx_AddTraceback(const char *funcname, int c_line,
- int py_line, const char *filename) {
- PyCodeObject *py_code = 0;
- PyFrameObject *py_frame = 0;
- PyThreadState *tstate = __Pyx_PyThreadState_Current;
- if (c_line) {
- c_line = __Pyx_CLineForTraceback(tstate, c_line);
- }
- py_code = __pyx_find_code_object(c_line ? -c_line : py_line);
- if (!py_code) {
- py_code = __Pyx_CreateCodeObjectForTraceback(
- funcname, c_line, py_line, filename);
- if (!py_code) goto bad;
- __pyx_insert_code_object(c_line ? -c_line : py_line, py_code);
- }
- py_frame = PyFrame_New(
- tstate, /*PyThreadState *tstate,*/
- py_code, /*PyCodeObject *code,*/
- __pyx_d, /*PyObject *globals,*/
- 0 /*PyObject *locals*/
- );
- if (!py_frame) goto bad;
- __Pyx_PyFrame_SetLineNumber(py_frame, py_line);
- PyTraceBack_Here(py_frame);
-bad:
- Py_XDECREF(py_code);
- Py_XDECREF(py_frame);
-}
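-/* To surface an error raised in C code, a dummy PyCodeObject (cached
- under the line number, with C lines stored negated so they cannot
- collide with Python line numbers) and a short-lived PyFrameObject are
- synthesised, letting PyTraceBack_Here append a normal-looking entry to
- the traceback. */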
-
-#if PY_MAJOR_VERSION < 3
-static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) {
- if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags);
- if (__Pyx_TypeCheck(obj, __pyx_array_type)) return __pyx_array_getbuffer(obj, view, flags);
- if (__Pyx_TypeCheck(obj, __pyx_memoryview_type)) return __pyx_memoryview_getbuffer(obj, view, flags);
- PyErr_Format(PyExc_TypeError, "'%.200s' does not have the buffer interface", Py_TYPE(obj)->tp_name);
- return -1;
-}
-static void __Pyx_ReleaseBuffer(Py_buffer *view) {
- PyObject *obj = view->obj;
- if (!obj) return;
- if (PyObject_CheckBuffer(obj)) {
- PyBuffer_Release(view);
- return;
- }
- if ((0)) {}
- view->obj = NULL;
- Py_DECREF(obj);
-}
-#endif
-
-
-/* MemviewSliceIsContig */
-static int
-__pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim)
-{
- int i, index, step, start;
- Py_ssize_t itemsize = mvs.memview->view.itemsize;
- if (order == 'F') {
- step = 1;
- start = 0;
- } else {
- step = -1;
- start = ndim - 1;
- }
- for (i = 0; i < ndim; i++) {
- index = start + step * i;
- if (mvs.suboffsets[index] >= 0 || mvs.strides[index] != itemsize)
- return 0;
- itemsize *= mvs.shape[index];
- }
- return 1;
-}
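-/* Contiguity check: dimensions are walked innermost-first ('F' order
- scans forward, otherwise 'C' order scans backward) while verifying that
- each stride equals the running product of itemsize and the extents seen
- so far, and that no dimension is indirect (suboffset >= 0). */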
-
-/* OverlappingSlices */
-static void
-__pyx_get_array_memory_extents(__Pyx_memviewslice *slice,
- void **out_start, void **out_end,
- int ndim, size_t itemsize)
-{
- char *start, *end;
- int i;
- start = end = slice->data;
- for (i = 0; i < ndim; i++) {
- Py_ssize_t stride = slice->strides[i];
- Py_ssize_t extent = slice->shape[i];
- if (extent == 0) {
- *out_start = *out_end = start;
- return;
- } else {
- if (stride > 0)
- end += stride * (extent - 1);
- else
- start += stride * (extent - 1);
- }
- }
- *out_start = start;
- *out_end = end + itemsize;
-}
-static int
-__pyx_slices_overlap(__Pyx_memviewslice *slice1,
- __Pyx_memviewslice *slice2,
- int ndim, size_t itemsize)
-{
- void *start1, *end1, *start2, *end2;
- __pyx_get_array_memory_extents(slice1, &start1, &end1, ndim, itemsize);
- __pyx_get_array_memory_extents(slice2, &start2, &end2, ndim, itemsize);
- return (start1 < end2) && (start2 < end1);
-}
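-/* Two slices alias iff their [start, end) memory extents intersect:
- start1 < end2 && start2 < end1, the standard half-open interval test.
- Negative strides are handled in the extent computation by moving start
- down instead of end up. */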
-
-/* Capsule */
-static CYTHON_INLINE PyObject *
-__pyx_capsule_create(void *p, CYTHON_UNUSED const char *sig)
-{
- PyObject *cobj;
-#if PY_VERSION_HEX >= 0x02070000
- cobj = PyCapsule_New(p, sig, NULL);
-#else
- cobj = PyCObject_FromVoidPtr(p, NULL);
-#endif
- return cobj;
-}
-
-/* IsLittleEndian */
-static CYTHON_INLINE int __Pyx_Is_Little_Endian(void)
-{
- union {
- uint32_t u32;
- uint8_t u8[4];
- } S;
- S.u32 = 0x01020304;
- return S.u8[0] == 4;
-}
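-/* Endianness probe: the union stores the 32-bit pattern 0x01020304 and
- reads back the byte at the lowest address; a little-endian machine puts
- the least significant byte (0x04) first, hence the u8[0] == 4 test. */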
-
-/* BufferFormatCheck */
-static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,
- __Pyx_BufFmt_StackElem* stack,
- __Pyx_TypeInfo* type) {
- stack[0].field = &ctx->root;
- stack[0].parent_offset = 0;
- ctx->root.type = type;
- ctx->root.name = "buffer dtype";
- ctx->root.offset = 0;
- ctx->head = stack;
- ctx->head->field = &ctx->root;
- ctx->fmt_offset = 0;
- ctx->head->parent_offset = 0;
- ctx->new_packmode = '@';
- ctx->enc_packmode = '@';
- ctx->new_count = 1;
- ctx->enc_count = 0;
- ctx->enc_type = 0;
- ctx->is_complex = 0;
- ctx->is_valid_array = 0;
- ctx->struct_alignment = 0;
- while (type->typegroup == 'S') {
- ++ctx->head;
- ctx->head->field = type->fields;
- ctx->head->parent_offset = 0;
- type = type->fields->type;
- }
-}
-static int __Pyx_BufFmt_ParseNumber(const char** ts) {
- int count;
- const char* t = *ts;
- if (*t < '0' || *t > '9') {
- return -1;
- } else {
- count = *t++ - '0';
- while (*t >= '0' && *t <= '9') {
- count *= 10;
- count += *t++ - '0';
- }
- }
- *ts = t;
- return count;
-}
-static int __Pyx_BufFmt_ExpectNumber(const char **ts) {
- int number = __Pyx_BufFmt_ParseNumber(ts);
- if (number == -1)
- PyErr_Format(PyExc_ValueError,\
- "Does not understand character buffer dtype format string ('%c')", **ts);
- return number;
-}
-static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) {
- PyErr_Format(PyExc_ValueError,
- "Unexpected format string character: '%c'", ch);
-}
-static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) {
- switch (ch) {
- case '?': return "'bool'";
- case 'c': return "'char'";
- case 'b': return "'signed char'";
- case 'B': return "'unsigned char'";
- case 'h': return "'short'";
- case 'H': return "'unsigned short'";
- case 'i': return "'int'";
- case 'I': return "'unsigned int'";
- case 'l': return "'long'";
- case 'L': return "'unsigned long'";
- case 'q': return "'long long'";
- case 'Q': return "'unsigned long long'";
- case 'f': return (is_complex ? "'complex float'" : "'float'");
- case 'd': return (is_complex ? "'complex double'" : "'double'");
- case 'g': return (is_complex ? "'complex long double'" : "'long double'");
- case 'T': return "a struct";
- case 'O': return "Python object";
- case 'P': return "a pointer";
- case 's': case 'p': return "a string";
- case 0: return "end";
- default: return "unparseable format string";
- }
-}
-static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) {
- switch (ch) {
- case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;
- case 'h': case 'H': return 2;
- case 'i': case 'I': case 'l': case 'L': return 4;
- case 'q': case 'Q': return 8;
- case 'f': return (is_complex ? 8 : 4);
- case 'd': return (is_complex ? 16 : 8);
- case 'g': {
- PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g')..");
- return 0;
- }
- case 'O': case 'P': return sizeof(void*);
- default:
- __Pyx_BufFmt_RaiseUnexpectedChar(ch);
- return 0;
- }
-}
-static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) {
- switch (ch) {
- case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;
- case 'h': case 'H': return sizeof(short);
- case 'i': case 'I': return sizeof(int);
- case 'l': case 'L': return sizeof(long);
- #ifdef HAVE_LONG_LONG
- case 'q': case 'Q': return sizeof(PY_LONG_LONG);
- #endif
- case 'f': return sizeof(float) * (is_complex ? 2 : 1);
- case 'd': return sizeof(double) * (is_complex ? 2 : 1);
- case 'g': return sizeof(long double) * (is_complex ? 2 : 1);
- case 'O': case 'P': return sizeof(void*);
- default: {
- __Pyx_BufFmt_RaiseUnexpectedChar(ch);
- return 0;
- }
- }
-}
-typedef struct { char c; short x; } __Pyx_st_short;
-typedef struct { char c; int x; } __Pyx_st_int;
-typedef struct { char c; long x; } __Pyx_st_long;
-typedef struct { char c; float x; } __Pyx_st_float;
-typedef struct { char c; double x; } __Pyx_st_double;
-typedef struct { char c; long double x; } __Pyx_st_longdouble;
-typedef struct { char c; void *x; } __Pyx_st_void_p;
-#ifdef HAVE_LONG_LONG
-typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong;
-#endif
-static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) {
- switch (ch) {
- case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;
- case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short);
- case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int);
- case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long);
-#ifdef HAVE_LONG_LONG
- case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG);
-#endif
- case 'f': return sizeof(__Pyx_st_float) - sizeof(float);
- case 'd': return sizeof(__Pyx_st_double) - sizeof(double);
- case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double);
- case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*);
- default:
- __Pyx_BufFmt_RaiseUnexpectedChar(ch);
- return 0;
- }
-}
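-/* The __Pyx_st_* structs rely on a classic C idiom: in
- struct { char c; T x; } the compiler inserts padding after c so that x
- is suitably aligned, which makes sizeof(struct) - sizeof(T) the
- alignment requirement of T on the usual ABIs. */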
-/* These are for computing the padding at the end of the struct to align
- on the first member of the struct. This will probably be the same as above,
- but we don't have any guarantees.
- */
-typedef struct { short x; char c; } __Pyx_pad_short;
-typedef struct { int x; char c; } __Pyx_pad_int;
-typedef struct { long x; char c; } __Pyx_pad_long;
-typedef struct { float x; char c; } __Pyx_pad_float;
-typedef struct { double x; char c; } __Pyx_pad_double;
-typedef struct { long double x; char c; } __Pyx_pad_longdouble;
-typedef struct { void *x; char c; } __Pyx_pad_void_p;
-#ifdef HAVE_LONG_LONG
-typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong;
-#endif
-static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) {
- switch (ch) {
- case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;
- case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short);
- case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int);
- case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long);
-#ifdef HAVE_LONG_LONG
- case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG);
-#endif
- case 'f': return sizeof(__Pyx_pad_float) - sizeof(float);
- case 'd': return sizeof(__Pyx_pad_double) - sizeof(double);
- case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double);
- case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*);
- default:
- __Pyx_BufFmt_RaiseUnexpectedChar(ch);
- return 0;
- }
-}
-static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) {
- switch (ch) {
- case 'c':
- return 'H';
- case 'b': case 'h': case 'i':
- case 'l': case 'q': case 's': case 'p':
- return 'I';
- case '?': case 'B': case 'H': case 'I': case 'L': case 'Q':
- return 'U';
- case 'f': case 'd': case 'g':
- return (is_complex ? 'C' : 'R');
- case 'O':
- return 'O';
- case 'P':
- return 'P';
- default: {
- __Pyx_BufFmt_RaiseUnexpectedChar(ch);
- return 0;
- }
- }
-}
-static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) {
- if (ctx->head == NULL || ctx->head->field == &ctx->root) {
- const char* expected;
- const char* quote;
- if (ctx->head == NULL) {
- expected = "end";
- quote = "";
- } else {
- expected = ctx->head->field->type->name;
- quote = "'";
- }
- PyErr_Format(PyExc_ValueError,
- "Buffer dtype mismatch, expected %s%s%s but got %s",
- quote, expected, quote,
- __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex));
- } else {
- __Pyx_StructField* field = ctx->head->field;
- __Pyx_StructField* parent = (ctx->head - 1)->field;
- PyErr_Format(PyExc_ValueError,
- "Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'",
- field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex),
- parent->type->name, field->name);
- }
-}
-static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) {
- char group;
- size_t size, offset, arraysize = 1;
- if (ctx->enc_type == 0) return 0;
- if (ctx->head->field->type->arraysize[0]) {
- int i, ndim = 0;
- if (ctx->enc_type == 's' || ctx->enc_type == 'p') {
- ctx->is_valid_array = ctx->head->field->type->ndim == 1;
- ndim = 1;
- if (ctx->enc_count != ctx->head->field->type->arraysize[0]) {
- PyErr_Format(PyExc_ValueError,
- "Expected a dimension of size %zu, got %zu",
- ctx->head->field->type->arraysize[0], ctx->enc_count);
- return -1;
- }
- }
- if (!ctx->is_valid_array) {
- PyErr_Format(PyExc_ValueError, "Expected %d dimensions, got %d",
- ctx->head->field->type->ndim, ndim);
- return -1;
- }
- for (i = 0; i < ctx->head->field->type->ndim; i++) {
- arraysize *= ctx->head->field->type->arraysize[i];
- }
- ctx->is_valid_array = 0;
- ctx->enc_count = 1;
- }
- group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, ctx->is_complex);
- do {
- __Pyx_StructField* field = ctx->head->field;
- __Pyx_TypeInfo* type = field->type;
- if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') {
- size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex);
- } else {
- size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex);
- }
- if (ctx->enc_packmode == '@') {
- size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex);
- size_t align_mod_offset;
- if (align_at == 0) return -1;
- align_mod_offset = ctx->fmt_offset % align_at;
- if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset;
- if (ctx->struct_alignment == 0)
- ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type,
- ctx->is_complex);
- }
- if (type->size != size || type->typegroup != group) {
- if (type->typegroup == 'C' && type->fields != NULL) {
- size_t parent_offset = ctx->head->parent_offset + field->offset;
- ++ctx->head;
- ctx->head->field = type->fields;
- ctx->head->parent_offset = parent_offset;
- continue;
- }
- if ((type->typegroup == 'H' || group == 'H') && type->size == size) {
- } else {
- __Pyx_BufFmt_RaiseExpected(ctx);
- return -1;
- }
- }
- offset = ctx->head->parent_offset + field->offset;
- if (ctx->fmt_offset != offset) {
- PyErr_Format(PyExc_ValueError,
- "Buffer dtype mismatch; next field is at offset %" CYTHON_FORMAT_SSIZE_T "d but %" CYTHON_FORMAT_SSIZE_T "d expected",
- (Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset);
- return -1;
- }
- ctx->fmt_offset += size;
- if (arraysize)
- ctx->fmt_offset += (arraysize - 1) * size;
- --ctx->enc_count;
- while (1) {
- if (field == &ctx->root) {
- ctx->head = NULL;
- if (ctx->enc_count != 0) {
- __Pyx_BufFmt_RaiseExpected(ctx);
- return -1;
- }
- break;
- }
- ctx->head->field = ++field;
- if (field->type == NULL) {
- --ctx->head;
- field = ctx->head->field;
- continue;
- } else if (field->type->typegroup == 'S') {
- size_t parent_offset = ctx->head->parent_offset + field->offset;
- if (field->type->fields->type == NULL) continue;
- field = field->type->fields;
- ++ctx->head;
- ctx->head->field = field;
- ctx->head->parent_offset = parent_offset;
- break;
- } else {
- break;
- }
- }
- } while (ctx->enc_count);
- ctx->enc_type = 0;
- ctx->is_complex = 0;
- return 0;
-}
-static PyObject *
-__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp)
-{
- const char *ts = *tsp;
- int i = 0, number, ndim;
- ++ts;
- if (ctx->new_count != 1) {
- PyErr_SetString(PyExc_ValueError,
- "Cannot handle repeated arrays in format string");
- return NULL;
- }
- if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;
- ndim = ctx->head->field->type->ndim;
- while (*ts && *ts != ')') {
- switch (*ts) {
- case ' ': case '\f': case '\r': case '\n': case '\t': case '\v': continue;
- default: break;
- }
- number = __Pyx_BufFmt_ExpectNumber(&ts);
- if (number == -1) return NULL;
- if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i])
- return PyErr_Format(PyExc_ValueError,
- "Expected a dimension of size %zu, got %d",
- ctx->head->field->type->arraysize[i], number);
- if (*ts != ',' && *ts != ')')
- return PyErr_Format(PyExc_ValueError,
- "Expected a comma in format string, got '%c'", *ts);
- if (*ts == ',') ts++;
- i++;
- }
- if (i != ndim)
- return PyErr_Format(PyExc_ValueError, "Expected %d dimension(s), got %d",
- ctx->head->field->type->ndim, i);
- if (!*ts) {
- PyErr_SetString(PyExc_ValueError,
- "Unexpected end of format string, expected ')'");
- return NULL;
- }
- ctx->is_valid_array = 1;
- ctx->new_count = 1;
- *tsp = ++ts;
- return Py_None;
-}
-static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) {
- int got_Z = 0;
- while (1) {
- switch(*ts) {
- case 0:
- if (ctx->enc_type != 0 && ctx->head == NULL) {
- __Pyx_BufFmt_RaiseExpected(ctx);
- return NULL;
- }
- if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;
- if (ctx->head != NULL) {
- __Pyx_BufFmt_RaiseExpected(ctx);
- return NULL;
- }
- return ts;
- case ' ':
- case '\r':
- case '\n':
- ++ts;
- break;
- case '<':
- if (!__Pyx_Is_Little_Endian()) {
- PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler");
- return NULL;
- }
- ctx->new_packmode = '=';
- ++ts;
- break;
- case '>':
- case '!':
- if (__Pyx_Is_Little_Endian()) {
- PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler");
- return NULL;
- }
- ctx->new_packmode = '=';
- ++ts;
- break;
- case '=':
- case '@':
- case '^':
- ctx->new_packmode = *ts++;
- break;
- case 'T':
- {
- const char* ts_after_sub;
- size_t i, struct_count = ctx->new_count;
- size_t struct_alignment = ctx->struct_alignment;
- ctx->new_count = 1;
- ++ts;
- if (*ts != '{') {
- PyErr_SetString(PyExc_ValueError, "Buffer acquisition: Expected '{' after 'T'");
- return NULL;
- }
- if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;
- ctx->enc_type = 0;
- ctx->enc_count = 0;
- ctx->struct_alignment = 0;
- ++ts;
- ts_after_sub = ts;
- for (i = 0; i != struct_count; ++i) {
- ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts);
- if (!ts_after_sub) return NULL;
- }
- ts = ts_after_sub;
- if (struct_alignment) ctx->struct_alignment = struct_alignment;
- }
- break;
- case '}':
- {
- size_t alignment = ctx->struct_alignment;
- ++ts;
- if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;
- ctx->enc_type = 0;
- if (alignment && ctx->fmt_offset % alignment) {
- ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment);
- }
- }
- return ts;
- case 'x':
- if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;
- ctx->fmt_offset += ctx->new_count;
- ctx->new_count = 1;
- ctx->enc_count = 0;
- ctx->enc_type = 0;
- ctx->enc_packmode = ctx->new_packmode;
- ++ts;
- break;
- case 'Z':
- got_Z = 1;
- ++ts;
- if (*ts != 'f' && *ts != 'd' && *ts != 'g') {
- __Pyx_BufFmt_RaiseUnexpectedChar('Z');
- return NULL;
- }
- CYTHON_FALLTHROUGH;
- case '?': case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I':
- case 'l': case 'L': case 'q': case 'Q':
- case 'f': case 'd': case 'g':
- case 'O': case 'p':
- if ((ctx->enc_type == *ts) && (got_Z == ctx->is_complex) &&
- (ctx->enc_packmode == ctx->new_packmode) && (!ctx->is_valid_array)) {
- ctx->enc_count += ctx->new_count;
- ctx->new_count = 1;
- got_Z = 0;
- ++ts;
- break;
- }
- CYTHON_FALLTHROUGH;
- case 's':
- if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;
- ctx->enc_count = ctx->new_count;
- ctx->enc_packmode = ctx->new_packmode;
- ctx->enc_type = *ts;
- ctx->is_complex = got_Z;
- ++ts;
- ctx->new_count = 1;
- got_Z = 0;
- break;
- case ':':
- ++ts;
- while(*ts != ':') ++ts;
- ++ts;
- break;
- case '(':
- if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL;
- break;
- default:
- {
- int number = __Pyx_BufFmt_ExpectNumber(&ts);
- if (number == -1) return NULL;
- ctx->new_count = (size_t)number;
- }
- }
- }
-}
-
-/* TypeInfoCompare */
- static int
-__pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b)
-{
- int i;
- if (!a || !b)
- return 0;
- if (a == b)
- return 1;
- if (a->size != b->size || a->typegroup != b->typegroup ||
- a->is_unsigned != b->is_unsigned || a->ndim != b->ndim) {
- if (a->typegroup == 'H' || b->typegroup == 'H') {
- return a->size == b->size;
- } else {
- return 0;
- }
- }
- if (a->ndim) {
- for (i = 0; i < a->ndim; i++)
- if (a->arraysize[i] != b->arraysize[i])
- return 0;
- }
- if (a->typegroup == 'S') {
- if (a->flags != b->flags)
- return 0;
- if (a->fields || b->fields) {
- if (!(a->fields && b->fields))
- return 0;
- for (i = 0; a->fields[i].type && b->fields[i].type; i++) {
- __Pyx_StructField *field_a = a->fields + i;
- __Pyx_StructField *field_b = b->fields + i;
- if (field_a->offset != field_b->offset ||
- !__pyx_typeinfo_cmp(field_a->type, field_b->type))
- return 0;
- }
- return !a->fields[i].type && !b->fields[i].type;
- }
- }
- return 1;
-}
-
-/* MemviewSliceValidateAndInit */
- static int
-__pyx_check_strides(Py_buffer *buf, int dim, int ndim, int spec)
-{
- if (buf->shape[dim] <= 1)
- return 1;
- if (buf->strides) {
- if (spec & __Pyx_MEMVIEW_CONTIG) {
- if (spec & (__Pyx_MEMVIEW_PTR|__Pyx_MEMVIEW_FULL)) {
- if (unlikely(buf->strides[dim] != sizeof(void *))) {
- PyErr_Format(PyExc_ValueError,
- "Buffer is not indirectly contiguous "
- "in dimension %d.", dim);
- goto fail;
- }
- } else if (unlikely(buf->strides[dim] != buf->itemsize)) {
- PyErr_SetString(PyExc_ValueError,
- "Buffer and memoryview are not contiguous "
- "in the same dimension.");
- goto fail;
- }
- }
- if (spec & __Pyx_MEMVIEW_FOLLOW) {
- Py_ssize_t stride = buf->strides[dim];
- if (stride < 0)
- stride = -stride;
- if (unlikely(stride < buf->itemsize)) {
- PyErr_SetString(PyExc_ValueError,
- "Buffer and memoryview are not contiguous "
- "in the same dimension.");
- goto fail;
- }
- }
- } else {
- if (unlikely(spec & __Pyx_MEMVIEW_CONTIG && dim != ndim - 1)) {
- PyErr_Format(PyExc_ValueError,
- "C-contiguous buffer is not contiguous in "
- "dimension %d", dim);
- goto fail;
- } else if (unlikely(spec & (__Pyx_MEMVIEW_PTR))) {
- PyErr_Format(PyExc_ValueError,
- "C-contiguous buffer is not indirect in "
- "dimension %d", dim);
- goto fail;
- } else if (unlikely(buf->suboffsets)) {
- PyErr_SetString(PyExc_ValueError,
- "Buffer exposes suboffsets but no strides");
- goto fail;
- }
- }
- return 1;
-fail:
- return 0;
-}
-static int
-__pyx_check_suboffsets(Py_buffer *buf, int dim, CYTHON_UNUSED int ndim, int spec)
-{
- if (spec & __Pyx_MEMVIEW_DIRECT) {
- if (unlikely(buf->suboffsets && buf->suboffsets[dim] >= 0)) {
- PyErr_Format(PyExc_ValueError,
- "Buffer not compatible with direct access "
- "in dimension %d.", dim);
- goto fail;
- }
- }
- if (spec & __Pyx_MEMVIEW_PTR) {
- if (unlikely(!buf->suboffsets || (buf->suboffsets[dim] < 0))) {
- PyErr_Format(PyExc_ValueError,
- "Buffer is not indirectly accessible "
- "in dimension %d.", dim);
- goto fail;
- }
- }
- return 1;
-fail:
- return 0;
-}
-static int
-__pyx_verify_contig(Py_buffer *buf, int ndim, int c_or_f_flag)
-{
- int i;
- if (c_or_f_flag & __Pyx_IS_F_CONTIG) {
- Py_ssize_t stride = 1;
- for (i = 0; i < ndim; i++) {
- if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) {
- PyErr_SetString(PyExc_ValueError,
- "Buffer not fortran contiguous.");
- goto fail;
- }
- stride = stride * buf->shape[i];
- }
- } else if (c_or_f_flag & __Pyx_IS_C_CONTIG) {
- Py_ssize_t stride = 1;
- for (i = ndim - 1; i >- 1; i--) {
- if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) {
- PyErr_SetString(PyExc_ValueError,
- "Buffer not C contiguous.");
- goto fail;
- }
- stride = stride * buf->shape[i];
- }
- }
- return 1;
-fail:
- return 0;
-}
-static int __Pyx_ValidateAndInit_memviewslice(
- int *axes_specs,
- int c_or_f_flag,
- int buf_flags,
- int ndim,
- __Pyx_TypeInfo *dtype,
- __Pyx_BufFmt_StackElem stack[],
- __Pyx_memviewslice *memviewslice,
- PyObject *original_obj)
-{
- struct __pyx_memoryview_obj *memview, *new_memview;
- __Pyx_RefNannyDeclarations
- Py_buffer *buf;
- int i, spec = 0, retval = -1;
- __Pyx_BufFmt_Context ctx;
- int from_memoryview = __pyx_memoryview_check(original_obj);
- __Pyx_RefNannySetupContext("ValidateAndInit_memviewslice", 0);
- if (from_memoryview && __pyx_typeinfo_cmp(dtype, ((struct __pyx_memoryview_obj *)
- original_obj)->typeinfo)) {
- memview = (struct __pyx_memoryview_obj *) original_obj;
- new_memview = NULL;
- } else {
- memview = (struct __pyx_memoryview_obj *) __pyx_memoryview_new(
- original_obj, buf_flags, 0, dtype);
- new_memview = memview;
- if (unlikely(!memview))
- goto fail;
- }
- buf = &memview->view;
- if (unlikely(buf->ndim != ndim)) {
- PyErr_Format(PyExc_ValueError,
- "Buffer has wrong number of dimensions (expected %d, got %d)",
- ndim, buf->ndim);
- goto fail;
- }
- if (new_memview) {
- __Pyx_BufFmt_Init(&ctx, stack, dtype);
- if (unlikely(!__Pyx_BufFmt_CheckString(&ctx, buf->format))) goto fail;
- }
- if (unlikely((unsigned) buf->itemsize != dtype->size)) {
- PyErr_Format(PyExc_ValueError,
- "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "u byte%s) "
- "does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "u byte%s)",
- buf->itemsize,
- (buf->itemsize > 1) ? "s" : "",
- dtype->name,
- dtype->size,
- (dtype->size > 1) ? "s" : "");
- goto fail;
- }
- if (buf->len > 0) {
- for (i = 0; i < ndim; i++) {
- spec = axes_specs[i];
- if (unlikely(!__pyx_check_strides(buf, i, ndim, spec)))
- goto fail;
- if (unlikely(!__pyx_check_suboffsets(buf, i, ndim, spec)))
- goto fail;
- }
- if (unlikely(buf->strides && !__pyx_verify_contig(buf, ndim, c_or_f_flag)))
- goto fail;
- }
- if (unlikely(__Pyx_init_memviewslice(memview, ndim, memviewslice,
- new_memview != NULL) == -1)) {
- goto fail;
- }
- retval = 0;
- goto no_fail;
-fail:
- Py_XDECREF(new_memview);
- retval = -1;
-no_fail:
- __Pyx_RefNannyFinishContext();
- return retval;
-}
-
-/* ObjectToMemviewSlice */
- static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *obj, int writable_flag) {
- __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_BufFmt_StackElem stack[1];
- int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) };
- int retcode;
- if (obj == Py_None) {
- result.memview = (struct __pyx_memoryview_obj *) Py_None;
- return result;
- }
- retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG,
- (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3,
- &__Pyx_TypeInfo_int, stack,
- &result, obj);
- if (unlikely(retcode == -1))
- goto __pyx_fail;
- return result;
-__pyx_fail:
- result.memview = NULL;
- result.data = NULL;
- return result;
-}
-
-/* ObjectToMemviewSlice */
- static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *obj, int writable_flag) {
- __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_BufFmt_StackElem stack[1];
- int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) };
- int retcode;
- if (obj == Py_None) {
- result.memview = (struct __pyx_memoryview_obj *) Py_None;
- return result;
- }
- retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG,
- (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3,
- &__Pyx_TypeInfo_float, stack,
- &result, obj);
- if (unlikely(retcode == -1))
- goto __pyx_fail;
- return result;
-__pyx_fail:
- result.memview = NULL;
- result.data = NULL;
- return result;
-}
-
-/* ObjectToMemviewSlice */
- static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *obj, int writable_flag) {
- __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_BufFmt_StackElem stack[1];
- int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) };
- int retcode;
- if (obj == Py_None) {
- result.memview = (struct __pyx_memoryview_obj *) Py_None;
- return result;
- }
- retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG,
- (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 1,
- &__Pyx_TypeInfo_int, stack,
- &result, obj);
- if (unlikely(retcode == -1))
- goto __pyx_fail;
- return result;
-__pyx_fail:
- result.memview = NULL;
- result.data = NULL;
- return result;
-}
-
-/* CIntToPy */
- static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) {
- const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0;
- const int is_unsigned = neg_one > const_zero;
- if (is_unsigned) {
- if (sizeof(int) < sizeof(long)) {
- return PyInt_FromLong((long) value);
- } else if (sizeof(int) <= sizeof(unsigned long)) {
- return PyLong_FromUnsignedLong((unsigned long) value);
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) {
- return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);
-#endif
- }
- } else {
- if (sizeof(int) <= sizeof(long)) {
- return PyInt_FromLong((long) value);
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) {
- return PyLong_FromLongLong((PY_LONG_LONG) value);
-#endif
- }
- }
- {
- int one = 1; int little = (int)*(unsigned char *)&one;
- unsigned char *bytes = (unsigned char *)&value;
- return _PyLong_FromByteArray(bytes, sizeof(int),
- little, !is_unsigned);
- }
-}
-
-/* CIntFromPyVerify */
- #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\
- __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0)
-#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\
- __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1)
-#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\
- {\
- func_type value = func_value;\
- if (sizeof(target_type) < sizeof(func_type)) {\
- if (unlikely(value != (func_type) (target_type) value)) {\
- func_type zero = 0;\
- if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\
- return (target_type) -1;\
- if (is_unsigned && unlikely(value < zero))\
- goto raise_neg_overflow;\
- else\
- goto raise_overflow;\
- }\
- }\
- return (target_type) value;\
- }
-
-/* CIntToPy */
- static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) {
- const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0;
- const int is_unsigned = neg_one > const_zero;
- if (is_unsigned) {
- if (sizeof(long) < sizeof(long)) {
- return PyInt_FromLong((long) value);
- } else if (sizeof(long) <= sizeof(unsigned long)) {
- return PyLong_FromUnsignedLong((unsigned long) value);
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) {
- return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);
-#endif
- }
- } else {
- if (sizeof(long) <= sizeof(long)) {
- return PyInt_FromLong((long) value);
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) {
- return PyLong_FromLongLong((PY_LONG_LONG) value);
-#endif
- }
- }
- {
- int one = 1; int little = (int)*(unsigned char *)&one;
- unsigned char *bytes = (unsigned char *)&value;
- return _PyLong_FromByteArray(bytes, sizeof(long),
- little, !is_unsigned);
- }
-}
-
-/* MemviewSliceCopyTemplate */
- static __Pyx_memviewslice
-__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs,
- const char *mode, int ndim,
- size_t sizeof_dtype, int contig_flag,
- int dtype_is_object)
-{
- __Pyx_RefNannyDeclarations
- int i;
- __Pyx_memviewslice new_mvs = { 0, 0, { 0 }, { 0 }, { 0 } };
- struct __pyx_memoryview_obj *from_memview = from_mvs->memview;
- Py_buffer *buf = &from_memview->view;
- PyObject *shape_tuple = NULL;
- PyObject *temp_int = NULL;
- struct __pyx_array_obj *array_obj = NULL;
- struct __pyx_memoryview_obj *memview_obj = NULL;
- __Pyx_RefNannySetupContext("__pyx_memoryview_copy_new_contig", 0);
- for (i = 0; i < ndim; i++) {
- if (unlikely(from_mvs->suboffsets[i] >= 0)) {
- PyErr_Format(PyExc_ValueError, "Cannot copy memoryview slice with "
- "indirect dimensions (axis %d)", i);
- goto fail;
- }
- }
- shape_tuple = PyTuple_New(ndim);
- if (unlikely(!shape_tuple)) {
- goto fail;
- }
- __Pyx_GOTREF(shape_tuple);
- for(i = 0; i < ndim; i++) {
- temp_int = PyInt_FromSsize_t(from_mvs->shape[i]);
- if(unlikely(!temp_int)) {
- goto fail;
- } else {
- PyTuple_SET_ITEM(shape_tuple, i, temp_int);
- temp_int = NULL;
- }
- }
- array_obj = __pyx_array_new(shape_tuple, sizeof_dtype, buf->format, (char *) mode, NULL);
- if (unlikely(!array_obj)) {
- goto fail;
- }
- __Pyx_GOTREF(array_obj);
- memview_obj = (struct __pyx_memoryview_obj *) __pyx_memoryview_new(
- (PyObject *) array_obj, contig_flag,
- dtype_is_object,
- from_mvs->memview->typeinfo);
- if (unlikely(!memview_obj))
- goto fail;
- if (unlikely(__Pyx_init_memviewslice(memview_obj, ndim, &new_mvs, 1) < 0))
- goto fail;
- if (unlikely(__pyx_memoryview_copy_contents(*from_mvs, new_mvs, ndim, ndim,
- dtype_is_object) < 0))
- goto fail;
- goto no_fail;
-fail:
- __Pyx_XDECREF(new_mvs.memview);
- new_mvs.memview = NULL;
- new_mvs.data = NULL;
-no_fail:
- __Pyx_XDECREF(shape_tuple);
- __Pyx_XDECREF(temp_int);
- __Pyx_XDECREF(array_obj);
- __Pyx_RefNannyFinishContext();
- return new_mvs;
-}
-
-/* CIntFromPy */
- static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) {
- const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0;
- const int is_unsigned = neg_one > const_zero;
-#if PY_MAJOR_VERSION < 3
- if (likely(PyInt_Check(x))) {
- if (sizeof(int) < sizeof(long)) {
- __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x))
- } else {
- long val = PyInt_AS_LONG(x);
- if (is_unsigned && unlikely(val < 0)) {
- goto raise_neg_overflow;
- }
- return (int) val;
- }
- } else
-#endif
- if (likely(PyLong_Check(x))) {
- if (is_unsigned) {
-#if CYTHON_USE_PYLONG_INTERNALS
- const digit* digits = ((PyLongObject*)x)->ob_digit;
- switch (Py_SIZE(x)) {
- case 0: return (int) 0;
- case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0])
- case 2:
- if (8 * sizeof(int) > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) {
- return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));
- }
- }
- break;
- case 3:
- if (8 * sizeof(int) > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) {
- return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));
- }
- }
- break;
- case 4:
- if (8 * sizeof(int) > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) {
- return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));
- }
- }
- break;
- }
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON
- if (unlikely(Py_SIZE(x) < 0)) {
- goto raise_neg_overflow;
- }
-#else
- {
- int result = PyObject_RichCompareBool(x, Py_False, Py_LT);
- if (unlikely(result < 0))
- return (int) -1;
- if (unlikely(result == 1))
- goto raise_neg_overflow;
- }
-#endif
- if (sizeof(int) <= sizeof(unsigned long)) {
- __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x))
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) {
- __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))
-#endif
- }
- } else {
-#if CYTHON_USE_PYLONG_INTERNALS
- const digit* digits = ((PyLongObject*)x)->ob_digit;
- switch (Py_SIZE(x)) {
- case 0: return (int) 0;
- case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0]))
- case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0])
- case -2:
- if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {
- return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));
- }
- }
- break;
- case 2:
- if (8 * sizeof(int) > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {
- return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));
- }
- }
- break;
- case -3:
- if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {
- return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));
- }
- }
- break;
- case 3:
- if (8 * sizeof(int) > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {
- return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));
- }
- }
- break;
- case -4:
- if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) {
- return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));
- }
- }
- break;
- case 4:
- if (8 * sizeof(int) > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) {
- return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));
- }
- }
- break;
- }
-#endif
- if (sizeof(int) <= sizeof(long)) {
- __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x))
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) {
- __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x))
-#endif
- }
- }
- {
-#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)
- PyErr_SetString(PyExc_RuntimeError,
- "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers");
-#else
- int val;
- PyObject *v = __Pyx_PyNumber_IntOrLong(x);
- #if PY_MAJOR_VERSION < 3
- if (likely(v) && !PyLong_Check(v)) {
- PyObject *tmp = v;
- v = PyNumber_Long(tmp);
- Py_DECREF(tmp);
- }
- #endif
- if (likely(v)) {
- int one = 1; int is_little = (int)*(unsigned char *)&one;
- unsigned char *bytes = (unsigned char *)&val;
- int ret = _PyLong_AsByteArray((PyLongObject *)v,
- bytes, sizeof(val),
- is_little, !is_unsigned);
- Py_DECREF(v);
- if (likely(!ret))
- return val;
- }
-#endif
- return (int) -1;
- }
- } else {
- int val;
- PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);
- if (!tmp) return (int) -1;
- val = __Pyx_PyInt_As_int(tmp);
- Py_DECREF(tmp);
- return val;
- }
-raise_overflow:
- PyErr_SetString(PyExc_OverflowError,
- "value too large to convert to int");
- return (int) -1;
-raise_neg_overflow:
- PyErr_SetString(PyExc_OverflowError,
- "can't convert negative value to int");
- return (int) -1;
-}
-
-/* CIntFromPy */
- static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) {
- const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0;
- const int is_unsigned = neg_one > const_zero;
-#if PY_MAJOR_VERSION < 3
- if (likely(PyInt_Check(x))) {
- if (sizeof(long) < sizeof(long)) {
- __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x))
- } else {
- long val = PyInt_AS_LONG(x);
- if (is_unsigned && unlikely(val < 0)) {
- goto raise_neg_overflow;
- }
- return (long) val;
- }
- } else
-#endif
- if (likely(PyLong_Check(x))) {
- if (is_unsigned) {
-#if CYTHON_USE_PYLONG_INTERNALS
- const digit* digits = ((PyLongObject*)x)->ob_digit;
- switch (Py_SIZE(x)) {
- case 0: return (long) 0;
- case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0])
- case 2:
- if (8 * sizeof(long) > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) {
- return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));
- }
- }
- break;
- case 3:
- if (8 * sizeof(long) > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) {
- return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));
- }
- }
- break;
- case 4:
- if (8 * sizeof(long) > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) {
- return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));
- }
- }
- break;
- }
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON
- if (unlikely(Py_SIZE(x) < 0)) {
- goto raise_neg_overflow;
- }
-#else
- {
- int result = PyObject_RichCompareBool(x, Py_False, Py_LT);
- if (unlikely(result < 0))
- return (long) -1;
- if (unlikely(result == 1))
- goto raise_neg_overflow;
- }
-#endif
- if (sizeof(long) <= sizeof(unsigned long)) {
- __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x))
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) {
- __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))
-#endif
- }
- } else {
-#if CYTHON_USE_PYLONG_INTERNALS
- const digit* digits = ((PyLongObject*)x)->ob_digit;
- switch (Py_SIZE(x)) {
- case 0: return (long) 0;
- case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0]))
- case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0])
- case -2:
- if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {
- return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));
- }
- }
- break;
- case 2:
- if (8 * sizeof(long) > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {
- return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));
- }
- }
- break;
- case -3:
- if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {
- return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));
- }
- }
- break;
- case 3:
- if (8 * sizeof(long) > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {
- return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));
- }
- }
- break;
- case -4:
- if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {
- return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));
- }
- }
- break;
- case 4:
- if (8 * sizeof(long) > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {
- return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));
- }
- }
- break;
- }
-#endif
- if (sizeof(long) <= sizeof(long)) {
- __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x))
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) {
- __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x))
-#endif
- }
- }
- {
-#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)
- PyErr_SetString(PyExc_RuntimeError,
- "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers");
-#else
- long val;
- PyObject *v = __Pyx_PyNumber_IntOrLong(x);
- #if PY_MAJOR_VERSION < 3
- if (likely(v) && !PyLong_Check(v)) {
- PyObject *tmp = v;
- v = PyNumber_Long(tmp);
- Py_DECREF(tmp);
- }
- #endif
- if (likely(v)) {
- int one = 1; int is_little = (int)*(unsigned char *)&one;
- unsigned char *bytes = (unsigned char *)&val;
- int ret = _PyLong_AsByteArray((PyLongObject *)v,
- bytes, sizeof(val),
- is_little, !is_unsigned);
- Py_DECREF(v);
- if (likely(!ret))
- return val;
- }
-#endif
- return (long) -1;
- }
- } else {
- long val;
- PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);
- if (!tmp) return (long) -1;
- val = __Pyx_PyInt_As_long(tmp);
- Py_DECREF(tmp);
- return val;
- }
-raise_overflow:
- PyErr_SetString(PyExc_OverflowError,
- "value too large to convert to long");
- return (long) -1;
-raise_neg_overflow:
- PyErr_SetString(PyExc_OverflowError,
- "can't convert negative value to long");
- return (long) -1;
-}
-
-/* CIntFromPy */
- static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *x) {
- const char neg_one = (char) ((char) 0 - (char) 1), const_zero = (char) 0;
- const int is_unsigned = neg_one > const_zero;
-#if PY_MAJOR_VERSION < 3
- if (likely(PyInt_Check(x))) {
- if (sizeof(char) < sizeof(long)) {
- __PYX_VERIFY_RETURN_INT(char, long, PyInt_AS_LONG(x))
- } else {
- long val = PyInt_AS_LONG(x);
- if (is_unsigned && unlikely(val < 0)) {
- goto raise_neg_overflow;
- }
- return (char) val;
- }
- } else
-#endif
- if (likely(PyLong_Check(x))) {
- if (is_unsigned) {
-#if CYTHON_USE_PYLONG_INTERNALS
- const digit* digits = ((PyLongObject*)x)->ob_digit;
- switch (Py_SIZE(x)) {
- case 0: return (char) 0;
- case 1: __PYX_VERIFY_RETURN_INT(char, digit, digits[0])
- case 2:
- if (8 * sizeof(char) > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(char) >= 2 * PyLong_SHIFT) {
- return (char) (((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]));
- }
- }
- break;
- case 3:
- if (8 * sizeof(char) > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(char) >= 3 * PyLong_SHIFT) {
- return (char) (((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]));
- }
- }
- break;
- case 4:
- if (8 * sizeof(char) > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(char) >= 4 * PyLong_SHIFT) {
- return (char) (((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]));
- }
- }
- break;
- }
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON
- if (unlikely(Py_SIZE(x) < 0)) {
- goto raise_neg_overflow;
- }
-#else
- {
- int result = PyObject_RichCompareBool(x, Py_False, Py_LT);
- if (unlikely(result < 0))
- return (char) -1;
- if (unlikely(result == 1))
- goto raise_neg_overflow;
- }
-#endif
- if (sizeof(char) <= sizeof(unsigned long)) {
- __PYX_VERIFY_RETURN_INT_EXC(char, unsigned long, PyLong_AsUnsignedLong(x))
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(char) <= sizeof(unsigned PY_LONG_LONG)) {
- __PYX_VERIFY_RETURN_INT_EXC(char, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))
-#endif
- }
- } else {
-#if CYTHON_USE_PYLONG_INTERNALS
- const digit* digits = ((PyLongObject*)x)->ob_digit;
- switch (Py_SIZE(x)) {
- case 0: return (char) 0;
- case -1: __PYX_VERIFY_RETURN_INT(char, sdigit, (sdigit) (-(sdigit)digits[0]))
- case 1: __PYX_VERIFY_RETURN_INT(char, digit, +digits[0])
- case -2:
- if (8 * sizeof(char) - 1 > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) {
- return (char) (((char)-1)*(((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0])));
- }
- }
- break;
- case 2:
- if (8 * sizeof(char) > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) {
- return (char) ((((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0])));
- }
- }
- break;
- case -3:
- if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) {
- return (char) (((char)-1)*(((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])));
- }
- }
- break;
- case 3:
- if (8 * sizeof(char) > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) {
- return (char) ((((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])));
- }
- }
- break;
- case -4:
- if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(char) - 1 > 4 * PyLong_SHIFT) {
- return (char) (((char)-1)*(((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])));
- }
- }
- break;
- case 4:
- if (8 * sizeof(char) > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(char) - 1 > 4 * PyLong_SHIFT) {
- return (char) ((((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])));
- }
- }
- break;
- }
-#endif
- if (sizeof(char) <= sizeof(long)) {
- __PYX_VERIFY_RETURN_INT_EXC(char, long, PyLong_AsLong(x))
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(char) <= sizeof(PY_LONG_LONG)) {
- __PYX_VERIFY_RETURN_INT_EXC(char, PY_LONG_LONG, PyLong_AsLongLong(x))
-#endif
- }
- }
- {
-#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)
- PyErr_SetString(PyExc_RuntimeError,
- "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers");
-#else
- char val;
- PyObject *v = __Pyx_PyNumber_IntOrLong(x);
- #if PY_MAJOR_VERSION < 3
- if (likely(v) && !PyLong_Check(v)) {
- PyObject *tmp = v;
- v = PyNumber_Long(tmp);
- Py_DECREF(tmp);
- }
- #endif
- if (likely(v)) {
- int one = 1; int is_little = (int)*(unsigned char *)&one;
- unsigned char *bytes = (unsigned char *)&val;
- int ret = _PyLong_AsByteArray((PyLongObject *)v,
- bytes, sizeof(val),
- is_little, !is_unsigned);
- Py_DECREF(v);
- if (likely(!ret))
- return val;
- }
-#endif
- return (char) -1;
- }
- } else {
- char val;
- PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);
- if (!tmp) return (char) -1;
- val = __Pyx_PyInt_As_char(tmp);
- Py_DECREF(tmp);
- return val;
- }
-raise_overflow:
- PyErr_SetString(PyExc_OverflowError,
- "value too large to convert to char");
- return (char) -1;
-raise_neg_overflow:
- PyErr_SetString(PyExc_OverflowError,
- "can't convert negative value to char");
- return (char) -1;
-}
-
-/* CheckBinaryVersion */
- static int __Pyx_check_binary_version(void) {
- char ctversion[4], rtversion[4];
- PyOS_snprintf(ctversion, 4, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION);
- PyOS_snprintf(rtversion, 4, "%s", Py_GetVersion());
- if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) {
- char message[200];
- PyOS_snprintf(message, sizeof(message),
- "compiletime version %s of module '%.100s' "
- "does not match runtime version %s",
- ctversion, __Pyx_MODULE_NAME, rtversion);
- return PyErr_WarnEx(NULL, message, 1);
- }
- return 0;
-}
-
-/* InitStrings */
- static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) {
- while (t->p) {
- #if PY_MAJOR_VERSION < 3
- if (t->is_unicode) {
- *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL);
- } else if (t->intern) {
- *t->p = PyString_InternFromString(t->s);
- } else {
- *t->p = PyString_FromStringAndSize(t->s, t->n - 1);
- }
- #else
- if (t->is_unicode | t->is_str) {
- if (t->intern) {
- *t->p = PyUnicode_InternFromString(t->s);
- } else if (t->encoding) {
- *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL);
- } else {
- *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1);
- }
- } else {
- *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1);
- }
- #endif
- if (!*t->p)
- return -1;
- if (PyObject_Hash(*t->p) == -1)
- return -1;
- ++t;
- }
- return 0;
-}
-
-static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) {
- return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str));
-}
-static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) {
- Py_ssize_t ignore;
- return __Pyx_PyObject_AsStringAndSize(o, &ignore);
-}
-#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT
-#if !CYTHON_PEP393_ENABLED
-static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) {
- char* defenc_c;
- PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL);
- if (!defenc) return NULL;
- defenc_c = PyBytes_AS_STRING(defenc);
-#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
- {
- char* end = defenc_c + PyBytes_GET_SIZE(defenc);
- char* c;
- for (c = defenc_c; c < end; c++) {
- if ((unsigned char) (*c) >= 128) {
- PyUnicode_AsASCIIString(o);
- return NULL;
- }
- }
- }
-#endif
- *length = PyBytes_GET_SIZE(defenc);
- return defenc_c;
-}
-#else
-static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) {
- if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL;
-#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
- if (likely(PyUnicode_IS_ASCII(o))) {
- *length = PyUnicode_GET_LENGTH(o);
- return PyUnicode_AsUTF8(o);
- } else {
- PyUnicode_AsASCIIString(o);
- return NULL;
- }
-#else
- return PyUnicode_AsUTF8AndSize(o, length);
-#endif
-}
-#endif
-#endif
-static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) {
-#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT
- if (
-#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
- __Pyx_sys_getdefaultencoding_not_ascii &&
-#endif
- PyUnicode_Check(o)) {
- return __Pyx_PyUnicode_AsStringAndSize(o, length);
- } else
-#endif
-#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE))
- if (PyByteArray_Check(o)) {
- *length = PyByteArray_GET_SIZE(o);
- return PyByteArray_AS_STRING(o);
- } else
-#endif
- {
- char* result;
- int r = PyBytes_AsStringAndSize(o, &result, length);
- if (unlikely(r < 0)) {
- return NULL;
- } else {
- return result;
- }
- }
-}
-static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) {
- int is_true = x == Py_True;
- if (is_true | (x == Py_False) | (x == Py_None)) return is_true;
- else return PyObject_IsTrue(x);
-}
-static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) {
- int retval;
- if (unlikely(!x)) return -1;
- retval = __Pyx_PyObject_IsTrue(x);
- Py_DECREF(x);
- return retval;
-}
-static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) {
-#if PY_MAJOR_VERSION >= 3
- if (PyLong_Check(result)) {
- if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1,
- "__int__ returned non-int (type %.200s). "
- "The ability to return an instance of a strict subclass of int "
- "is deprecated, and may be removed in a future version of Python.",
- Py_TYPE(result)->tp_name)) {
- Py_DECREF(result);
- return NULL;
- }
- return result;
- }
-#endif
- PyErr_Format(PyExc_TypeError,
- "__%.4s__ returned non-%.4s (type %.200s)",
- type_name, type_name, Py_TYPE(result)->tp_name);
- Py_DECREF(result);
- return NULL;
-}
-static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) {
-#if CYTHON_USE_TYPE_SLOTS
- PyNumberMethods *m;
-#endif
- const char *name = NULL;
- PyObject *res = NULL;
-#if PY_MAJOR_VERSION < 3
- if (likely(PyInt_Check(x) || PyLong_Check(x)))
-#else
- if (likely(PyLong_Check(x)))
-#endif
- return __Pyx_NewRef(x);
-#if CYTHON_USE_TYPE_SLOTS
- m = Py_TYPE(x)->tp_as_number;
- #if PY_MAJOR_VERSION < 3
- if (m && m->nb_int) {
- name = "int";
- res = m->nb_int(x);
- }
- else if (m && m->nb_long) {
- name = "long";
- res = m->nb_long(x);
- }
- #else
- if (likely(m && m->nb_int)) {
- name = "int";
- res = m->nb_int(x);
- }
- #endif
-#else
- if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) {
- res = PyNumber_Int(x);
- }
-#endif
- if (likely(res)) {
-#if PY_MAJOR_VERSION < 3
- if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) {
-#else
- if (unlikely(!PyLong_CheckExact(res))) {
-#endif
- return __Pyx_PyNumber_IntOrLongWrongResultType(res, name);
- }
- }
- else if (!PyErr_Occurred()) {
- PyErr_SetString(PyExc_TypeError,
- "an integer is required");
- }
- return res;
-}
-static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) {
- Py_ssize_t ival;
- PyObject *x;
-#if PY_MAJOR_VERSION < 3
- if (likely(PyInt_CheckExact(b))) {
- if (sizeof(Py_ssize_t) >= sizeof(long))
- return PyInt_AS_LONG(b);
- else
- return PyInt_AsSsize_t(b);
- }
-#endif
- if (likely(PyLong_CheckExact(b))) {
- #if CYTHON_USE_PYLONG_INTERNALS
- const digit* digits = ((PyLongObject*)b)->ob_digit;
- const Py_ssize_t size = Py_SIZE(b);
- if (likely(__Pyx_sst_abs(size) <= 1)) {
- ival = likely(size) ? digits[0] : 0;
- if (size == -1) ival = -ival;
- return ival;
- } else {
- switch (size) {
- case 2:
- if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) {
- return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));
- }
- break;
- case -2:
- if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) {
- return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));
- }
- break;
- case 3:
- if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) {
- return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));
- }
- break;
- case -3:
- if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) {
- return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));
- }
- break;
- case 4:
- if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) {
- return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));
- }
- break;
- case -4:
- if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) {
- return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));
- }
- break;
- }
- }
- #endif
- return PyLong_AsSsize_t(b);
- }
- x = PyNumber_Index(b);
- if (!x) return -1;
- ival = PyInt_AsSsize_t(x);
- Py_DECREF(x);
- return ival;
-}
-static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) {
- return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False);
-}
-static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) {
- return PyInt_FromSize_t(ival);
-}
-
-
-#endif /* Py_PYTHON_H */
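The generated `__Pyx_PyInt_As_int` / `__Pyx_PyInt_As_long` / `__Pyx_PyInt_As_char` helpers above all share one fast path: they read a CPython `int` directly as its internal little-endian array of base-2**PyLong_SHIFT digits (`((PyLongObject*)x)->ob_digit`) before falling back to the generic `PyLong_AsLong`-style conversions. A minimal Python sketch of that digit scheme, assuming a CPython build where `PyLong_SHIFT` is 30 (builds with 15-bit digits exist too); `digits_of` and `reassemble` are illustrative names, not part of the generated code:

```python
PYLONG_SHIFT = 30  # assumption: typical 64-bit CPython build (some builds use 15-bit digits)

def digits_of(value: int) -> list[int]:
    """Decompose a non-negative int into base-2**PYLONG_SHIFT digits, least significant first."""
    assert value >= 0
    digits = []
    while value:
        digits.append(value & ((1 << PYLONG_SHIFT) - 1))
        value >>= PYLONG_SHIFT
    return digits or [0]

def reassemble(digits: list[int]) -> int:
    """Recombine digits the way the case 2/3/4 branches do: shift-or from the most significant digit down."""
    value = 0
    for d in reversed(digits):
        value = (value << PYLONG_SHIFT) | d
    return value

assert reassemble(digits_of(123456789012345)) == 123456789012345
```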
diff --git a/spaces/EsoCode/text-generation-webui/css/html_readable_style.css b/spaces/EsoCode/text-generation-webui/css/html_readable_style.css
deleted file mode 100644
index cd5fca97868167718d239b4be72e9271971807e2..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/css/html_readable_style.css
+++ /dev/null
@@ -1,29 +0,0 @@
-.container {
- max-width: 600px;
- margin-left: auto;
- margin-right: auto;
- background-color: rgb(31, 41, 55);
- padding: 3em;
- word-break: break-word;
- overflow-wrap: anywhere;
- color: #efefef !important;
-}
-
-.container p, .container li {
- font-size: 16px !important;
- color: #efefef !important;
- margin-bottom: 22px;
- line-height: 1.4 !important;
-}
-
-.container li > p {
- display: inline !important;
-}
-
-.container code {
- overflow-x: auto;
-}
-
-.container :not(pre) > code {
- white-space: normal !important;
-}
\ No newline at end of file
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/seg/README.md b/spaces/EuroPython2022/mmocr-demo/configs/textrecog/seg/README.md
deleted file mode 100644
index f8ab29e61727e3fa648c2aa090fcae8076bbf5e2..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/seg/README.md
+++ /dev/null
@@ -1,48 +0,0 @@
-# SegOCR
-
-
-
-## Abstract
-
-Just a simple Seg-based baseline for text recognition tasks.
-
-## Dataset
-
-### Train Dataset
-
-| trainset | instance_num | repeat_num | source |
-| :-------: | :----------: | :--------: | :----: |
-| SynthText | 7266686 | 1 | synth |
-
-### Test Dataset
-
-| testset | instance_num | type |
-| :-----: | :----------: | :-------: |
-| IIIT5K | 3000 | regular |
-| SVT | 647 | regular |
-| IC13 | 1015 | regular |
-| CT80 | 288 | irregular |
-
-## Results and Models
-
-| Backbone | Neck   | Head | IIIT5K (regular) | SVT (regular) | IC13 (regular) | CT80 (irregular) | download |
-| :------: | :----: | :--: | :--------------: | :-----------: | :------------: | :--------------: | :------: |
-| R31-1/16 | FPNOCR | 1x   |       90.9       |     81.8      |      90.7      |       80.9       | [model](https://download.openmmlab.com/mmocr/textrecog/seg/seg_r31_1by16_fpnocr_academic-72235b11.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/seg/20210325_112835.log.json) |
-
-```{note}
-
-- `R31-1/16` means the feature map from the backbone is 1/16 of the input image size (both height and width).
-- `1x` means the feature map from the head has the same size (both height and width) as the input image.
-```
-
-## Citation
-
-```bibtex
-@unpublished{key,
- title={SegOCR Simple Baseline.},
- author={},
- note={Unpublished Manuscript},
- year={2021}
-}
-```
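The `R31-1/16` / `1x` convention in the note above is easiest to see with concrete numbers; a tiny check with a hypothetical 32x160 recognition input:

```python
# Hypothetical input size; the 1/16 and 1x factors come from the note above.
input_h, input_w = 32, 160
backbone_feat = (input_h // 16, input_w // 16)  # R31-1/16 -> (2, 10)
head_feat = (input_h, input_w)                  # 1x -> same as input
assert backbone_feat == (2, 10) and head_feat == (32, 160)
```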
diff --git a/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/modules/F0Predictor/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Foti/webui/app.py b/spaces/Foti/webui/app.py
deleted file mode 100644
index 1cd83154fe013ef1426ea1951f940da6b0db7a92..0000000000000000000000000000000000000000
--- a/spaces/Foti/webui/app.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import os
-from subprocess import getoutput
-
-gpu_info = getoutput('nvidia-smi')
-if("A10G" in gpu_info):
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl")
-elif("T4" in gpu_info):
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl")
-
-os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui")
-os.chdir("/home/user/app/stable-diffusion-webui")
-
-os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py")
-os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''')
-os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py")
-os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-
-# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header----------------------------
-os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py")
-os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
-# ---------------------------------------------------------------------------------------------------------------------------------------------------
-
-if "IS_SHARED_UI" in os.environ:
- os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/")
-
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json")
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json")
-
- os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}")
- os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}")
- os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}")
-
- os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding")
-else:
-    # Please duplicate this space and delete the # in front of the custom script you want to use, or add more custom scripts here with the same structure: os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py")
- os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py")
-
-    # Please duplicate this space and delete the # in front of the extension you want to use, or add more extensions here with the same structure: os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME")
- #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study")
- os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser")
- os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui")
-
-    # Please duplicate this space and delete the # in front of the model you want to use, or add more ckpts here with the same structure: os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt")
- #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt")
- #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt")
- #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt")
- #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt")
- #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt")
- #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt")
- #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt")
- #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt")
- #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt")
- #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt")
-
- #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt")
- #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt")
-
- #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt")
- #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml")
-
- os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt")
- os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.yaml")
-
- os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test")
-
\ No newline at end of file
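The deleted `app.py` leans heavily on `sed -i -e '/pattern/r extra.py'`, which inserts the contents of a file after each line matching a pattern. A rough Python equivalent of that one sed idiom, using substring matching for simplicity (the function name and the commented file names are illustrative):

```python
from pathlib import Path

def insert_after_match(target: str, pattern: str, insert_file: str) -> None:
    """Roughly mimic `sed -i -e '/pattern/r insert_file' target`:
    append the file's contents after every line containing the pattern."""
    extra = Path(insert_file).read_text()
    if not extra.endswith("\n"):
        extra += "\n"
    out_lines = []
    for line in Path(target).read_text().splitlines(keepends=True):
        out_lines.append(line)
        if pattern in line:
            out_lines.append(extra)
    Path(target).write_text("".join(out_lines))

# e.g. the env patch near the top of the script:
# insert_after_match("modules/ui.py", "import image_from_url_text", "env_patch.py")
```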
diff --git a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/respace.py b/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/respace.py
deleted file mode 100644
index fa0e3972184f83a3bea359f25f53a9e69d691d3a..0000000000000000000000000000000000000000
--- a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/respace.py
+++ /dev/null
@@ -1,117 +0,0 @@
-"""
-Utilities for changing sampling schedules of a trained model.
-
-Simplified from: https://github.com/openai/guided-diffusion/blob/main/guided_diffusion/respace.py
-"""
-
-import numpy as np
-import torch as th
-
-from .gaussian_diffusion import GaussianDiffusion
-
-
-def space_timesteps(num_timesteps, section_counts):
- """
- Create a list of timesteps to use from an original diffusion process,
- given the number of timesteps we want to take from equally-sized portions
- of the original process.
-
-    For example, if there are 300 timesteps and the section counts are [10,15,20]
- then the first 100 timesteps are strided to be 10 timesteps, the second 100
- are strided to be 15 timesteps, and the final 100 are strided to be 20.
-
- :param num_timesteps: the number of diffusion steps in the original
- process to divide up.
- :param section_counts: either a list of numbers, or a string containing
- comma-separated numbers, indicating the step count
- per section. As a special case, use "ddimN" where N
- is a number of steps to use the striding from the
- DDIM paper.
- :return: a set of diffusion steps from the original process to use.
- """
- if isinstance(section_counts, str):
- if section_counts.startswith("ddim"):
- desired_count = int(section_counts[len("ddim") :])
- for i in range(1, num_timesteps):
- if len(range(0, num_timesteps, i)) == desired_count:
- return set(range(0, num_timesteps, i))
-            raise ValueError(f"cannot create exactly {desired_count} steps with an integer stride")
- elif section_counts == "fast27":
- steps = space_timesteps(num_timesteps, "10,10,3,2,2")
- # Help reduce DDIM artifacts from noisiest timesteps.
- steps.remove(num_timesteps - 1)
- steps.add(num_timesteps - 3)
- return steps
- section_counts = [int(x) for x in section_counts.split(",")]
- size_per = num_timesteps // len(section_counts)
- extra = num_timesteps % len(section_counts)
- start_idx = 0
- all_steps = []
- for i, section_count in enumerate(section_counts):
- size = size_per + (1 if i < extra else 0)
- if size < section_count:
- raise ValueError(f"cannot divide section of {size} steps into {section_count}")
- if section_count <= 1:
- frac_stride = 1
- else:
- frac_stride = (size - 1) / (section_count - 1)
- cur_idx = 0.0
- taken_steps = []
- for _ in range(section_count):
- taken_steps.append(start_idx + round(cur_idx))
- cur_idx += frac_stride
- all_steps += taken_steps
- start_idx += size
- return set(all_steps)
-
-
-class SpacedDiffusion(GaussianDiffusion):
- """
- A diffusion process which can skip steps in a base diffusion process.
-
- :param use_timesteps: a collection (sequence or set) of timesteps from the
- original diffusion process to retain.
- :param kwargs: the kwargs to create the base diffusion process.
- """
-
- def __init__(self, use_timesteps, **kwargs):
- self.use_timesteps = set(use_timesteps)
- self.timestep_map = []
- self.original_num_steps = len(kwargs["betas"])
-
- base_diffusion = GaussianDiffusion(**kwargs) # pylint: disable=missing-kwoa
- last_alpha_cumprod = 1.0
- new_betas = []
- for i, alpha_cumprod in enumerate(base_diffusion.alphas_cumprod):
- if i in self.use_timesteps:
- new_betas.append(1 - alpha_cumprod / last_alpha_cumprod)
- last_alpha_cumprod = alpha_cumprod
- self.timestep_map.append(i)
- kwargs["betas"] = np.array(new_betas)
- super().__init__(**kwargs)
-
- def p_mean_variance(self, model, *args, **kwargs):
- return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
-
- def condition_mean(self, cond_fn, *args, **kwargs):
- return super().condition_mean(self._wrap_model(cond_fn), *args, **kwargs)
-
- def condition_score(self, cond_fn, *args, **kwargs):
- return super().condition_score(self._wrap_model(cond_fn), *args, **kwargs)
-
- def _wrap_model(self, model):
- if isinstance(model, _WrappedModel):
- return model
- return _WrappedModel(model, self.timestep_map, self.original_num_steps)
-
-
-class _WrappedModel:
- def __init__(self, model, timestep_map, original_num_steps):
- self.model = model
- self.timestep_map = timestep_map
- self.original_num_steps = original_num_steps
-
- def __call__(self, x, ts, **kwargs):
- map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype)
- new_ts = map_tensor[ts]
- return self.model(x, new_ts, **kwargs)
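A usage sketch of `space_timesteps` following its own docstring; the import path assumes the `glide_text2im` package layout shown in the file path above:

```python
from glide_text2im.respace import space_timesteps  # path assumed from the diff header

# The docstring's example: 300 steps split into three equal 100-step portions,
# strided down to 10, 15 and 20 retained steps respectively.
steps = space_timesteps(300, [10, 15, 20])
assert len(steps) == 10 + 15 + 20

# The "ddimN" shorthand keeps N evenly strided steps (stride 12 here, since 300 / 12 = 25).
assert len(space_timesteps(300, "ddim25")) == 25
```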
diff --git a/spaces/Gladiator/gradient_dissent_bot/src/extract_questions.py b/spaces/Gladiator/gradient_dissent_bot/src/extract_questions.py
deleted file mode 100644
index 4a5956fec24424bcfb51dfff0a837a6f25ff3c91..0000000000000000000000000000000000000000
--- a/spaces/Gladiator/gradient_dissent_bot/src/extract_questions.py
+++ /dev/null
@@ -1,116 +0,0 @@
-import os
-import re
-from dataclasses import asdict
-
-import pandas as pd
-from langchain.callbacks import get_openai_callback
-from langchain.chains import LLMChain
-from langchain.chat_models import ChatOpenAI
-from langchain.document_loaders import DataFrameLoader
-from langchain.prompts import PromptTemplate
-from langchain.text_splitter import TokenTextSplitter
-from tqdm import tqdm
-from wandb.integration.langchain import WandbTracer
-
-import wandb
-from config import config
-
-
-def get_data(artifact_name: str, total_episodes: int = None):
- podcast_artifact = wandb.use_artifact(artifact_name, type="dataset")
- podcast_artifact_dir = podcast_artifact.download(config.root_artifact_dir)
- filename = artifact_name.split(":")[0].split("/")[-1]
- df = pd.read_csv(os.path.join(podcast_artifact_dir, f"{filename}.csv"))
- if total_episodes is not None:
- df = df.iloc[:total_episodes]
- return df
-
-
-def extract_questions(episode_df: pd.DataFrame):
- # load docs into langchain format
- loader = DataFrameLoader(episode_df, page_content_column="transcript")
- data = loader.load()
-
- # split the documents
- text_splitter = TokenTextSplitter.from_tiktoken_encoder(chunk_size=1000, chunk_overlap=0)
- docs = text_splitter.split_documents(data)
- print(f"Number of documents for podcast {data[0].metadata['title']}: {len(docs)}")
-
- # initialize LLM
- llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
-
- # define prompt
- prompt = """You are provided with a short transcript from a podcast episode.
- Your task is to extract the relevant and most important questions one might ask from the transcript and present them in a bullet-point list.
- Ensure that the total number of questions is no more than 3.
-
- TRANSCRIPT:
-
- {text}
-
- QUESTIONS:"""
-
- prompt_template = PromptTemplate(template=prompt, input_variables=["text"])
-
- pattern = r"\d+\.\s"
- que_by_llm = []
- for doc in docs:
- llm_chain = LLMChain(llm=llm, prompt=prompt_template)
- out = llm_chain.run(doc)
- cleaned_ques = re.sub(pattern, "", out).split("\n")
- que_by_llm.extend(cleaned_ques)
-
- return que_by_llm
-
-
-if __name__ == "__main__":
- # initialize wandb tracer
- WandbTracer.init(
- {
- "project": config.project_name,
- "job_type": "extract_questions",
- "config": asdict(config),
- }
- )
-
- # get data
- df = get_data(artifact_name=config.summarized_data_artifact)
-
- questions = []
- with get_openai_callback() as cb:
- for episode in tqdm(
- df.iterrows(), total=len(df), desc="Extracting questions from episodes"
- ):
- episode_data = episode[1].to_frame().T
-
- episode_questions = extract_questions(episode_data)
- questions.append(episode_questions)
-
- print("*" * 25)
- print(cb)
- print("*" * 25)
-
- wandb.log(
- {
- "total_prompt_tokens": cb.prompt_tokens,
- "total_completion_tokens": cb.completion_tokens,
- "total_tokens": cb.total_tokens,
- "total_cost": cb.total_cost,
- }
- )
-
- df["questions"] = questions
-
- # log to wandb artifact
- path_to_save = os.path.join(config.root_data_dir, "summarized_que_podcasts.csv")
- df.to_csv(path_to_save, index=False)
- artifact = wandb.Artifact("summarized_que_podcasts", type="dataset")
- artifact.add_file(path_to_save)
- wandb.log_artifact(artifact)
-
- # create wandb table
- df["questions"] = df["questions"].apply(lambda x: "\n".join(x))
- table = wandb.Table(dataframe=df)
- wandb.log({"summarized_que_podcasts": table})
-
- WandbTracer.finish()
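The question cleanup in `extract_questions` hinges on a single regex that strips the LLM's numbered-list formatting; a self-contained check of that step (the sample questions are made up):

```python
import re

pattern = r"\d+\.\s"  # same pattern as in extract_questions
out = "1. What is covered in this episode?\n2. How are transcripts chunked?\n3. Why cap the list at three questions?"
cleaned = re.sub(pattern, "", out).split("\n")
assert cleaned == [
    "What is covered in this episode?",
    "How are transcripts chunked?",
    "Why cap the list at three questions?",
]
```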
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/htc/htc_x101_64x4d_fpn_16x1_20e_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/htc/htc_x101_64x4d_fpn_16x1_20e_coco.py
deleted file mode 100644
index b140f75182cd4832857b6a86fe11b2961703a17c..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/htc/htc_x101_64x4d_fpn_16x1_20e_coco.py
+++ /dev/null
@@ -1,18 +0,0 @@
-_base_ = './htc_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_64x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=64,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'))
-data = dict(samples_per_gpu=1, workers_per_gpu=1)
-# learning policy
-lr_config = dict(step=[16, 19])
-runner = dict(type='EpochBasedRunner', max_epochs=20)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_769x769_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_769x769_40k_cityscapes.py
deleted file mode 100644
index c3c92eb26f8fead94f5ad7ac7d7fb60d92c57114..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_769x769_40k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './deeplabv3plus_r50-d8_769x769_40k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/core/utils/misc.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/core/utils/misc.py
deleted file mode 100644
index eb862a82bd47c8624db3dd5c6fb6ad8a03b62466..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/core/utils/misc.py
+++ /dev/null
@@ -1,17 +0,0 @@
-def add_prefix(inputs, prefix):
- """Add prefix for dict.
-
- Args:
- inputs (dict): The input dict with str keys.
- prefix (str): The prefix to add.
-
-    Returns:
-        dict: The dict with keys updated with ``prefix``.
- """
-
- outputs = dict()
- for name, value in inputs.items():
- outputs[f'{prefix}.{name}'] = value
-
- return outputs
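A quick usage sketch of `add_prefix` as defined above (the metric dict is invented for illustration):

```python
# Namespacing per-head metrics before merging them into a single log dict.
losses = {'loss_seg': 0.42, 'acc_seg': 0.91}
print(add_prefix(losses, 'decode'))
# {'decode.loss_seg': 0.42, 'decode.acc_seg': 0.91}
```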
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/models/builders.py b/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/models/builders.py
deleted file mode 100644
index 77ee5f96fea2e3c9e475fe961bc1a5ee473ed8eb..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/models/builders.py
+++ /dev/null
@@ -1,218 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-All the functions to build the relevant models and modules
-from the Hydra config.
-"""
-
-import typing as tp
-import warnings
-
-import audiocraft
-import omegaconf
-import torch
-
-from .encodec import CompressionModel, EncodecModel, FlattenedCompressionModel # noqa
-from .lm import LMModel
-from ..modules.codebooks_patterns import (
- CodebooksPatternProvider,
- DelayedPatternProvider,
- ParallelPatternProvider,
- UnrolledPatternProvider,
- VALLEPattern,
- MusicLMPattern,
-)
-from ..modules.conditioners import (
- BaseConditioner,
- ConditioningProvider,
- LUTConditioner,
- T5Conditioner,
- ConditionFuser,
- ChromaStemConditioner,
-)
-from .. import quantization as qt
-from ..utils.utils import dict_from_config
-
-
-def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer:
- klass = {
- 'no_quant': qt.DummyQuantizer,
- 'rvq': qt.ResidualVectorQuantizer
- }[quantizer]
- kwargs = dict_from_config(getattr(cfg, quantizer))
- if quantizer != 'no_quant':
- kwargs['dimension'] = dimension
- return klass(**kwargs)
-
-
-def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig):
- if encoder_name == 'seanet':
- kwargs = dict_from_config(getattr(cfg, 'seanet'))
- encoder_override_kwargs = kwargs.pop('encoder')
- decoder_override_kwargs = kwargs.pop('decoder')
- encoder_kwargs = {**kwargs, **encoder_override_kwargs}
- decoder_kwargs = {**kwargs, **decoder_override_kwargs}
- encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs)
- decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs)
- return encoder, decoder
- else:
-        raise KeyError(f'Unexpected autoencoder {encoder_name}')
-
-
-def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel:
- """Instantiate a compression model.
- """
- if cfg.compression_model == 'encodec':
- kwargs = dict_from_config(getattr(cfg, 'encodec'))
- encoder_name = kwargs.pop('autoencoder')
- quantizer_name = kwargs.pop('quantizer')
- encoder, decoder = get_encodec_autoencoder(encoder_name, cfg)
- quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension)
- frame_rate = kwargs['sample_rate'] // encoder.hop_length
- renormalize = kwargs.pop('renormalize', None)
- renorm = kwargs.pop('renorm')
- if renormalize is None:
- renormalize = renorm is not None
- warnings.warn("You are using a deprecated EnCodec model. Please migrate to new renormalization.")
- return EncodecModel(encoder, decoder, quantizer,
- frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device)
- else:
- raise KeyError(f'Unexpected compression model {cfg.compression_model}')
-
-
-def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel:
- """Instantiate a transformer LM.
- """
- if cfg.lm_model == 'transformer_lm':
- kwargs = dict_from_config(getattr(cfg, 'transformer_lm'))
- n_q = kwargs['n_q']
- q_modeling = kwargs.pop('q_modeling', None)
- codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern')
- attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout'))
- cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance'))
- cfg_prob, cfg_coef = cls_free_guidance["training_dropout"], cls_free_guidance["inference_coef"]
- fuser = get_condition_fuser(cfg)
- condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device)
-        if len(fuser.fuse2cond['cross']) > 0:  # enforce cross-attention programmatically
- kwargs['cross_attention'] = True
- if codebooks_pattern_cfg.modeling is None:
- assert q_modeling is not None, \
- 'LM model should either have a codebook pattern defined or transformer_lm.q_modeling'
- codebooks_pattern_cfg = omegaconf.OmegaConf.create(
- {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}}
- )
- pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg)
- return LMModel(
- pattern_provider=pattern_provider,
- condition_provider=condition_provider,
- fuser=fuser,
- cfg_dropout=cfg_prob,
- cfg_coef=cfg_coef,
- attribute_dropout=attribute_dropout,
- dtype=getattr(torch, cfg.dtype),
- device=cfg.device,
- **kwargs
- ).to(cfg.device)
- else:
- raise KeyError(f'Unexpected LM model {cfg.lm_model}')
-
-
-def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider:
- """Instantiate a conditioning model.
- """
- device = cfg.device
- duration = cfg.dataset.segment_duration
- cfg = getattr(cfg, "conditioners")
- cfg = omegaconf.OmegaConf.create({}) if cfg is None else cfg
- conditioners: tp.Dict[str, BaseConditioner] = {}
- with omegaconf.open_dict(cfg):
- condition_provider_args = cfg.pop('args', {})
- for cond, cond_cfg in cfg.items():
- model_type = cond_cfg["model"]
- model_args = cond_cfg[model_type]
- if model_type == "t5":
- conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args)
- elif model_type == "lut":
- conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args)
- elif model_type == "chroma_stem":
- model_args.pop('cache_path', None)
- conditioners[str(cond)] = ChromaStemConditioner(
- output_dim=output_dim,
- duration=duration,
- device=device,
- **model_args
- )
- else:
- raise ValueError(f"unrecognized conditioning model: {model_type}")
- conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args)
- return conditioner
-
-
-def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser:
- """Instantiate a condition fuser object.
- """
- fuser_cfg = getattr(cfg, "fuser")
- fuser_methods = ["sum", "cross", "prepend", "input_interpolate"]
- fuse2cond = {k: fuser_cfg[k] for k in fuser_methods}
- kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods}
- fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs)
- return fuser
-
-
-def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider:
- """Instantiate a codebooks pattern provider object.
- """
- pattern_providers = {
- 'parallel': ParallelPatternProvider,
- 'delay': DelayedPatternProvider,
- 'unroll': UnrolledPatternProvider,
- 'valle': VALLEPattern,
- 'musiclm': MusicLMPattern,
- }
- name = cfg.modeling
- kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {}
- klass = pattern_providers[name]
- return klass(n_q, **kwargs)
-
-
-def get_debug_compression_model(device='cpu'):
- """Instantiate a debug compression model to be used for unit tests.
- """
- seanet_kwargs = {
- 'n_filters': 4,
- 'n_residual_layers': 1,
- 'dimension': 32,
- 'ratios': [10, 8, 16] # 25 Hz at 32kHz
- }
- encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs)
- decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs)
- quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4)
- init_x = torch.randn(8, 32, 128)
- quantizer(init_x, 1) # initialize kmeans etc.
- compression_model = EncodecModel(
- encoder, decoder, quantizer,
- frame_rate=25, sample_rate=32000, channels=1).to(device)
- return compression_model.eval()
-
-
-def get_debug_lm_model(device='cpu'):
- """Instantiate a debug LM to be used for unit tests.
- """
- pattern = DelayedPatternProvider(n_q=4)
- dim = 16
- providers = {
- 'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"),
- }
- condition_provider = ConditioningProvider(providers)
- fuser = ConditionFuser(
- {'cross': ['description'], 'prepend': [],
- 'sum': [], 'input_interpolate': []})
- lm = LMModel(
- pattern, condition_provider, fuser,
- n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2,
- cross_attention=True, causal=True)
- return lm.to(device).eval()
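The builders above repeatedly use the same dispatch idiom: a plain dict maps a config string to a class, so an unknown name fails loudly with `KeyError`. A standalone sketch of that pattern (the toy pattern classes are invented, not the real audiocraft ones):

```python
class ParallelPattern:
    def __init__(self, n_q):
        self.n_q = n_q

class DelayedPattern:
    def __init__(self, n_q, delays=None):
        self.n_q = n_q
        self.delays = delays or list(range(n_q))

PATTERN_PROVIDERS = {'parallel': ParallelPattern, 'delay': DelayedPattern}

def get_pattern_provider(name, n_q, **kwargs):
    klass = PATTERN_PROVIDERS[name]  # unknown config values raise KeyError here
    return klass(n_q, **kwargs)

provider = get_pattern_provider('delay', n_q=4)
print(type(provider).__name__, provider.delays)  # DelayedPattern [0, 1, 2, 3]
```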
diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/__main__.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/__main__.py
deleted file mode 100644
index e2bd9f630eaa0f45a6a201adcf356a1e092050cb..0000000000000000000000000000000000000000
--- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/__main__.py
+++ /dev/null
@@ -1,408 +0,0 @@
-import torch, sys, os, argparse, textwrap, numbers, numpy, json, PIL
-from torchvision import transforms
-from torch.utils.data import TensorDataset
-from netdissect.progress import verbose_progress, print_progress
-from netdissect import InstrumentedModel, BrodenDataset, dissect
-from netdissect import MultiSegmentDataset, GeneratorSegRunner
-from netdissect import ImageOnlySegRunner
-from netdissect.parallelfolder import ParallelImageFolders
-from netdissect.zdataset import z_dataset_for_model
-from netdissect.autoeval import autoimport_eval
-from netdissect.modelconfig import create_instrumented_model
-from netdissect.pidfile import exit_if_job_done, mark_job_done
-
-help_epilog = '''\
-Example: to dissect three layers of the pretrained alexnet in torchvision:
-
-python -m netdissect \\
- --model "torchvision.models.alexnet(pretrained=True)" \\
- --layers features.6:conv3 features.8:conv4 features.10:conv5 \\
- --imgsize 227 \\
- --outdir dissect/alexnet-imagenet
-
-To dissect a progressive GAN model:
-
-python -m netdissect \\
- --model "proggan.from_pth_file('model/churchoutdoor.pth')" \\
- --gan
-'''
-
-def main():
- # Training settings
- def strpair(arg):
- p = tuple(arg.split(':'))
- if len(p) == 1:
- p = p + p
- return p
- def intpair(arg):
- p = arg.split(',')
- if len(p) == 1:
- p = p + p
- return tuple(int(v) for v in p)
-
- parser = argparse.ArgumentParser(description='Net dissect utility',
- prog='python -m netdissect',
- epilog=textwrap.dedent(help_epilog),
- formatter_class=argparse.RawDescriptionHelpFormatter)
- parser.add_argument('--model', type=str, default=None,
- help='constructor for the model to test')
- parser.add_argument('--pthfile', type=str, default=None,
- help='filename of .pth file for the model')
- parser.add_argument('--unstrict', action='store_true', default=False,
- help='ignore unexpected pth parameters')
- parser.add_argument('--submodule', type=str, default=None,
- help='submodule to load from pthfile')
- parser.add_argument('--outdir', type=str, default='dissect',
- help='directory for dissection output')
- parser.add_argument('--layers', type=strpair, nargs='+',
- help='space-separated list of layer names to dissect' +
- ', in the form layername[:reportedname]')
- parser.add_argument('--segments', type=str, default='dataset/broden',
- help='directory containing segmentation dataset')
-    parser.add_argument('--segmenter', type=str, default=None,
-                        help='constructor for a segmenter class')
- parser.add_argument('--download', action='store_true', default=False,
- help='downloads Broden dataset if needed')
- parser.add_argument('--imagedir', type=str, default=None,
- help='directory containing image-only dataset')
- parser.add_argument('--imgsize', type=intpair, default=(227, 227),
- help='input image size to use')
- parser.add_argument('--netname', type=str, default=None,
- help='name for network in generated reports')
- parser.add_argument('--meta', type=str, nargs='+',
- help='json files of metadata to add to report')
- parser.add_argument('--merge', type=str,
- help='json file of unit data to merge in report')
- parser.add_argument('--examples', type=int, default=20,
- help='number of image examples per unit')
- parser.add_argument('--size', type=int, default=10000,
- help='dataset subset size to use')
- parser.add_argument('--batch_size', type=int, default=100,
- help='batch size for forward pass')
- parser.add_argument('--num_workers', type=int, default=24,
- help='number of DataLoader workers')
- parser.add_argument('--quantile_threshold', type=strfloat, default=None,
- choices=[FloatRange(0.0, 1.0), 'iqr'],
- help='quantile to use for masks')
- parser.add_argument('--no-labels', action='store_true', default=False,
- help='disables labeling of units')
- parser.add_argument('--maxiou', action='store_true', default=False,
- help='enables maxiou calculation')
- parser.add_argument('--covariance', action='store_true', default=False,
- help='enables covariance calculation')
- parser.add_argument('--rank_all_labels', action='store_true', default=False,
- help='include low-information labels in rankings')
- parser.add_argument('--no-images', action='store_true', default=False,
- help='disables generation of unit images')
- parser.add_argument('--no-report', action='store_true', default=False,
- help='disables generation report summary')
- parser.add_argument('--no-cuda', action='store_true', default=False,
- help='disables CUDA usage')
- parser.add_argument('--gen', action='store_true', default=False,
- help='test a generator model (e.g., a GAN)')
- parser.add_argument('--gan', action='store_true', default=False,
- help='synonym for --gen')
- parser.add_argument('--perturbation', default=None,
- help='filename of perturbation attack to apply')
- parser.add_argument('--add_scale_offset', action='store_true', default=None,
- help='offsets masks according to stride and padding')
- parser.add_argument('--quiet', action='store_true', default=False,
- help='silences console output')
- if len(sys.argv) == 1:
- parser.print_usage(sys.stderr)
- sys.exit(1)
- args = parser.parse_args()
- args.images = not args.no_images
- args.report = not args.no_report
- args.labels = not args.no_labels
- if args.gan:
- args.gen = args.gan
-
- # Set up console output
- verbose_progress(not args.quiet)
-
- # Exit right away if job is already done or being done.
- if args.outdir is not None:
- exit_if_job_done(args.outdir)
-
- # Speed up pytorch
- torch.backends.cudnn.benchmark = True
-
- # Special case: download flag without model to test.
- if args.model is None and args.download:
- from netdissect.broden import ensure_broden_downloaded
- for resolution in [224, 227, 384]:
- ensure_broden_downloaded(args.segments, resolution, 1)
- from netdissect.segmenter import ensure_upp_segmenter_downloaded
- ensure_upp_segmenter_downloaded('dataset/segmodel')
- sys.exit(0)
-
- # Help if broden is not present
- if not args.gen and not args.imagedir and not os.path.isdir(args.segments):
- print_progress('Segmentation dataset not found at %s.' % args.segments)
- print_progress('Specify dataset directory using --segments [DIR]')
- print_progress('To download Broden, run: netdissect --download')
- sys.exit(1)
-
- # Default segmenter class
- if args.gen and args.segmenter is None:
- args.segmenter = ("netdissect.segmenter.UnifiedParsingSegmenter(" +
- "segsizes=[256], segdiv='quad')")
-
- # Default threshold
- if args.quantile_threshold is None:
- if args.gen:
- args.quantile_threshold = 'iqr'
- else:
- args.quantile_threshold = 0.005
-
- # Set up CUDA
- args.cuda = not args.no_cuda and torch.cuda.is_available()
- if args.cuda:
- torch.backends.cudnn.benchmark = True
-
- # Construct the network with specified layers instrumented
- if args.model is None:
- print_progress('No model specified')
- sys.exit(1)
- model = create_instrumented_model(args)
-
- # Update any metadata from files, if any
- meta = getattr(model, 'meta', {})
- if args.meta:
- for mfilename in args.meta:
- with open(mfilename) as f:
- meta.update(json.load(f))
-
- # Load any merge data from files
- mergedata = None
- if args.merge:
- with open(args.merge) as f:
- mergedata = json.load(f)
-
- # Set up the output directory, verify write access
- if args.outdir is None:
- args.outdir = os.path.join('dissect', type(model).__name__)
- exit_if_job_done(args.outdir)
- print_progress('Writing output into %s.' % args.outdir)
- os.makedirs(args.outdir, exist_ok=True)
- train_dataset = None
-
- if not args.gen:
- # Load dataset for classifier case.
- # Load perturbation
-        perturbation = numpy.load(args.perturbation) if args.perturbation else None
- segrunner = None
-
- # Load broden dataset
- if args.imagedir is not None:
- dataset = try_to_load_images(args.imagedir, args.imgsize,
- perturbation, args.size)
- segrunner = ImageOnlySegRunner(dataset)
- else:
- dataset = try_to_load_broden(args.segments, args.imgsize, 1,
- perturbation, args.download, args.size)
- if dataset is None:
- dataset = try_to_load_multiseg(args.segments, args.imgsize,
- perturbation, args.size)
- if dataset is None:
-                print_progress('No segmentation dataset found in %s'
-                        % args.segments)
- print_progress('use --download to download Broden.')
- sys.exit(1)
- else:
- # For segmenter case the dataset is just a random z
- dataset = z_dataset_for_model(model, args.size)
- train_dataset = z_dataset_for_model(model, args.size, seed=2)
- segrunner = GeneratorSegRunner(autoimport_eval(args.segmenter))
-
- # Run dissect
- dissect(args.outdir, model, dataset,
- train_dataset=train_dataset,
- segrunner=segrunner,
- examples_per_unit=args.examples,
- netname=args.netname,
- quantile_threshold=args.quantile_threshold,
- meta=meta,
- merge=mergedata,
- make_images=args.images,
- make_labels=args.labels,
- make_maxiou=args.maxiou,
- make_covariance=args.covariance,
- make_report=args.report,
- make_row_images=args.images,
- make_single_images=True,
- rank_all_labels=args.rank_all_labels,
- batch_size=args.batch_size,
- num_workers=args.num_workers,
- settings=vars(args))
-
- # Mark the directory so that it's not done again.
- mark_job_done(args.outdir)
-
-class AddPerturbation(object):
- def __init__(self, perturbation):
- self.perturbation = perturbation
-
- def __call__(self, pic):
- if self.perturbation is None:
- return pic
- # Convert to a numpy float32 array
- npyimg = numpy.array(pic, numpy.uint8, copy=False
- ).astype(numpy.float32)
- # Center the perturbation
- oy, ox = ((self.perturbation.shape[d] - npyimg.shape[d]) // 2
- for d in [0, 1])
- npyimg += self.perturbation[
- oy:oy+npyimg.shape[0], ox:ox+npyimg.shape[1]]
- # Pytorch conventions: as a float it should be [0..1]
- npyimg.clip(0, 255, npyimg)
- return npyimg / 255.0
-
-def test_dissection():
- verbose_progress(True)
- from torchvision.models import alexnet
- from torchvision import transforms
- model = InstrumentedModel(alexnet(pretrained=True))
- model.eval()
- # Load an alexnet
- model.retain_layers([
- ('features.0', 'conv1'),
- ('features.3', 'conv2'),
- ('features.6', 'conv3'),
- ('features.8', 'conv4'),
- ('features.10', 'conv5') ])
- # load broden dataset
- bds = BrodenDataset('dataset/broden',
- transform=transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(IMAGE_MEAN, IMAGE_STDEV)]),
- size=100)
- # run dissect
- dissect('dissect/test', model, bds,
- examples_per_unit=10)
-
-def try_to_load_images(directory, imgsize, perturbation, size):
- # Load plain image dataset
- # TODO: allow other normalizations.
- return ParallelImageFolders(
- [directory],
- transform=transforms.Compose([
- transforms.Resize(imgsize),
- AddPerturbation(perturbation),
- transforms.ToTensor(),
- transforms.Normalize(IMAGE_MEAN, IMAGE_STDEV)]),
- size=size)
-
-def try_to_load_broden(directory, imgsize, broden_version, perturbation,
- download, size):
- # Load broden dataset
- ds_resolution = (224 if max(imgsize) <= 224 else
- 227 if max(imgsize) <= 227 else 384)
- if not os.path.isfile(os.path.join(directory,
- 'broden%d_%d' % (broden_version, ds_resolution), 'index.csv')):
- return None
- return BrodenDataset(directory,
- resolution=ds_resolution,
- download=download,
- broden_version=broden_version,
- transform=transforms.Compose([
- transforms.Resize(imgsize),
- AddPerturbation(perturbation),
- transforms.ToTensor(),
- transforms.Normalize(IMAGE_MEAN, IMAGE_STDEV)]),
- size=size)
-
-def try_to_load_multiseg(directory, imgsize, perturbation, size):
- if not os.path.isfile(os.path.join(directory, 'labelnames.json')):
- return None
- minsize = min(imgsize) if hasattr(imgsize, '__iter__') else imgsize
- return MultiSegmentDataset(directory,
- transform=(transforms.Compose([
- transforms.Resize(minsize),
- transforms.CenterCrop(imgsize),
- AddPerturbation(perturbation),
- transforms.ToTensor(),
- transforms.Normalize(IMAGE_MEAN, IMAGE_STDEV)]),
- transforms.Compose([
- transforms.Resize(minsize, interpolation=PIL.Image.NEAREST),
- transforms.CenterCrop(imgsize)])),
- size=size)
-
-def add_scale_offset_info(model, layer_names):
- '''
- Creates a 'scale_offset' property on the model which guesses
- how to offset the featuremap, in cases where the convolutional
-    padding does not exactly correspond to keeping featuremap pixels
- centered on the downsampled regions of the input. This mainly
- shows up in AlexNet: ResNet and VGG pad convolutions to keep
- them centered and do not need this.
- '''
- model.scale_offset = {}
- seen = set()
- sequence = []
- aka_map = {}
- for name in layer_names:
- aka = name
- if not isinstance(aka, str):
- name, aka = name
- aka_map[name] = aka
- for name, layer in model.named_modules():
- sequence.append(layer)
- if name in aka_map:
- seen.add(name)
- aka = aka_map[name]
- model.scale_offset[aka] = sequence_scale_offset(sequence)
- for name in aka_map:
- assert name in seen, ('Layer %s not found' % name)
-
-def dilation_scale_offset(dilations):
- '''Composes a list of (k, s, p) into a single total scale and offset.'''
- if len(dilations) == 0:
- return (1, 0)
- scale, offset = dilation_scale_offset(dilations[1:])
- kernel, stride, padding = dilations[0]
- scale *= stride
- offset *= stride
- offset += (kernel - 1) / 2.0 - padding
- return scale, offset
-
-def dilations(modulelist):
- '''Converts a list of modules to (kernel_size, stride, padding)'''
- result = []
- for module in modulelist:
- settings = tuple(getattr(module, n, d)
- for n, d in (('kernel_size', 1), ('stride', 1), ('padding', 0)))
-        settings = tuple(((s, s) if not isinstance(s, tuple) else s)
-                for s in settings)
- if settings != ((1, 1), (1, 1), (0, 0)):
- result.append(zip(*settings))
- return zip(*result)
-
-def sequence_scale_offset(modulelist):
- '''Returns (yscale, yoffset), (xscale, xoffset) given a list of modules'''
- return tuple(dilation_scale_offset(d) for d in dilations(modulelist))
-
-
-def strfloat(s):
- try:
- return float(s)
-    except ValueError:
- return s
-
-class FloatRange(object):
- def __init__(self, start, end):
- self.start = start
- self.end = end
- def __eq__(self, other):
- return isinstance(other, float) and self.start <= other <= self.end
- def __repr__(self):
- return '[%g-%g]' % (self.start, self.end)
-
-# Many models use this normalization.
-IMAGE_MEAN = [0.485, 0.456, 0.406]
-IMAGE_STDEV = [0.229, 0.224, 0.225]
-
-if __name__ == '__main__':
- main()
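To make the scale/offset bookkeeping above concrete, here is `dilation_scale_offset` run standalone on AlexNet's first two stages; the layer parameters (conv1: k=11, s=4, p=2; maxpool1: k=3, s=2, p=0) are the standard torchvision values:

```python
def dilation_scale_offset(dilations):
    """Composes a list of (kernel, stride, padding) into one scale and offset."""
    if len(dilations) == 0:
        return (1, 0)
    scale, offset = dilation_scale_offset(dilations[1:])
    kernel, stride, padding = dilations[0]
    scale *= stride
    offset *= stride
    offset += (kernel - 1) / 2.0 - padding
    return scale, offset

# conv1 (k=11, s=4, p=2) followed by maxpool1 (k=3, s=2, p=0):
print(dilation_scale_offset([(11, 4, 2), (3, 2, 0)]))  # (8, 7.0)
# i.e. featuremap pixel i is centered on input pixel 8*i + 7
```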
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/check_iswlt_test_data.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/check_iswlt_test_data.py
deleted file mode 100644
index f8e2eb0f15699f1b458a8445d0c1dd6229a21f77..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/check_iswlt_test_data.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-import os, sys
-import subprocess
-import re
-from subprocess import check_call, check_output
-
-WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None)
-
-if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip():
-    print('please specify your working directory root in the OS environment variable WORKDIR_ROOT. Exiting...')
- sys.exit(-1)
-
-
-BLEU_REGEX = re.compile("^BLEU\\S* = (\\S+) ")
-def run_eval_bleu(cmd):
- output = check_output(cmd, shell=True, stderr=subprocess.STDOUT).decode("utf-8").strip()
- print(output)
- bleu = -1.0
- for line in output.strip().split('\n'):
- m = BLEU_REGEX.search(line)
- if m is not None:
- bleu = m.groups()[0]
- bleu = float(bleu)
- break
- return bleu
-
-def check_data_test_bleu(raw_folder, data_lang_pairs):
- not_matchings = []
- for sacrebleu_set, src_tgts in data_lang_pairs:
- for src_tgt in src_tgts:
- print(f'checking test bleus for: {src_tgt} at {sacrebleu_set}')
- src, tgt = src_tgt.split('-')
- ssrc, stgt = src[:2], tgt[:2]
- if os.path.exists(f'{raw_folder}/test.{tgt}-{src}.{src}'):
- # reversed direction may have different test set
- test_src = f'{raw_folder}/test.{tgt}-{src}.{src}'
- else:
- test_src = f'{raw_folder}/test.{src}-{tgt}.{src}'
- cmd1 = f'cat {test_src} | sacrebleu -t "{sacrebleu_set}" -l {stgt}-{ssrc}; [ $? -eq 0 ] || echo ""'
- test_tgt = f'{raw_folder}/test.{src}-{tgt}.{tgt}'
- cmd2 = f'cat {test_tgt} | sacrebleu -t "{sacrebleu_set}" -l {ssrc}-{stgt}; [ $? -eq 0 ] || echo ""'
- bleu1 = run_eval_bleu(cmd1)
- if bleu1 != 100.0:
- not_matchings.append(f'{sacrebleu_set}:{src_tgt} source side not matching: {test_src}')
- bleu2 = run_eval_bleu(cmd2)
- if bleu2 != 100.0:
- not_matchings.append(f'{sacrebleu_set}:{src_tgt} target side not matching: {test_tgt}')
- return not_matchings
-
-if __name__ == "__main__":
- to_data_path = f'{WORKDIR_ROOT}/iwsltv2'
- not_matching = check_data_test_bleu(
- f'{to_data_path}/raw',
- [
- ('iwslt17', ['en_XX-ar_AR', 'en_XX-ko_KR', 'ar_AR-en_XX', 'ko_KR-en_XX']),
- ('iwslt17', ['en_XX-it_IT', 'en_XX-nl_XX', 'it_IT-en_XX', 'nl_XX-en_XX']),
- ('iwslt17/tst2015', ['en_XX-vi_VN', "vi_VN-en_XX"]),
- ]
- )
- if len(not_matching) > 0:
- print('the following datasets do not have matching test datasets:\n\t', '\n\t'.join(not_matching))
-
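The check rests on one invariant: a candidate scored against the reference it was copied from must give BLEU = 100. The same comparison through sacrebleu's Python API rather than the CLI, with placeholder file paths:

```python
import sacrebleu

# Placeholder paths: the prepared test split and the official reference text.
with open('raw/test.de-en.en') as f:
    sys_lines = [line.strip() for line in f]
with open('official/iwslt17.de-en.en') as f:
    ref_lines = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(sys_lines, [ref_lines])
print(bleu.score)  # exactly 100.0 only if the files match line for line
```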
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/README.md
deleted file mode 100644
index 4a3ae54b857c43621c9fb67ee4b214584beec835..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
-Speech Synthesis (S^2)
-===
-
-Speech synthesis with fairseq.
-
-- Autoregressive and non-autoregressive models
-- Multi-speaker synthesis
-- Audio preprocessing
-- Automatic metrics
-- Data configuration similar to [S2T](../speech_to_text/README.md)
-
-
-## Examples
-- [Single-speaker synthesis on LJSpeech](docs/ljspeech_example.md)
-- [Multi-speaker synthesis on VCTK](docs/vctk_example.md)
-- [Multi-speaker synthesis on Common Voice](docs/common_voice_example.md)
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_lm_context_window.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_lm_context_window.py
deleted file mode 100644
index 7415e86abdf8ddc2d797092bf98f7a1331e038d6..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_lm_context_window.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import unittest
-
-import torch
-from fairseq.data import MonolingualDataset
-from fairseq.tasks.language_modeling import LanguageModelingTask, LanguageModelingConfig
-from tests import utils as test_utils
-
-
-class TestLMContextWindow(unittest.TestCase):
-
- def test_eval_dataloader(self):
- dictionary = test_utils.dummy_dictionary(10)
- assert len(dictionary) == 14 # 4 extra special symbols
- assert dictionary.pad() == 1
-
- dataset = test_utils.TestDataset([
- torch.tensor([4, 5, 6, 7], dtype=torch.long),
- torch.tensor([8, 9, 10, 11], dtype=torch.long),
- torch.tensor([12, 13], dtype=torch.long),
- ])
- dataset = MonolingualDataset(dataset, sizes=[4, 4, 2], src_vocab=dictionary)
-
- config = LanguageModelingConfig(tokens_per_sample=4)
- task = LanguageModelingTask(config, dictionary)
-
- eval_dataloader = task.eval_lm_dataloader(
- dataset=dataset,
- batch_size=1,
- context_window=2,
- )
-
- batch = next(eval_dataloader)
- assert batch["net_input"]["src_tokens"][0].tolist() == [4, 5, 6, 7, 1, 1]
- assert batch["target"][0].tolist() == [4, 5, 6, 7, 1, 1]
-
- batch = next(eval_dataloader)
- assert batch["net_input"]["src_tokens"][0].tolist() == [6, 7, 8, 9, 10, 11]
- assert batch["target"][0].tolist() == [1, 1, 8, 9, 10, 11]
-
- batch = next(eval_dataloader)
- assert batch["net_input"]["src_tokens"][0].tolist() == [10, 11, 12, 13]
- assert batch["target"][0].tolist() == [1, 1, 12, 13]
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/hifi/train_hifi.sh b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/hifi/train_hifi.sh
deleted file mode 100644
index 287ca1159b5bf8f779d66885197fadbcd23b911e..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/hifi/train_hifi.sh
+++ /dev/null
@@ -1,21 +0,0 @@
-#!/bin/bash
-
-gender='male'
-
-config='../../config/hifi/config_v1.json'
-modeldir='../../checkpoints/hifi/'$gender
-logdir='../../logs/hifi/'$gender
-
-
-####################################################
-
-
-
-python ../../src/hifi_gan/train.py \
- --config $config \
- --input_training_file '../../data/hifi/'$gender'/train.txt' \
- --input_validation_file '../../data/hifi/'$gender'/valid.txt' \
- --checkpoint_path $modeldir \
- --logs_path $logdir \
- --checkpoint_interval 10000 \
- --stdout_interval 50
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/mix.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/mix.py
deleted file mode 100644
index aba81eb83a870d713f00ab776537537265975039..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/mix.py
+++ /dev/null
@@ -1,128 +0,0 @@
-"""
-Ways to transform interfaces to produce new interfaces
-"""
-import asyncio
-import warnings
-
-import gradio
-from gradio.documentation import document, set_documentation_group
-
-set_documentation_group("mix_interface")
-
-
-@document()
-class Parallel(gradio.Interface):
- """
- Creates a new Interface consisting of multiple Interfaces in parallel (comparing their outputs).
- The Interfaces to put in Parallel must share the same input components (but can have different output components).
-
- Demos: interface_parallel, interface_parallel_load
- Guides: advanced_interface_features
- """
-
- def __init__(self, *interfaces: gradio.Interface, **options):
- """
- Parameters:
- interfaces: any number of Interface objects that are to be compared in parallel
- options: additional kwargs that are passed into the new Interface object to customize it
- Returns:
- an Interface object comparing the given models
- """
- outputs = []
-
- for interface in interfaces:
- if not (isinstance(interface, gradio.Interface)):
- warnings.warn(
- "Parallel requires all inputs to be of type Interface. "
- "May not work as expected."
- )
- outputs.extend(interface.output_components)
-
- async def parallel_fn(*args):
- return_values_with_durations = await asyncio.gather(
- *[interface.call_function(0, list(args)) for interface in interfaces]
- )
- return_values = [rv["prediction"] for rv in return_values_with_durations]
- combined_list = []
- for interface, return_value in zip(interfaces, return_values):
- if len(interface.output_components) == 1:
- combined_list.append(return_value)
- else:
- combined_list.extend(return_value)
- if len(outputs) == 1:
- return combined_list[0]
- return combined_list
-
- parallel_fn.__name__ = " | ".join([io.__name__ for io in interfaces])
-
- kwargs = {
- "fn": parallel_fn,
- "inputs": interfaces[0].input_components,
- "outputs": outputs,
- }
- kwargs.update(options)
- super().__init__(**kwargs)
-
-
-@document()
-class Series(gradio.Interface):
- """
- Creates a new Interface from multiple Interfaces in series (the output of one is fed as the input to the next,
- and so the input and output components must agree between the interfaces).
-
- Demos: interface_series, interface_series_load
- Guides: advanced_interface_features
- """
-
- def __init__(self, *interfaces: gradio.Interface, **options):
- """
- Parameters:
- interfaces: any number of Interface objects that are to be connected in series
- options: additional kwargs that are passed into the new Interface object to customize it
- Returns:
- an Interface object connecting the given models
- """
-
- async def connected_fn(*data):
- for idx, interface in enumerate(interfaces):
- # skip preprocessing for first interface since the Series interface will include it
- if idx > 0 and not (interface.api_mode):
- data = [
- input_component.preprocess(data[i])
- for i, input_component in enumerate(interface.input_components)
- ]
-
-            # run all predictions sequentially
- data = (await interface.call_function(0, list(data)))["prediction"]
- if len(interface.output_components) == 1:
- data = [data]
-
- # skip postprocessing for final interface since the Series interface will include it
- if idx < len(interfaces) - 1 and not (interface.api_mode):
- data = [
- output_component.postprocess(data[i])
- for i, output_component in enumerate(
- interface.output_components
- )
- ]
-
- if len(interface.output_components) == 1: # type: ignore
- return data[0]
- return data
-
- for interface in interfaces:
- if not (isinstance(interface, gradio.Interface)):
- warnings.warn(
- "Series requires all inputs to be of type Interface. May "
- "not work as expected."
- )
- connected_fn.__name__ = " => ".join([io.__name__ for io in interfaces])
-
- kwargs = {
- "fn": connected_fn,
- "inputs": interfaces[0].input_components,
- "outputs": interfaces[-1].output_components,
- "_api_mode": interfaces[0].api_mode, # TODO: set api_mode per-interface
- }
- kwargs.update(options)
- super().__init__(**kwargs)
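A minimal usage sketch of the two combinators; the toy functions are invented for illustration:

```python
import gradio as gr
from gradio.mix import Parallel, Series

shout = gr.Interface(fn=lambda s: s.upper(), inputs="text", outputs="text")
reverse = gr.Interface(fn=lambda s: s[::-1], inputs="text", outputs="text")

# One shared textbox in, both outputs displayed side by side:
Parallel(shout, reverse).launch()
# Or chain them, feeding shout's output into reverse:
# Series(shout, reverse).launch()
```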
diff --git a/spaces/HuggingFaceH4/human_eval_llm_leaderboard/src/elo_leaderboard/visualizations.py b/spaces/HuggingFaceH4/human_eval_llm_leaderboard/src/elo_leaderboard/visualizations.py
deleted file mode 100644
index 4845118d6d1d98b0643a86cf7ee62d1a102b4862..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceH4/human_eval_llm_leaderboard/src/elo_leaderboard/visualizations.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import math
-
-import numpy as np
-import pandas as pd
-import plotly.express as px
-
-
-# 1
-def compute_pairwise_win_fraction(battles):
- # Times each model wins as Model A
- a_win_ptbl = pd.pivot_table(
- battles[battles["win"] == "model_a"],
- index="model_a",
- columns="model_b",
- aggfunc="size",
- fill_value=0,
- )
-
- # Table counting times each model wins as Model B
- b_win_ptbl = pd.pivot_table(
- battles[battles["win"] == "model_b"],
- index="model_a",
- columns="model_b",
- aggfunc="size",
- fill_value=0,
- )
-
- # Table counting number of A-B pairs
- num_battles_ptbl = pd.pivot_table(battles, index="model_a", columns="model_b", aggfunc="size", fill_value=0)
-
- # Computing the proportion of wins for each model as A and as B
- # against all other models
- row_beats_col_freq = (a_win_ptbl + b_win_ptbl.T) / (num_battles_ptbl + num_battles_ptbl.T)
-
-    # Arrange ordering according to proportion of wins
- prop_wins = row_beats_col_freq.mean(axis=1).sort_values(ascending=False)
- model_names = list(prop_wins.keys())
- row_beats_col = row_beats_col_freq.loc[model_names, model_names]
- return row_beats_col
-
-
-def visualize_pairwise_win_fraction(battles, title):
- row_beats_col = compute_pairwise_win_fraction(battles)
- fig = px.imshow(row_beats_col, color_continuous_scale="RdBu", text_auto=".2f", title=title)
- fig.update_layout(
- xaxis_title="Model B",
- yaxis_title="Model A",
- xaxis_side="top",
- title_y=0.07,
- title_x=0.5,
- )
- fig.update_traces(hovertemplate="Model A: %{y} Model B: %{x} Fraction of A Wins: %{z} ")
- return fig
-
-
-# 2
-def switch_model_a_b(df):
- df_switch = df.copy()
- # switch with probability 0.5
- for i, row in df.iterrows():
- if np.random.rand() < 0.5:
- df_switch.at[i, "model_a"] = row["model_b"]
- df_switch.at[i, "model_b"] = row["model_a"]
- if row["win"] == "model_a":
- df_switch.at[i, "win"] = "model_b"
- elif row["win"] == "model_b":
- df_switch.at[i, "win"] = "model_a"
- return df_switch
-
-
-def visualize_battle_count(battles, title):
- ptbl = pd.pivot_table(battles, index="model_a", columns="model_b", aggfunc="size", fill_value=0)
- battle_counts = ptbl + ptbl.T
- ordering = battle_counts.sum().sort_values(ascending=False).index
- fig = px.imshow(battle_counts.loc[ordering, ordering], title=title, text_auto=True, width=600)
- fig.update_layout(
- xaxis_title="Model B",
- yaxis_title="Model A",
- xaxis_side="top",
- title_y=0.07,
- title_x=0.5,
- )
- fig.update_traces(hovertemplate="Model A: %{y} Model B: %{x} Count: %{z} ")
- return fig
-
-
-# 3
-def get_bootstrap_result(battles, func_compute_elo, num_round):
- rows = [func_compute_elo(battles.sample(frac=1.0, replace=True)) for _ in range(num_round)]
- df = pd.DataFrame(rows)
- return df[df.median().sort_values(ascending=False).index]
-
-
-def visualize_bootstrap_scores(df, title):
- bars = (
- pd.DataFrame(
- dict(
- lower=df.quantile(0.025),
- rating=df.quantile(0.5),
- upper=df.quantile(0.975),
- )
- )
- .reset_index(names="model")
- .sort_values("rating", ascending=False)
- )
- bars["error_y"] = bars["upper"] - bars["rating"]
- bars["error_y_minus"] = bars["rating"] - bars["lower"]
- bars["rating_rounded"] = np.round(bars["rating"], 2)
- fig = px.scatter(
- bars,
- x="model",
- y="rating",
- error_y="error_y",
- error_y_minus="error_y_minus",
- text="rating_rounded",
- title=title,
- )
- fig.update_layout(xaxis_title="Model", yaxis_title="Rating")
- return fig
-
-
-# 4
-def visualize_rating_count(df, title):
- df_all_value_counts = pd.concat([df["model_a"], df["model_b"]]).value_counts()
- fig = px.bar(df_all_value_counts, title=title, text_auto=True)
-
- min_y = df_all_value_counts.min()
- max_y = df_all_value_counts.max()
-
-    y_begin = math.floor(min_y / 100) * 100
-    y_end = math.ceil(max_y / 100) * 100
-
- fig.update_layout(xaxis_title="model", yaxis_title="Rating Count", showlegend=False)
- fig.update_yaxes(range=[y_begin, y_end])
- # save the plot for the blog:
- fig.write_html("src/assets/model_counts.html", full_html=False, include_plotlyjs="cdn")
- return fig
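All of these helpers expect a `battles` DataFrame with `model_a`, `model_b`, and `win` columns; a tiny invented frame is enough to exercise them, assuming the functions above are in scope:

```python
import pandas as pd

battles = pd.DataFrame({
    "model_a": ["alpha", "alpha", "beta"],
    "model_b": ["beta", "gamma", "gamma"],
    "win":     ["model_a", "model_b", "model_a"],
})

fig = visualize_pairwise_win_fraction(battles, "Fraction of Model A Wins")
fig.show()
```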
diff --git a/spaces/ICML2022/OFA/fairseq/examples/roberta/multiprocessing_bpe_encoder.py b/spaces/ICML2022/OFA/fairseq/examples/roberta/multiprocessing_bpe_encoder.py
deleted file mode 100644
index 43fe0451bf4d5762d734314075b1402c2a8db2bb..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/roberta/multiprocessing_bpe_encoder.py
+++ /dev/null
@@ -1,130 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import contextlib
-import sys
-from collections import Counter
-from multiprocessing import Pool
-
-from fairseq.data.encoders.gpt2_bpe import get_encoder
-
-
-def main():
- """
- Helper script to encode raw text with the GPT-2 BPE using multiple processes.
-
- The encoder.json and vocab.bpe files can be obtained here:
- - https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json
- - https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe
- """
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--encoder-json",
- help="path to encoder.json",
- )
- parser.add_argument(
- "--vocab-bpe",
- type=str,
- help="path to vocab.bpe",
- )
- parser.add_argument(
- "--inputs",
- nargs="+",
- default=["-"],
- help="input files to filter/encode",
- )
- parser.add_argument(
- "--outputs",
- nargs="+",
- default=["-"],
- help="path to save encoded outputs",
- )
- parser.add_argument(
- "--keep-empty",
- action="store_true",
- help="keep empty lines",
- )
- parser.add_argument("--workers", type=int, default=20)
- args = parser.parse_args()
-
- assert len(args.inputs) == len(
- args.outputs
- ), "number of input and output paths should match"
-
- with contextlib.ExitStack() as stack:
- inputs = [
- stack.enter_context(open(input, "r", encoding="utf-8"))
- if input != "-"
- else sys.stdin
- for input in args.inputs
- ]
- outputs = [
- stack.enter_context(open(output, "w", encoding="utf-8"))
- if output != "-"
- else sys.stdout
- for output in args.outputs
- ]
-
- encoder = MultiprocessingEncoder(args)
- pool = Pool(args.workers, initializer=encoder.initializer)
- encoded_lines = pool.imap(encoder.encode_lines, zip(*inputs), 100)
-
- stats = Counter()
- for i, (filt, enc_lines) in enumerate(encoded_lines, start=1):
- if filt == "PASS":
- for enc_line, output_h in zip(enc_lines, outputs):
- print(enc_line, file=output_h)
- else:
- stats["num_filtered_" + filt] += 1
- if i % 10000 == 0:
- print("processed {} lines".format(i), file=sys.stderr)
-
- for k, v in stats.most_common():
- print("[{}] filtered {} lines".format(k, v), file=sys.stderr)
-
-
-class MultiprocessingEncoder(object):
- def __init__(self, args):
- self.args = args
-
- def initializer(self):
- global bpe
- bpe = get_encoder(self.args.encoder_json, self.args.vocab_bpe)
-
- def encode(self, line):
- global bpe
- ids = bpe.encode(line)
- return list(map(str, ids))
-
- def decode(self, tokens):
- global bpe
- return bpe.decode(tokens)
-
- def encode_lines(self, lines):
- """
- Encode a set of lines. All lines will be encoded together.
- """
- enc_lines = []
- for line in lines:
- line = line.strip()
- if len(line) == 0 and not self.args.keep_empty:
- return ["EMPTY", None]
- tokens = self.encode(line)
- enc_lines.append(" ".join(tokens))
- return ["PASS", enc_lines]
-
- def decode_lines(self, lines):
- dec_lines = []
- for line in lines:
- tokens = map(int, line.strip().split())
- dec_lines.append(self.decode(tokens))
- return ["PASS", dec_lines]
-
-
-if __name__ == "__main__":
- main()
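For a handful of lines, the multiprocessing machinery is unnecessary and the underlying encoder can be driven directly; the `encoder.json` / `vocab.bpe` paths below are assumed to have been downloaded from the URLs in the docstring:

```python
from fairseq.data.encoders.gpt2_bpe import get_encoder

bpe = get_encoder("encoder.json", "vocab.bpe")
ids = bpe.encode("Hello world!")
print(" ".join(map(str, ids)))  # space-separated ids, as the script writes them
print(bpe.decode(ids))          # round-trips back to "Hello world!"
```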
diff --git a/spaces/ICML2022/OFA/fairseq/examples/translation/prepare-iwslt14.sh b/spaces/ICML2022/OFA/fairseq/examples/translation/prepare-iwslt14.sh
deleted file mode 100644
index 2fb6643fbccb58701dcbb77d91430e68a821ba38..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/translation/prepare-iwslt14.sh
+++ /dev/null
@@ -1,115 +0,0 @@
-#!/usr/bin/env bash
-#
-# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh
-
-echo 'Cloning Moses github repository (for tokenization scripts)...'
-git clone https://github.com/moses-smt/mosesdecoder.git
-
-echo 'Cloning Subword NMT repository (for BPE pre-processing)...'
-git clone https://github.com/rsennrich/subword-nmt.git
-
-SCRIPTS=mosesdecoder/scripts
-TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl
-LC=$SCRIPTS/tokenizer/lowercase.perl
-CLEAN=$SCRIPTS/training/clean-corpus-n.perl
-BPEROOT=subword-nmt/subword_nmt
-BPE_TOKENS=10000
-
-URL="http://dl.fbaipublicfiles.com/fairseq/data/iwslt14/de-en.tgz"
-GZ=de-en.tgz
-
-if [ ! -d "$SCRIPTS" ]; then
- echo "Please set SCRIPTS variable correctly to point to Moses scripts."
- exit
-fi
-
-src=de
-tgt=en
-lang=de-en
-prep=iwslt14.tokenized.de-en
-tmp=$prep/tmp
-orig=orig
-
-mkdir -p $orig $tmp $prep
-
-echo "Downloading data from ${URL}..."
-cd $orig
-wget "$URL"
-
-if [ -f $GZ ]; then
- echo "Data successfully downloaded."
-else
- echo "Data not successfully downloaded."
- exit
-fi
-
-tar zxvf $GZ
-cd ..
-
-echo "pre-processing train data..."
-for l in $src $tgt; do
- f=train.tags.$lang.$l
- tok=train.tags.$lang.tok.$l
-
- cat $orig/$lang/$f | \
-    grep -v '<url>' | \
-    grep -v '<talkid>' | \
-    grep -v '<keywords>' | \
-    sed -e 's/<title>//g' | \
-    sed -e 's/<\/title>//g' | \
-    sed -e 's/<description>//g' | \
-    sed -e 's/<\/description>//g' | \
- perl $TOKENIZER -threads 8 -l $l > $tmp/$tok
- echo ""
-done
-perl $CLEAN -ratio 1.5 $tmp/train.tags.$lang.tok $src $tgt $tmp/train.tags.$lang.clean 1 175
-for l in $src $tgt; do
- perl $LC < $tmp/train.tags.$lang.clean.$l > $tmp/train.tags.$lang.$l
-done
-
-echo "pre-processing valid/test data..."
-for l in $src $tgt; do
- for o in `ls $orig/$lang/IWSLT14.TED*.$l.xml`; do
- fname=${o##*/}
- f=$tmp/${fname%.*}
- echo $o $f
-    grep '<seg id' $o | \
-    sed -e 's/<seg id="[0-9]*">\s*//g' | \
- sed -e 's/\s*<\/seg>\s*//g' | \
- sed -e "s/\’/\'/g" | \
- perl $TOKENIZER -threads 8 -l $l | \
- perl $LC > $f
- echo ""
- done
-done
-
-
-echo "creating train, valid, test..."
-for l in $src $tgt; do
- awk '{if (NR%23 == 0) print $0; }' $tmp/train.tags.de-en.$l > $tmp/valid.$l
- awk '{if (NR%23 != 0) print $0; }' $tmp/train.tags.de-en.$l > $tmp/train.$l
-
- cat $tmp/IWSLT14.TED.dev2010.de-en.$l \
- $tmp/IWSLT14.TEDX.dev2012.de-en.$l \
- $tmp/IWSLT14.TED.tst2010.de-en.$l \
- $tmp/IWSLT14.TED.tst2011.de-en.$l \
- $tmp/IWSLT14.TED.tst2012.de-en.$l \
- > $tmp/test.$l
-done
-
-TRAIN=$tmp/train.en-de
-BPE_CODE=$prep/code
-rm -f $TRAIN
-for l in $src $tgt; do
- cat $tmp/train.$l >> $TRAIN
-done
-
-echo "learn_bpe.py on ${TRAIN}..."
-python $BPEROOT/learn_bpe.py -s $BPE_TOKENS < $TRAIN > $BPE_CODE
-
-for L in $src $tgt; do
- for f in train.$L valid.$L test.$L; do
- echo "apply_bpe.py to ${f}..."
- python $BPEROOT/apply_bpe.py -c $BPE_CODE < $tmp/$f > $prep/$f
- done
-done
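Note the `awk` split above holds out every 23rd sentence (roughly 4%) for validation. The same selection expressed in Python, for reference (the file name is a placeholder):

```python
with open("train.tags.de-en.en") as f:
    lines = f.readlines()

# awk's NR is 1-based, so enumerate from 1 to match it.
valid = [l for i, l in enumerate(lines, start=1) if i % 23 == 0]
train = [l for i, l in enumerate(lines, start=1) if i % 23 != 0]
print(len(train), len(valid))  # roughly a 22:1 split
```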
diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/data/scripts/get_imagenet.sh b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/data/scripts/get_imagenet.sh
deleted file mode 100644
index 6026d502e8f3cce457d7f48cefe19cf55d60c0fc..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/data/scripts/get_imagenet.sh
+++ /dev/null
@@ -1,51 +0,0 @@
-#!/bin/bash
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-# Download ILSVRC2012 ImageNet dataset https://image-net.org
-# Example usage: bash data/scripts/get_imagenet.sh
-# parent
-# ├── yolov5
-# └── datasets
-# └── imagenet ← downloads here
-
-# Arguments (optional) Usage: bash data/scripts/get_imagenet.sh --train --val
-if [ "$#" -gt 0 ]; then
- for opt in "$@"; do
- case "${opt}" in
- --train) train=true ;;
- --val) val=true ;;
- esac
- done
-else
- train=true
- val=true
-fi
-
-# Make dir
-d='../datasets/imagenet' # unzip directory
-mkdir -p $d && cd $d
-
-# Download/unzip train
-if [ "$train" == "true" ]; then
- wget https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_train.tar # download 138G, 1281167 images
- mkdir train && mv ILSVRC2012_img_train.tar train/ && cd train
- tar -xf ILSVRC2012_img_train.tar && rm -f ILSVRC2012_img_train.tar
- find . -name "*.tar" | while read NAME; do
- mkdir -p "${NAME%.tar}"
- tar -xf "${NAME}" -C "${NAME%.tar}"
- rm -f "${NAME}"
- done
- cd ..
-fi
-
-# Download/unzip val
-if [ "$val" == "true" ]; then
- wget https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_val.tar # download 6.3G, 50000 images
- mkdir val && mv ILSVRC2012_img_val.tar val/ && cd val && tar -xf ILSVRC2012_img_val.tar
- wget -qO- https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh | bash # move into subdirs
-fi
-
-# Delete corrupted image (optional: PNG under JPEG name that may cause dataloaders to fail)
-# rm train/n04266014/n04266014_10835.JPEG
-
-# TFRecords (optional)
-# wget https://raw.githubusercontent.com/tensorflow/models/master/research/slim/datasets/imagenet_lsvrc_2015_synsets.txt
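The train archive is a tar of tars, one inner tar per WordNet synset; the shell loop above unpacks each into its own class folder. An equivalent sketch using Python's `tarfile`, with placeholder paths:

```python
import tarfile
from pathlib import Path

train_dir = Path("../datasets/imagenet/train")
for inner in train_dir.glob("*.tar"):
    class_dir = inner.with_suffix("")  # e.g. n01440764.tar -> n01440764/
    class_dir.mkdir(exist_ok=True)
    with tarfile.open(inner) as tf:
        tf.extractall(class_dir)
    inner.unlink()  # mirrors the script's rm -f "${NAME}"
```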
diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/aws/userdata.sh b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/aws/userdata.sh
deleted file mode 100644
index 5fc1332ac1b0d1794cf8f8c5f6918059ae5dc381..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/aws/userdata.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/bin/bash
-# AWS EC2 instance startup script https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
-# This script will run only once on first instance start (for a re-start script see mime.sh)
-# /home/ubuntu (ubuntu) or /home/ec2-user (amazon-linux) is working dir
-# Use >300 GB SSD
-
-cd home/ubuntu
-if [ ! -d yolov5 ]; then
- echo "Running first-time script." # install dependencies, download COCO, pull Docker
- git clone https://github.com/ultralytics/yolov5 -b master && sudo chmod -R 777 yolov5
- cd yolov5
- bash data/scripts/get_coco.sh && echo "COCO done." &
- sudo docker pull ultralytics/yolov5:latest && echo "Docker done." &
- python -m pip install --upgrade pip && pip install -r requirements.txt && python detect.py && echo "Requirements done." &
- wait && echo "All tasks done." # finish background tasks
-else
- echo "Running re-start script." # resume interrupted runs
- i=0
- list=$(sudo docker ps -qa) # container list i.e. $'one\ntwo\nthree\nfour'
- while IFS= read -r id; do
- ((i++))
- echo "restarting container $i: $id"
- sudo docker start $id
- # sudo docker exec -it $id python train.py --resume # single-GPU
- sudo docker exec -d $id python utils/aws/resume.py # multi-scenario
- done <<<"$list"
-fi
diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/utils/diffjpeg.py b/spaces/Iceclear/StableSR/StableSR/basicsr/utils/diffjpeg.py
deleted file mode 100644
index 65f96b44f9e7f3f8a589668f0003adf328cc5742..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/basicsr/utils/diffjpeg.py
+++ /dev/null
@@ -1,515 +0,0 @@
-"""
-Modified from https://github.com/mlomnitz/DiffJPEG
-
-For images not divisible by 8
-https://dsp.stackexchange.com/questions/35339/jpeg-dct-padding/35343#35343
-"""
-import itertools
-import numpy as np
-import torch
-import torch.nn as nn
-from torch.nn import functional as F
-
-# ------------------------ utils ------------------------#
-y_table = np.array(
- [[16, 11, 10, 16, 24, 40, 51, 61], [12, 12, 14, 19, 26, 58, 60, 55], [14, 13, 16, 24, 40, 57, 69, 56],
- [14, 17, 22, 29, 51, 87, 80, 62], [18, 22, 37, 56, 68, 109, 103, 77], [24, 35, 55, 64, 81, 104, 113, 92],
- [49, 64, 78, 87, 103, 121, 120, 101], [72, 92, 95, 98, 112, 100, 103, 99]],
- dtype=np.float32).T
-y_table = nn.Parameter(torch.from_numpy(y_table))
-c_table = np.empty((8, 8), dtype=np.float32)
-c_table.fill(99)
-c_table[:4, :4] = np.array([[17, 18, 24, 47], [18, 21, 26, 66], [24, 26, 56, 99], [47, 66, 99, 99]]).T
-c_table = nn.Parameter(torch.from_numpy(c_table))
-
-
-def diff_round(x):
- """ Differentiable rounding function
- """
- return torch.round(x) + (x - torch.round(x))**3
-
-
-def quality_to_factor(quality):
- """ Calculate factor corresponding to quality
-
- Args:
- quality(float): Quality for jpeg compression.
-
- Returns:
- float: Compression factor.
- """
- if quality < 50:
- quality = 5000. / quality
- else:
- quality = 200. - quality * 2
- return quality / 100.
-
-
-# ------------------------ compression ------------------------#
-class RGB2YCbCrJpeg(nn.Module):
- """ Converts RGB image to YCbCr
- """
-
- def __init__(self):
- super(RGB2YCbCrJpeg, self).__init__()
- matrix = np.array([[0.299, 0.587, 0.114], [-0.168736, -0.331264, 0.5], [0.5, -0.418688, -0.081312]],
- dtype=np.float32).T
- self.shift = nn.Parameter(torch.tensor([0., 128., 128.]))
- self.matrix = nn.Parameter(torch.from_numpy(matrix))
-
- def forward(self, image):
- """
- Args:
- image(Tensor): batch x 3 x height x width
-
- Returns:
- Tensor: batch x height x width x 3
- """
- image = image.permute(0, 2, 3, 1)
- result = torch.tensordot(image, self.matrix, dims=1) + self.shift
- return result.view(image.shape)
-
-
-class ChromaSubsampling(nn.Module):
- """ Chroma subsampling on CbCr channels
- """
-
- def __init__(self):
- super(ChromaSubsampling, self).__init__()
-
- def forward(self, image):
- """
- Args:
- image(tensor): batch x height x width x 3
-
- Returns:
- y(tensor): batch x height x width
- cb(tensor): batch x height/2 x width/2
- cr(tensor): batch x height/2 x width/2
- """
- image_2 = image.permute(0, 3, 1, 2).clone()
- cb = F.avg_pool2d(image_2[:, 1, :, :].unsqueeze(1), kernel_size=2, stride=(2, 2), count_include_pad=False)
- cr = F.avg_pool2d(image_2[:, 2, :, :].unsqueeze(1), kernel_size=2, stride=(2, 2), count_include_pad=False)
- cb = cb.permute(0, 2, 3, 1)
- cr = cr.permute(0, 2, 3, 1)
- return image[:, :, :, 0], cb.squeeze(3), cr.squeeze(3)
-
-
-class BlockSplitting(nn.Module):
- """ Splitting image into patches
- """
-
- def __init__(self):
- super(BlockSplitting, self).__init__()
- self.k = 8
-
- def forward(self, image):
- """
- Args:
- image(tensor): batch x height x width
-
- Returns:
- Tensor: batch x h*w/64 x h x w
- """
- height, _ = image.shape[1:3]
- batch_size = image.shape[0]
- image_reshaped = image.view(batch_size, height // self.k, self.k, -1, self.k)
- image_transposed = image_reshaped.permute(0, 1, 3, 2, 4)
- return image_transposed.contiguous().view(batch_size, -1, self.k, self.k)
-
-
-class DCT8x8(nn.Module):
- """ Discrete Cosine Transformation
- """
-
- def __init__(self):
- super(DCT8x8, self).__init__()
- tensor = np.zeros((8, 8, 8, 8), dtype=np.float32)
- for x, y, u, v in itertools.product(range(8), repeat=4):
- tensor[x, y, u, v] = np.cos((2 * x + 1) * u * np.pi / 16) * np.cos((2 * y + 1) * v * np.pi / 16)
- alpha = np.array([1. / np.sqrt(2)] + [1] * 7)
- self.tensor = nn.Parameter(torch.from_numpy(tensor).float())
- self.scale = nn.Parameter(torch.from_numpy(np.outer(alpha, alpha) * 0.25).float())
-
- def forward(self, image):
- """
- Args:
- image(tensor): batch x height x width
-
- Returns:
- Tensor: batch x height x width
- """
- image = image - 128
- result = self.scale * torch.tensordot(image, self.tensor, dims=2)
-        result = result.view(image.shape)
- return result
-
-
-class YQuantize(nn.Module):
- """ JPEG Quantization for Y channel
-
- Args:
- rounding(function): rounding function to use
- """
-
- def __init__(self, rounding):
- super(YQuantize, self).__init__()
- self.rounding = rounding
- self.y_table = y_table
-
- def forward(self, image, factor=1):
- """
- Args:
- image(tensor): batch x height x width
-
- Returns:
- Tensor: batch x height x width
- """
- if isinstance(factor, (int, float)):
- image = image.float() / (self.y_table * factor)
- else:
- b = factor.size(0)
- table = self.y_table.expand(b, 1, 8, 8) * factor.view(b, 1, 1, 1)
- image = image.float() / table
- image = self.rounding(image)
- return image
-
-
-class CQuantize(nn.Module):
- """ JPEG Quantization for CbCr channels
-
- Args:
- rounding(function): rounding function to use
- """
-
- def __init__(self, rounding):
- super(CQuantize, self).__init__()
- self.rounding = rounding
- self.c_table = c_table
-
- def forward(self, image, factor=1):
- """
- Args:
- image(tensor): batch x height x width
-
- Returns:
- Tensor: batch x height x width
- """
- if isinstance(factor, (int, float)):
- image = image.float() / (self.c_table * factor)
- else:
- b = factor.size(0)
- table = self.c_table.expand(b, 1, 8, 8) * factor.view(b, 1, 1, 1)
- image = image.float() / table
- image = self.rounding(image)
- return image
-
-
-class CompressJpeg(nn.Module):
- """Full JPEG compression algorithm
-
- Args:
- rounding(function): rounding function to use
- """
-
- def __init__(self, rounding=torch.round):
- super(CompressJpeg, self).__init__()
- self.l1 = nn.Sequential(RGB2YCbCrJpeg(), ChromaSubsampling())
- self.l2 = nn.Sequential(BlockSplitting(), DCT8x8())
- self.c_quantize = CQuantize(rounding=rounding)
- self.y_quantize = YQuantize(rounding=rounding)
-
- def forward(self, image, factor=1):
- """
- Args:
- image(tensor): batch x 3 x height x width
-
- Returns:
- dict(tensor): Compressed tensor with batch x h*w/64 x 8 x 8.
- """
- y, cb, cr = self.l1(image * 255)
- components = {'y': y, 'cb': cb, 'cr': cr}
- for k in components.keys():
- comp = self.l2(components[k])
- if k in ('cb', 'cr'):
- comp = self.c_quantize(comp, factor=factor)
- else:
- comp = self.y_quantize(comp, factor=factor)
-
- components[k] = comp
-
- return components['y'], components['cb'], components['cr']
-
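-
-# Illustrative usage (shapes follow from 2x chroma subsampling and 8x8 blocks):
-#   y, cb, cr = CompressJpeg()(torch.rand(1, 3, 64, 64))
-#   y.shape, cb.shape  # torch.Size([1, 64, 8, 8]), torch.Size([1, 16, 8, 8])
-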
-
-# ------------------------ decompression ------------------------#
-
-
-class YDequantize(nn.Module):
- """Dequantize Y channel
- """
-
- def __init__(self):
- super(YDequantize, self).__init__()
- self.y_table = y_table
-
- def forward(self, image, factor=1):
- """
- Args:
- image(tensor): batch x height x width
-
- Returns:
- Tensor: batch x height x width
- """
- if isinstance(factor, (int, float)):
- out = image * (self.y_table * factor)
- else:
- b = factor.size(0)
- table = self.y_table.expand(b, 1, 8, 8) * factor.view(b, 1, 1, 1)
- out = image * table
- return out
-
-
-class CDequantize(nn.Module):
- """Dequantize CbCr channel
- """
-
- def __init__(self):
- super(CDequantize, self).__init__()
- self.c_table = c_table
-
- def forward(self, image, factor=1):
- """
- Args:
- image(tensor): batch x height x width
-
- Returns:
- Tensor: batch x height x width
- """
- if isinstance(factor, (int, float)):
- out = image * (self.c_table * factor)
- else:
- b = factor.size(0)
- table = self.c_table.expand(b, 1, 8, 8) * factor.view(b, 1, 1, 1)
- out = image * table
- return out
-
-
-class iDCT8x8(nn.Module):
- """Inverse discrete Cosine Transformation
- """
-
- def __init__(self):
- super(iDCT8x8, self).__init__()
- alpha = np.array([1. / np.sqrt(2)] + [1] * 7)
- self.alpha = nn.Parameter(torch.from_numpy(np.outer(alpha, alpha)).float())
- tensor = np.zeros((8, 8, 8, 8), dtype=np.float32)
- for x, y, u, v in itertools.product(range(8), repeat=4):
- tensor[x, y, u, v] = np.cos((2 * u + 1) * x * np.pi / 16) * np.cos((2 * v + 1) * y * np.pi / 16)
- self.tensor = nn.Parameter(torch.from_numpy(tensor).float())
-
- def forward(self, image):
- """
- Args:
- image(tensor): batch x height x width
-
- Returns:
- Tensor: batch x height x width
- """
- image = image * self.alpha
- result = 0.25 * torch.tensordot(image, self.tensor, dims=2) + 128
-        result = result.view(image.shape)
- return result
-
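-
-# Illustrative round-trip check (_demo_dct_roundtrip is a hypothetical helper):
-# iDCT8x8 should invert DCT8x8 up to floating-point error.
-def _demo_dct_roundtrip():
-    blocks = torch.rand(2, 4, 8, 8) * 255  # fake 8x8 pixel blocks
-    coeffs = DCT8x8()(blocks)
-    recon = iDCT8x8()(coeffs)
-    print(torch.max(torch.abs(recon - blocks)))  # near zero (floating-point error)
-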
-
-class BlockMerging(nn.Module):
- """Merge patches into image
- """
-
- def __init__(self):
- super(BlockMerging, self).__init__()
-
- def forward(self, patches, height, width):
- """
- Args:
-            patches(tensor): batch x height*width/64 x 8 x 8
- height(int)
- width(int)
-
- Returns:
- Tensor: batch x height x width
- """
- k = 8
- batch_size = patches.shape[0]
- image_reshaped = patches.view(batch_size, height // k, width // k, k, k)
- image_transposed = image_reshaped.permute(0, 1, 3, 2, 4)
- return image_transposed.contiguous().view(batch_size, height, width)
-
-
-class ChromaUpsampling(nn.Module):
- """Upsample chroma layers
- """
-
- def __init__(self):
- super(ChromaUpsampling, self).__init__()
-
- def forward(self, y, cb, cr):
- """
- Args:
- y(tensor): y channel image
- cb(tensor): cb channel
- cr(tensor): cr channel
-
- Returns:
- Tensor: batch x height x width x 3
- """
-
- def repeat(x, k=2):
- height, width = x.shape[1:3]
- x = x.unsqueeze(-1)
- x = x.repeat(1, 1, k, k)
- x = x.view(-1, height * k, width * k)
- return x
-
- cb = repeat(cb)
- cr = repeat(cr)
- return torch.cat([y.unsqueeze(3), cb.unsqueeze(3), cr.unsqueeze(3)], dim=3)
-
-
-class YCbCr2RGBJpeg(nn.Module):
- """Converts YCbCr image to RGB JPEG
- """
-
- def __init__(self):
- super(YCbCr2RGBJpeg, self).__init__()
-
- matrix = np.array([[1., 0., 1.402], [1, -0.344136, -0.714136], [1, 1.772, 0]], dtype=np.float32).T
- self.shift = nn.Parameter(torch.tensor([0, -128., -128.]))
- self.matrix = nn.Parameter(torch.from_numpy(matrix))
-
- def forward(self, image):
- """
- Args:
- image(tensor): batch x height x width x 3
-
- Returns:
- Tensor: batch x 3 x height x width
- """
- result = torch.tensordot(image + self.shift, self.matrix, dims=1)
- return result.view(image.shape).permute(0, 3, 1, 2)
-
-
-class DeCompressJpeg(nn.Module):
- """Full JPEG decompression algorithm
-
- Args:
- rounding(function): rounding function to use
- """
-
- def __init__(self, rounding=torch.round):
- super(DeCompressJpeg, self).__init__()
- self.c_dequantize = CDequantize()
- self.y_dequantize = YDequantize()
- self.idct = iDCT8x8()
- self.merging = BlockMerging()
- self.chroma = ChromaUpsampling()
- self.colors = YCbCr2RGBJpeg()
-
- def forward(self, y, cb, cr, imgh, imgw, factor=1):
- """
-        Args:
-            y, cb, cr (tensor): quantized DCT components, batch x h*w/64 x 8 x 8
-            imgh(int): original image height
-            imgw(int): original image width
-            factor(float): quantization factor
-
- Returns:
- Tensor: batch x 3 x height x width
- """
- components = {'y': y, 'cb': cb, 'cr': cr}
- for k in components.keys():
- if k in ('cb', 'cr'):
- comp = self.c_dequantize(components[k], factor=factor)
- height, width = int(imgh / 2), int(imgw / 2)
- else:
- comp = self.y_dequantize(components[k], factor=factor)
- height, width = imgh, imgw
- comp = self.idct(comp)
- components[k] = self.merging(comp, height, width)
- #
- image = self.chroma(components['y'], components['cb'], components['cr'])
- image = self.colors(image)
-
- image = torch.min(255 * torch.ones_like(image), torch.max(torch.zeros_like(image), image))
- return image / 255
-
-
-# ------------------------ main DiffJPEG ------------------------ #
-
-
-class DiffJPEG(nn.Module):
- """This JPEG algorithm result is slightly different from cv2.
- DiffJPEG supports batch processing.
-
- Args:
- differentiable(bool): If True, uses custom differentiable rounding function, if False, uses standard torch.round
- """
-
- def __init__(self, differentiable=True):
- super(DiffJPEG, self).__init__()
- if differentiable:
- rounding = diff_round
- else:
- rounding = torch.round
-
- self.compress = CompressJpeg(rounding=rounding)
- self.decompress = DeCompressJpeg(rounding=rounding)
-
- def forward(self, x, quality):
- """
- Args:
- x (Tensor): Input image, bchw, rgb, [0, 1]
- quality(float): Quality factor for jpeg compression scheme.
- """
- factor = quality
- if isinstance(factor, (int, float)):
- factor = quality_to_factor(factor)
- else:
- for i in range(factor.size(0)):
- factor[i] = quality_to_factor(factor[i])
- h, w = x.size()[-2:]
- h_pad, w_pad = 0, 0
-        # pad to a multiple of 16: chroma is subsampled by 2 and DCT blocks are 8x8, so 2 * 8 = 16
- if h % 16 != 0:
- h_pad = 16 - h % 16
- if w % 16 != 0:
- w_pad = 16 - w % 16
- x = F.pad(x, (0, w_pad, 0, h_pad), mode='constant', value=0)
-
- y, cb, cr = self.compress(x, factor=factor)
- recovered = self.decompress(y, cb, cr, (h + h_pad), (w + w_pad), factor=factor)
- recovered = recovered[:, :, 0:h, 0:w]
- return recovered
-
-
-if __name__ == '__main__':
- import cv2
-
- from basicsr.utils import img2tensor, tensor2img
-
- img_gt = cv2.imread('test.png') / 255.
-
- # -------------- cv2 -------------- #
- encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 20]
- _, encimg = cv2.imencode('.jpg', img_gt * 255., encode_param)
- img_lq = np.float32(cv2.imdecode(encimg, 1))
- cv2.imwrite('cv2_JPEG_20.png', img_lq)
-
- # -------------- DiffJPEG -------------- #
- jpeger = DiffJPEG(differentiable=False).cuda()
- img_gt = img2tensor(img_gt)
- img_gt = torch.stack([img_gt, img_gt]).cuda()
- quality = img_gt.new_tensor([20, 40])
- out = jpeger(img_gt, quality=quality)
-
- cv2.imwrite('pt_JPEG_20.png', tensor2img(out[0]))
- cv2.imwrite('pt_JPEG_40.png', tensor2img(out[1]))
diff --git a/spaces/InpaintAI/Inpaint-Anything/lama_inpaint.py b/spaces/InpaintAI/Inpaint-Anything/lama_inpaint.py
deleted file mode 100644
index 517012a4461e9896fbe564d44c2ec59c43ffdd0a..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/lama_inpaint.py
+++ /dev/null
@@ -1,205 +0,0 @@
-import os
-import sys
-import numpy as np
-import torch
-import yaml
-import glob
-import argparse
-from PIL import Image
-from omegaconf import OmegaConf
-from pathlib import Path
-
-os.environ['OMP_NUM_THREADS'] = '1'
-os.environ['OPENBLAS_NUM_THREADS'] = '1'
-os.environ['MKL_NUM_THREADS'] = '1'
-os.environ['VECLIB_MAXIMUM_THREADS'] = '1'
-os.environ['NUMEXPR_NUM_THREADS'] = '1'
-
-sys.path.insert(0, str(Path(__file__).resolve().parent / "third_party" / "lama"))
-
-from saicinpainting.evaluation.utils import move_to_device
-from saicinpainting.training.trainers import load_checkpoint
-from saicinpainting.evaluation.data import pad_tensor_to_modulo
-
-from utils import load_img_to_array, save_array_to_img
-
-
-@torch.no_grad()
-def inpaint_img_with_lama(
- img: np.ndarray,
- mask: np.ndarray,
- config_p: str,
- ckpt_p: str,
- mod=8,
- device="cuda"
-):
- assert len(mask.shape) == 2
- if np.max(mask) == 1:
- mask = mask * 255
- img = torch.from_numpy(img).float().div(255.)
- mask = torch.from_numpy(mask).float()
- predict_config = OmegaConf.load(config_p)
- predict_config.model.path = ckpt_p
- # device = torch.device(predict_config.device)
- device = torch.device(device)
-
- train_config_path = os.path.join(
- predict_config.model.path, 'config.yaml')
-
- with open(train_config_path, 'r') as f:
- train_config = OmegaConf.create(yaml.safe_load(f))
-
- train_config.training_model.predict_only = True
- train_config.visualizer.kind = 'noop'
-
- checkpoint_path = os.path.join(
- predict_config.model.path, 'models',
- predict_config.model.checkpoint
- )
- model = load_checkpoint(
- train_config, checkpoint_path, strict=False, map_location=device)
- model.freeze()
- if not predict_config.get('refine', False):
- model.to(device)
-
- batch = {}
- batch['image'] = img.permute(2, 0, 1).unsqueeze(0)
- batch['mask'] = mask[None, None]
- unpad_to_size = [batch['image'].shape[2], batch['image'].shape[3]]
- batch['image'] = pad_tensor_to_modulo(batch['image'], mod)
- batch['mask'] = pad_tensor_to_modulo(batch['mask'], mod)
- batch = move_to_device(batch, device)
- batch['mask'] = (batch['mask'] > 0) * 1
-
- batch = model(batch)
- cur_res = batch[predict_config.out_key][0].permute(1, 2, 0)
- cur_res = cur_res.detach().cpu().numpy()
-
- if unpad_to_size is not None:
- orig_height, orig_width = unpad_to_size
- cur_res = cur_res[:orig_height, :orig_width]
-
- cur_res = np.clip(cur_res * 255, 0, 255).astype('uint8')
- return cur_res
-
-
-def build_lama_model(
- config_p: str,
- ckpt_p: str,
- device="cuda"
-):
- predict_config = OmegaConf.load(config_p)
- predict_config.model.path = ckpt_p
- # device = torch.device(predict_config.device)
- device = torch.device(device)
-
- train_config_path = os.path.join(
- predict_config.model.path, 'config.yaml')
-
- with open(train_config_path, 'r') as f:
- train_config = OmegaConf.create(yaml.safe_load(f))
-
- train_config.training_model.predict_only = True
- train_config.visualizer.kind = 'noop'
-
- checkpoint_path = os.path.join(
- predict_config.model.path, 'models',
- predict_config.model.checkpoint
- )
- model = load_checkpoint(
- train_config, checkpoint_path, strict=False, map_location=device)
- model.freeze()
- if not predict_config.get('refine', False):
- model.to(device)
-
- return model
-
-
-@torch.no_grad()
-def inpaint_img_with_builded_lama(
- model,
- img: np.ndarray,
- mask: np.ndarray,
- config_p: str,
- mod=8,
- device="cuda"
-):
- assert len(mask.shape) == 2
- if np.max(mask) == 1:
- mask = mask * 255
- img = torch.from_numpy(img).float().div(255.)
- mask = torch.from_numpy(mask).float()
- predict_config = OmegaConf.load(config_p)
-
- batch = {}
- batch['image'] = img.permute(2, 0, 1).unsqueeze(0)
- batch['mask'] = mask[None, None]
- unpad_to_size = [batch['image'].shape[2], batch['image'].shape[3]]
- batch['image'] = pad_tensor_to_modulo(batch['image'], mod)
- batch['mask'] = pad_tensor_to_modulo(batch['mask'], mod)
- batch = move_to_device(batch, device)
- batch['mask'] = (batch['mask'] > 0) * 1
-
- batch = model(batch)
- cur_res = batch[predict_config.out_key][0].permute(1, 2, 0)
- cur_res = cur_res.detach().cpu().numpy()
-
- if unpad_to_size is not None:
- orig_height, orig_width = unpad_to_size
- cur_res = cur_res[:orig_height, :orig_width]
-
- cur_res = np.clip(cur_res * 255, 0, 255).astype('uint8')
- return cur_res
-
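-
-# Illustrative usage of the prebuilt-model path (config/checkpoint paths follow
-# the defaults given in setup_args below):
-#   model = build_lama_model("./third_party/lama/configs/prediction/default.yaml", "big-lama")
-#   out = inpaint_img_with_builded_lama(
-#       model, img, mask, "./third_party/lama/configs/prediction/default.yaml")
-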
-
-def setup_args(parser):
- parser.add_argument(
- "--input_img", type=str, required=True,
- help="Path to a single input img",
- )
- parser.add_argument(
- "--input_mask_glob", type=str, required=True,
- help="Glob to input masks",
- )
- parser.add_argument(
- "--output_dir", type=str, required=True,
- help="Output path to the directory with results.",
- )
- parser.add_argument(
- "--lama_config", type=str,
- default="./third_party/lama/configs/prediction/default.yaml",
- help="The path to the config file of lama model. "
- "Default: the config of big-lama",
- )
- parser.add_argument(
- "--lama_ckpt", type=str, required=True,
- help="The path to the lama checkpoint.",
- )
-
-
-if __name__ == "__main__":
- """Example usage:
- python lama_inpaint.py \
- --input_img FA_demo/FA1_dog.png \
- --input_mask_glob "results/FA1_dog/mask*.png" \
- --output_dir results \
- --lama_config lama/configs/prediction/default.yaml \
- --lama_ckpt big-lama
- """
- parser = argparse.ArgumentParser()
- setup_args(parser)
- args = parser.parse_args(sys.argv[1:])
- device = "cuda" if torch.cuda.is_available() else "cpu"
-
- img_stem = Path(args.input_img).stem
- mask_ps = sorted(glob.glob(args.input_mask_glob))
- out_dir = Path(args.output_dir) / img_stem
- out_dir.mkdir(parents=True, exist_ok=True)
-
- img = load_img_to_array(args.input_img)
- for mask_p in mask_ps:
- mask = load_img_to_array(mask_p)
- img_inpainted_p = out_dir / f"inpainted_with_{Path(mask_p).name}"
- img_inpainted = inpaint_img_with_lama(
- img, mask, args.lama_config, args.lama_ckpt, device=device)
- save_array_to_img(img_inpainted, img_inpainted_p)
\ No newline at end of file
diff --git a/spaces/IoannisTr/Tech_Stocks_Trading_Assistant/FinBERT_training.py b/spaces/IoannisTr/Tech_Stocks_Trading_Assistant/FinBERT_training.py
deleted file mode 100644
index 8659fdef1dd82f8a699604ef2b73eab99d62c4aa..0000000000000000000000000000000000000000
--- a/spaces/IoannisTr/Tech_Stocks_Trading_Assistant/FinBERT_training.py
+++ /dev/null
@@ -1,82 +0,0 @@
-import os
-os.environ["TOKENIZERS_PARALLELISM"] = "false"
-os.environ['WANDB_DISABLED'] = "true"
-import pandas as pd
-from sklearn.preprocessing import LabelEncoder
-from sklearn.model_selection import train_test_split
-from transformers import (
- AutoTokenizer,
- DataCollatorWithPadding,
- TrainingArguments,
- Trainer,
- AutoModelForSequenceClassification
-)
-from datasets import Dataset
-
-#######################################
-########## FinBERT training ###########
-#######################################
-
-class args:
- model = 'ProsusAI/finbert'
-
-df = pd.read_csv('all-data.csv',
- names = ['labels','messages'],
- encoding='ISO-8859-1')
-
-df = df[['messages', 'labels']]
-
-le = LabelEncoder()
-df['labels'] = le.fit_transform(df['labels'])
-
-X, y = df['messages'].values, df['labels'].values
-
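-# Two-stage split: hold out 10% for test, then 20% of the remainder for
-# validation, i.e. roughly a 72/18/10 train/valid/test split.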
-xtrain, xtest, ytrain, ytest = train_test_split(X, y, test_size=0.1)
-xtrain, xvalid, ytrain, yvalid = train_test_split(xtrain, ytrain, test_size=0.2)
-
-train_dataset_raw = Dataset.from_dict({'text':xtrain, 'labels':ytrain})
-valid_dataset_raw = Dataset.from_dict({'text':xvalid, 'labels':yvalid})
-
-tokenizer = AutoTokenizer.from_pretrained(args.model)
-
-def tokenize_fn(examples):
- return tokenizer(examples['text'], truncation=True)
-
-train_dataset = train_dataset_raw.map(tokenize_fn, batched=True)
-valid_dataset = valid_dataset_raw.map(tokenize_fn, batched=True)
-
-data_collator = DataCollatorWithPadding(tokenizer)
-
-model = AutoModelForSequenceClassification.from_pretrained(args.model)
-
-train_args = TrainingArguments(
- './Finbert Trained/',
- per_device_train_batch_size=16,
- per_device_eval_batch_size=2*16,
- num_train_epochs=5,
- learning_rate=2e-5,
- weight_decay=0.01,
- warmup_ratio=0.1,
- do_eval=True,
- do_train=True,
- do_predict=True,
- evaluation_strategy='epoch',
- save_strategy="no",
-)
-
-trainer = Trainer(
- model,
- train_args,
- train_dataset=train_dataset,
- eval_dataset=valid_dataset,
- data_collator=data_collator,
- tokenizer=tokenizer
-)
-
-trainer.train()
-
-# saving the model and the weights
-model.save_pretrained('fine_tuned_FinBERT')
-# saving the tokenizer
-tokenizer.save_pretrained("fine_tuned_FinBERT/tokenizer/")
-
diff --git a/spaces/JeffJing/ZookChatBot/steamship/data/plugin/hosting.py b/spaces/JeffJing/ZookChatBot/steamship/data/plugin/hosting.py
deleted file mode 100644
index 2785c94d9d6d813d6377c156f060d5a49075ee2e..0000000000000000000000000000000000000000
--- a/spaces/JeffJing/ZookChatBot/steamship/data/plugin/hosting.py
+++ /dev/null
@@ -1,66 +0,0 @@
-from enum import Enum
-
-
-class HostingType(str, Enum):
- """The type of hosting provider to deploy to."""
-
- LAMBDA = "lambda"
- ECS = "ecs"
-
-
-class HostingEnvironment(str, Enum):
- """The software environment required for deployment."""
-
- PYTHON38 = "python38"
- STEAMSHIP_PYTORCH_CPU = "inferenceCpu"
-
-
-class HostingMemory(str, Enum):
- """The amount of memory required for deployment.
-
- This is mapped to a value dependent on the HostingType it is combined with.
- """
-
- MIN = "min"
- XXS = "xxs"
- XS = "xs"
- SM = "sm"
- MD = "md"
- LG = "lg"
- XL = "xl"
- XXL = "xxl"
- MAX = "max"
-
-
-class HostingCpu(str, Enum):
- """The amount of CPU required for deployment.
-
- This is mapped to a value dependent on the HostingType it is combined with.
- """
-
- MIN = "min"
- XXS = "xxs"
- XS = "xs"
- SM = "sm"
- MD = "md"
- LG = "lg"
- XL = "xl"
- XXL = "xxl"
- MAX = "max"
-
-
-class HostingTimeout(str, Enum):
- """The request timeout required for deployment.
-
- This is mapped to a value dependent on the HostingType it is combined with.
- """
-
- MIN = "min"
- XXS = "xxs"
- XS = "xs"
- SM = "sm"
- MD = "md"
- LG = "lg"
- XL = "xl"
- XXL = "xxl"
- MAX = "max"
diff --git a/spaces/Joabutt/furry-diffusion/README.md b/spaces/Joabutt/furry-diffusion/README.md
deleted file mode 100644
index 87d4534907210c6977b17aabb4837e938d18787f..0000000000000000000000000000000000000000
--- a/spaces/Joabutt/furry-diffusion/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Furry Diffusion
-emoji: 👁
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: wtfpl
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/JohnnyPittt/audio-styling/deepafx_st/callbacks/params.py b/spaces/JohnnyPittt/audio-styling/deepafx_st/callbacks/params.py
deleted file mode 100644
index e327671f665070f0be3e7f561c68fa5e3324811b..0000000000000000000000000000000000000000
--- a/spaces/JohnnyPittt/audio-styling/deepafx_st/callbacks/params.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import numpy as np
-import pytorch_lightning as pl
-import matplotlib.pyplot as plt
-
-import deepafx_st.utils as utils
-
-
-class LogParametersCallback(pl.callbacks.Callback):
- def __init__(self, num_examples=4):
- super().__init__()
-        self.num_examples = num_examples
-
- def on_validation_epoch_start(self, trainer, pl_module):
- """At the start of validation init storage for parameters."""
- self.params = []
-
- def on_validation_batch_end(
- self,
- trainer,
- pl_module,
- outputs,
- batch,
- batch_idx,
- dataloader_idx,
- ):
- """Called when the validation batch ends.
-
- Here we log the parameters only from the first batch.
-
- """
- if outputs is not None and batch_idx == 0:
- examples = np.min([self.num_examples, outputs["x"].shape[0]])
- for n in range(examples):
- self.log_parameters(
- outputs,
- n,
- pl_module.processor.ports,
- trainer.global_step,
- trainer.logger,
-                    True,  # this branch only runs when batch_idx == 0
- )
-
- def on_validation_epoch_end(self, trainer, pl_module):
- pass
-
- def log_parameters(self, outputs, batch_idx, ports, global_step, logger, log=True):
- p = outputs["p"][batch_idx, ...]
-
- table = ""
-
- # table += f"""## {plugin["name"]}\n"""
- table += "| Index| Name | Value | Units | Min | Max | Default | Raw Value | \n"
- table += "|------|------|------:|:------|----:|----:|--------:| ---------:| \n"
-
- start_idx = 0
- # set plugin parameters based on provided normalized parameters
- for port_list in ports:
- for pidx, port in enumerate(port_list):
- param_max = port["max"]
- param_min = port["min"]
- param_name = port["name"]
- param_default = port["default"]
- param_units = port["units"]
-
- param_val = p[start_idx]
- denorm_val = utils.denormalize(param_val, param_max, param_min)
-
- # add values to table in row
- table += f"| {start_idx + 1} | {param_name} "
- if np.abs(denorm_val) > 10:
- table += f"| {denorm_val:0.1f} "
- table += f"| {param_units} "
- table += f"| {param_min:0.1f} | {param_max:0.1f} "
- table += f"| {param_default:0.1f} "
- else:
- table += f"| {denorm_val:0.3f} "
- table += f"| {param_units} "
- table += f"| {param_min:0.3f} | {param_max:0.3f} "
- table += f"| {param_default:0.3f} "
-
- table += f"| {np.squeeze(param_val):0.2f} | \n"
- start_idx += 1
-
- table += "\n\n"
-
- if log:
- logger.experiment.add_text(f"params/{batch_idx+1}", table, global_step)
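-
-
-# Illustrative usage sketch (assumes a LightningModule whose `processor.ports`
-# lists parameter metadata, as this callback expects):
-#   trainer = pl.Trainer(callbacks=[LogParametersCallback(num_examples=4)])
-#   trainer.fit(system)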
diff --git a/spaces/JohnnyPittt/audio-styling/deepafx_st/processors/dsp/peq.py b/spaces/JohnnyPittt/audio-styling/deepafx_st/processors/dsp/peq.py
deleted file mode 100644
index 8083b6dd1fcc0eb3d5f11aa1d41cb4446d5bffd2..0000000000000000000000000000000000000000
--- a/spaces/JohnnyPittt/audio-styling/deepafx_st/processors/dsp/peq.py
+++ /dev/null
@@ -1,323 +0,0 @@
-import torch
-import numpy as np
-import scipy.signal
-from numba import jit
-
-from deepafx_st.processors.processor import Processor
-
-
-@jit(nopython=True)
-def biquad(
- gain_dB: float,
- cutoff_freq: float,
- q_factor: float,
- sample_rate: float,
- filter_type: str,
-):
- """Use design parameters to generate coeffieicnets for a specific filter type.
-
- Args:
- gain_dB (float): Shelving filter gain in dB.
- cutoff_freq (float): Cutoff frequency in Hz.
- q_factor (float): Q factor.
- sample_rate (float): Sample rate in Hz.
- filter_type (str): Filter type.
- One of "low_shelf", "high_shelf", or "peaking"
-
- Returns:
- b (np.ndarray): Numerator filter coefficients stored as [b0, b1, b2]
- a (np.ndarray): Denominator filter coefficients stored as [a0, a1, a2]
- """
-
- A = 10 ** (gain_dB / 40.0)
- w0 = 2.0 * np.pi * (cutoff_freq / sample_rate)
- alpha = np.sin(w0) / (2.0 * q_factor)
-
- cos_w0 = np.cos(w0)
- sqrt_A = np.sqrt(A)
-
- if filter_type == "high_shelf":
- b0 = A * ((A + 1) + (A - 1) * cos_w0 + 2 * sqrt_A * alpha)
- b1 = -2 * A * ((A - 1) + (A + 1) * cos_w0)
- b2 = A * ((A + 1) + (A - 1) * cos_w0 - 2 * sqrt_A * alpha)
- a0 = (A + 1) - (A - 1) * cos_w0 + 2 * sqrt_A * alpha
- a1 = 2 * ((A - 1) - (A + 1) * cos_w0)
- a2 = (A + 1) - (A - 1) * cos_w0 - 2 * sqrt_A * alpha
- elif filter_type == "low_shelf":
- b0 = A * ((A + 1) - (A - 1) * cos_w0 + 2 * sqrt_A * alpha)
- b1 = 2 * A * ((A - 1) - (A + 1) * cos_w0)
- b2 = A * ((A + 1) - (A - 1) * cos_w0 - 2 * sqrt_A * alpha)
- a0 = (A + 1) + (A - 1) * cos_w0 + 2 * sqrt_A * alpha
- a1 = -2 * ((A - 1) + (A + 1) * cos_w0)
- a2 = (A + 1) + (A - 1) * cos_w0 - 2 * sqrt_A * alpha
- elif filter_type == "peaking":
- b0 = 1 + alpha * A
- b1 = -2 * cos_w0
- b2 = 1 - alpha * A
- a0 = 1 + alpha / A
- a1 = -2 * cos_w0
- a2 = 1 - alpha / A
-    else:
-        # fail loudly instead of falling through with undefined coefficients
-        raise ValueError("Invalid filter_type; expected low_shelf, high_shelf, or peaking.")
-
- b = np.array([b0, b1, b2]) / a0
- a = np.array([a0, a1, a2]) / a0
-
- return b, a
-
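-
-# Illustrative sanity check (_demo_biquad_response is a hypothetical helper):
-# a +6 dB peaking filter should peak at about 6 dB near its center frequency.
-def _demo_biquad_response():
-    b, a = biquad(6.0, 1000.0, 0.707, 44100.0, "peaking")
-    w, h = scipy.signal.freqz(b, a, worN=4096, fs=44100.0)
-    print(20 * np.log10(np.abs(h)).max())  # ~6.0
-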
-
-# Adapted from https://github.com/csteinmetz1/pyloudnorm/blob/master/pyloudnorm/iirfilter.py
-def parametric_eq(
- x: np.ndarray,
- sample_rate: float,
- low_shelf_gain_dB: float = 0.0,
- low_shelf_cutoff_freq: float = 80.0,
- low_shelf_q_factor: float = 0.707,
- first_band_gain_dB: float = 0.0,
- first_band_cutoff_freq: float = 300.0,
- first_band_q_factor: float = 0.707,
- second_band_gain_dB: float = 0.0,
- second_band_cutoff_freq: float = 1000.0,
- second_band_q_factor: float = 0.707,
- third_band_gain_dB: float = 0.0,
- third_band_cutoff_freq: float = 4000.0,
- third_band_q_factor: float = 0.707,
- fourth_band_gain_dB: float = 0.0,
- fourth_band_cutoff_freq: float = 8000.0,
- fourth_band_q_factor: float = 0.707,
- high_shelf_gain_dB: float = 0.0,
- high_shelf_cutoff_freq: float = 1000.0,
- high_shelf_q_factor: float = 0.707,
- dtype=np.float32,
-):
- """Six-band parametric EQ.
-
- Low-shelf -> Band 1 -> Band 2 -> Band 3 -> Band 4 -> High-shelf
-
- Args:
-
-
- """
- # print(f"autodiff peq fs = {sample_rate}")
-
- # -------- apply low-shelf filter --------
-    b, a = biquad(
- low_shelf_gain_dB,
- low_shelf_cutoff_freq,
- low_shelf_q_factor,
- sample_rate,
- "low_shelf",
- )
- sos0 = np.concatenate((b, a))
- x = scipy.signal.lfilter(b, a, x)
-
- # -------- apply first-band peaking filter --------
-    b, a = biquad(
- first_band_gain_dB,
- first_band_cutoff_freq,
- first_band_q_factor,
- sample_rate,
- "peaking",
- )
- sos1 = np.concatenate((b, a))
- x = scipy.signal.lfilter(b, a, x)
-
- # -------- apply second-band peaking filter --------
-    b, a = biquad(
- second_band_gain_dB,
- second_band_cutoff_freq,
- second_band_q_factor,
- sample_rate,
- "peaking",
- )
- sos2 = np.concatenate((b, a))
- x = scipy.signal.lfilter(b, a, x)
-
- # -------- apply third-band peaking filter --------
-    b, a = biquad(
- third_band_gain_dB,
- third_band_cutoff_freq,
- third_band_q_factor,
- sample_rate,
- "peaking",
- )
- sos3 = np.concatenate((b, a))
- x = scipy.signal.lfilter(b, a, x)
-
- # -------- apply fourth-band peaking filter --------
-    b, a = biquad(
- fourth_band_gain_dB,
- fourth_band_cutoff_freq,
- fourth_band_q_factor,
- sample_rate,
- "peaking",
- )
- sos4 = np.concatenate((b, a))
- x = scipy.signal.lfilter(b, a, x)
-
- # -------- apply high-shelf filter --------
-    b, a = biquad(
- high_shelf_gain_dB,
- high_shelf_cutoff_freq,
- high_shelf_q_factor,
- sample_rate,
- "high_shelf",
- )
- sos5 = np.concatenate((b, a))
- x = scipy.signal.lfilter(b, a, x)
-
- return x.astype(dtype)
-
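-
-# Illustrative usage (_demo_parametric_eq is a hypothetical helper): boost the
-# low shelf on one second of noise and check the output shape and dtype.
-def _demo_parametric_eq():
-    x = np.random.randn(44100).astype(np.float32)
-    y = parametric_eq(x, 44100.0, low_shelf_gain_dB=6.0)
-    print(y.shape, y.dtype)  # (44100,) float32
-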
-
-class ParametricEQ(Processor):
- def __init__(
- self,
- sample_rate,
- min_gain_dB=-24.0,
- default_gain_dB=0.0,
- max_gain_dB=24.0,
- min_q_factor=0.1,
- default_q_factor=0.707,
- max_q_factor=10,
- eps=1e-8,
- ):
- """ """
- super().__init__()
- self.sample_rate = sample_rate
- self.eps = eps
- self.ports = [
- {
- "name": "Lowshelf gain",
- "min": min_gain_dB,
- "max": max_gain_dB,
- "default": default_gain_dB,
- "units": "dB",
- },
- {
- "name": "Lowshelf cutoff",
- "min": 20.0,
- "max": 200.0,
- "default": 100.0,
- "units": "Hz",
- },
- {
- "name": "Lowshelf Q",
- "min": min_q_factor,
- "max": max_q_factor,
- "default": default_q_factor,
- "units": "",
- },
- {
- "name": "First band gain",
- "min": min_gain_dB,
- "max": max_gain_dB,
- "default": default_gain_dB,
- "units": "dB",
- },
- {
- "name": "First band cutoff",
- "min": 200.0,
- "max": 2000.0,
- "default": 400.0,
- "units": "Hz",
- },
- {
- "name": "First band Q",
- "min": min_q_factor,
- "max": max_q_factor,
- "default": 0.707,
- "units": "",
- },
- {
- "name": "Second band gain",
- "min": min_gain_dB,
- "max": max_gain_dB,
- "default": default_gain_dB,
- "units": "dB",
- },
- {
- "name": "Second band cutoff",
- "min": 800.0,
- "max": 4000.0,
- "default": 1000.0,
- "units": "Hz",
- },
- {
- "name": "Second band Q",
- "min": min_q_factor,
- "max": max_q_factor,
- "default": default_q_factor,
- "units": "",
- },
- {
- "name": "Third band gain",
- "min": min_gain_dB,
- "max": max_gain_dB,
- "default": default_gain_dB,
- "units": "dB",
- },
- {
- "name": "Third band cutoff",
- "min": 2000.0,
- "max": 8000.0,
- "default": 4000.0,
- "units": "Hz",
- },
- {
- "name": "Third band Q",
- "min": min_q_factor,
- "max": max_q_factor,
- "default": default_q_factor,
- "units": "",
- },
- {
- "name": "Fourth band gain",
- "min": min_gain_dB,
- "max": max_gain_dB,
- "default": default_gain_dB,
- "units": "dB",
- },
- {
- "name": "Fourth band cutoff",
- "min": 4000.0,
- "max": (24000 // 2) * 0.9,
- "default": 8000.0,
- "units": "Hz",
- },
- {
- "name": "Fourth band Q",
- "min": min_q_factor,
- "max": max_q_factor,
- "default": default_q_factor,
- "units": "",
- },
- {
- "name": "Highshelf gain",
- "min": min_gain_dB,
- "max": max_gain_dB,
- "default": default_gain_dB,
- "units": "dB",
- },
- {
- "name": "Highshelf cutoff",
- "min": 4000.0,
- "max": (24000 // 2) * 0.9,
- "default": 8000.0,
- "units": "Hz",
- },
- {
- "name": "Highshelf Q",
- "min": min_q_factor,
- "max": max_q_factor,
- "default": default_q_factor,
- "units": "",
- },
- ]
-
- self.num_control_params = len(self.ports)
- self.process_fn = parametric_eq
-
- def forward(self, x, p, sample_rate=24000, **kwargs):
- "All processing in the forward is in numpy."
- return self.run_series(x, p, sample_rate)
diff --git a/spaces/JoshuaWS3/hakurei-waifu-diffusion/app.py b/spaces/JoshuaWS3/hakurei-waifu-diffusion/app.py
deleted file mode 100644
index ccef706bf3035fe470bf6a4f5bd701b18bf59133..0000000000000000000000000000000000000000
--- a/spaces/JoshuaWS3/hakurei-waifu-diffusion/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/hakurei/waifu-diffusion").launch()
\ No newline at end of file
diff --git a/spaces/Jour/Bloom-Translation/app.py b/spaces/Jour/Bloom-Translation/app.py
deleted file mode 100644
index eb999798a2270c96ad365ea5a37700859e8bd319..0000000000000000000000000000000000000000
--- a/spaces/Jour/Bloom-Translation/app.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import gradio as gr
-import requests
-import json
-import os
-
-
-LANGUAGES = ['Akan', 'Arabic', 'Assamese', 'Bambara', 'Bengali', 'Catalan', 'English', 'Spanish', 'Basque', 'French', 'Gujarati', 'Hindi',
-'Indonesian', 'Igbo', 'Kikuyu', 'Kannada', 'Ganda', 'Lingala', 'Malayalam', 'Marathi', 'Nepali', 'Chichewa', 'Oriya', 'Panjabi', 'Portuguese',
-'Kirundi', 'Kinyarwanda', 'Shona', 'Sotho', 'Swahili', 'Tamil', 'Telugu', 'Tswana', 'Tsonga', 'Twi', 'Urdu', 'Vietnamese', 'Wolof', 'Xhosa',
-'Yoruba', 'Chinese', 'Zulu']
-
-API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
-
-
-def translate(input_lang, output_lang, text):
-    """Translate text from the input language to the output language."""
-
-    instruction = f"""Translation in {input_lang}: {text.strip()} Translation in {output_lang}:"""
-
-    json_ = {
-        "inputs": instruction,
-        "parameters": {
-            "return_full_text": True,
-            "do_sample": False,
-            "max_new_tokens": 250,
-        },
-        "options": {
-            "use_cache": True,
-            "wait_for_model": True,
-        },
-    }
-    response = requests.request("POST", API_URL, json=json_)
-    generated = response.json()[0]['generated_text']
-    generated = generated.replace(instruction, '', 1)
-    # Skip past a repeated "Translation in <output language>" marker, if any.
-    marker = f'Translation in {output_lang}'
-    search_char = generated.find(marker)
-    if search_char != -1:
-        generated = generated[search_char + len(marker):]
-    # Stop at the first newline (the original stop token was lost; newline is an assumption).
-    return generated.split('\n')[0].strip()
-
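-# Illustrative call (model output will vary): translate('English', 'French', 'Hello!')
-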
-demo = gr.Blocks()
-
-with demo:
- gr.Markdown("Translation with Bloom ")
- gr.Markdown("Translation with bloom. ")
-
- with gr.Row():
- input_lang = gr.Dropdown(LANGUAGES, value='English', label='Select input language')
- output_lang = gr.Dropdown(LANGUAGES, value='French', label='Select output language')
-
- input_text = gr.Textbox(label="Input", lines=6)
- output_text = gr.Textbox(lines=6, label="Output")
-
- buton = gr.Button("translate")
- buton.click(translate, inputs=[input_lang, output_lang, input_text], outputs=output_text)
-
-demo.launch(enable_queue=True, debug=True)
diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/scripts/txt2img.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/scripts/txt2img.py
deleted file mode 100644
index bc3864043f676c829b623f444f689f6fe7e4824b..0000000000000000000000000000000000000000
--- a/spaces/Kayson/InstructDiffusion/stable_diffusion/scripts/txt2img.py
+++ /dev/null
@@ -1,352 +0,0 @@
-import argparse, os, sys, glob
-import cv2
-import torch
-import numpy as np
-from omegaconf import OmegaConf
-from PIL import Image
-from tqdm import tqdm, trange
-from imwatermark import WatermarkEncoder
-from itertools import islice
-from einops import rearrange
-from torchvision.utils import make_grid
-import time
-from pytorch_lightning import seed_everything
-from torch import autocast
-from contextlib import contextmanager, nullcontext
-
-from ldm.util import instantiate_from_config
-from ldm.models.diffusion.ddim import DDIMSampler
-from ldm.models.diffusion.plms import PLMSSampler
-from ldm.models.diffusion.dpm_solver import DPMSolverSampler
-
-from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
-from transformers import AutoFeatureExtractor
-
-
-# load safety model
-safety_model_id = "CompVis/stable-diffusion-safety-checker"
-safety_feature_extractor = AutoFeatureExtractor.from_pretrained(safety_model_id)
-safety_checker = StableDiffusionSafetyChecker.from_pretrained(safety_model_id)
-
-
-def chunk(it, size):
- it = iter(it)
- return iter(lambda: tuple(islice(it, size)), ())
-
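-# e.g. list(chunk(range(5), 2)) -> [(0, 1), (2, 3), (4,)]
-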
-
-def numpy_to_pil(images):
- """
- Convert a numpy image or a batch of images to a PIL image.
- """
- if images.ndim == 3:
- images = images[None, ...]
- images = (images * 255).round().astype("uint8")
- pil_images = [Image.fromarray(image) for image in images]
-
- return pil_images
-
-
-def load_model_from_config(config, ckpt, verbose=False):
- print(f"Loading model from {ckpt}")
- pl_sd = torch.load(ckpt, map_location="cpu")
- if "global_step" in pl_sd:
- print(f"Global Step: {pl_sd['global_step']}")
- sd = pl_sd["state_dict"]
- model = instantiate_from_config(config.model)
- m, u = model.load_state_dict(sd, strict=False)
- if len(m) > 0 and verbose:
- print("missing keys:")
- print(m)
- if len(u) > 0 and verbose:
- print("unexpected keys:")
- print(u)
-
- model.cuda()
- model.eval()
- return model
-
-
-def put_watermark(img, wm_encoder=None):
- if wm_encoder is not None:
- img = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
- img = wm_encoder.encode(img, 'dwtDct')
- img = Image.fromarray(img[:, :, ::-1])
- return img
-
-
-def load_replacement(x):
- try:
- hwc = x.shape
- y = Image.open("assets/rick.jpeg").convert("RGB").resize((hwc[1], hwc[0]))
- y = (np.array(y)/255.0).astype(x.dtype)
- assert y.shape == x.shape
- return y
- except Exception:
- return x
-
-
-def check_safety(x_image):
- safety_checker_input = safety_feature_extractor(numpy_to_pil(x_image), return_tensors="pt")
- x_checked_image, has_nsfw_concept = safety_checker(images=x_image, clip_input=safety_checker_input.pixel_values)
- assert x_checked_image.shape[0] == len(has_nsfw_concept)
- for i in range(len(has_nsfw_concept)):
- if has_nsfw_concept[i]:
- x_checked_image[i] = load_replacement(x_checked_image[i])
- return x_checked_image, has_nsfw_concept
-
-
-def main():
- parser = argparse.ArgumentParser()
-
- parser.add_argument(
- "--prompt",
- type=str,
- nargs="?",
- default="a painting of a virus monster playing guitar",
- help="the prompt to render"
- )
- parser.add_argument(
- "--outdir",
- type=str,
- nargs="?",
- help="dir to write results to",
- default="outputs/txt2img-samples"
- )
- parser.add_argument(
- "--skip_grid",
- action='store_true',
- help="do not save a grid, only individual samples. Helpful when evaluating lots of samples",
- )
- parser.add_argument(
- "--skip_save",
- action='store_true',
- help="do not save individual samples. For speed measurements.",
- )
- parser.add_argument(
- "--ddim_steps",
- type=int,
- default=50,
- help="number of ddim sampling steps",
- )
- parser.add_argument(
- "--plms",
- action='store_true',
- help="use plms sampling",
- )
- parser.add_argument(
- "--dpm_solver",
- action='store_true',
- help="use dpm_solver sampling",
- )
- parser.add_argument(
- "--laion400m",
- action='store_true',
- help="uses the LAION400M model",
- )
- parser.add_argument(
- "--fixed_code",
- action='store_true',
- help="if enabled, uses the same starting code across samples ",
- )
- parser.add_argument(
- "--ddim_eta",
- type=float,
- default=0.0,
- help="ddim eta (eta=0.0 corresponds to deterministic sampling",
- )
- parser.add_argument(
- "--n_iter",
- type=int,
- default=2,
- help="sample this often",
- )
- parser.add_argument(
- "--H",
- type=int,
- default=512,
- help="image height, in pixel space",
- )
- parser.add_argument(
- "--W",
- type=int,
- default=512,
- help="image width, in pixel space",
- )
- parser.add_argument(
- "--C",
- type=int,
- default=4,
- help="latent channels",
- )
- parser.add_argument(
- "--f",
- type=int,
- default=8,
- help="downsampling factor",
- )
- parser.add_argument(
- "--n_samples",
- type=int,
- default=3,
- help="how many samples to produce for each given prompt. A.k.a. batch size",
- )
- parser.add_argument(
- "--n_rows",
- type=int,
- default=0,
- help="rows in the grid (default: n_samples)",
- )
- parser.add_argument(
- "--scale",
- type=float,
- default=7.5,
- help="unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))",
- )
- parser.add_argument(
- "--from-file",
- type=str,
- help="if specified, load prompts from this file",
- )
- parser.add_argument(
- "--config",
- type=str,
- default="configs/stable-diffusion/v1-inference.yaml",
- help="path to config which constructs model",
- )
- parser.add_argument(
- "--ckpt",
- type=str,
- default="models/ldm/stable-diffusion-v1/model.ckpt",
- help="path to checkpoint of model",
- )
- parser.add_argument(
- "--seed",
- type=int,
- default=42,
- help="the seed (for reproducible sampling)",
- )
- parser.add_argument(
- "--precision",
- type=str,
- help="evaluate at this precision",
- choices=["full", "autocast"],
- default="autocast"
- )
- opt = parser.parse_args()
-
- if opt.laion400m:
- print("Falling back to LAION 400M model...")
- opt.config = "configs/latent-diffusion/txt2img-1p4B-eval.yaml"
- opt.ckpt = "models/ldm/text2img-large/model.ckpt"
- opt.outdir = "outputs/txt2img-samples-laion400m"
-
- seed_everything(opt.seed)
-
- config = OmegaConf.load(f"{opt.config}")
- model = load_model_from_config(config, f"{opt.ckpt}")
-
- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model = model.to(device)
-
- if opt.dpm_solver:
- sampler = DPMSolverSampler(model)
- elif opt.plms:
- sampler = PLMSSampler(model)
- else:
- sampler = DDIMSampler(model)
-
- os.makedirs(opt.outdir, exist_ok=True)
- outpath = opt.outdir
-
- print("Creating invisible watermark encoder (see https://github.com/ShieldMnt/invisible-watermark)...")
- wm = "StableDiffusionV1"
- wm_encoder = WatermarkEncoder()
- wm_encoder.set_watermark('bytes', wm.encode('utf-8'))
-
- batch_size = opt.n_samples
- n_rows = opt.n_rows if opt.n_rows > 0 else batch_size
- if not opt.from_file:
- prompt = opt.prompt
- assert prompt is not None
- data = [batch_size * [prompt]]
-
- else:
- print(f"reading prompts from {opt.from_file}")
- with open(opt.from_file, "r") as f:
- data = f.read().splitlines()
- data = list(chunk(data, batch_size))
-
- sample_path = os.path.join(outpath, "samples")
- os.makedirs(sample_path, exist_ok=True)
- base_count = len(os.listdir(sample_path))
- grid_count = len(os.listdir(outpath)) - 1
-
- start_code = None
- if opt.fixed_code:
- start_code = torch.randn([opt.n_samples, opt.C, opt.H // opt.f, opt.W // opt.f], device=device)
-
- precision_scope = autocast if opt.precision=="autocast" else nullcontext
- with torch.no_grad():
- with precision_scope("cuda"):
- with model.ema_scope():
- tic = time.time()
- all_samples = list()
- for n in trange(opt.n_iter, desc="Sampling"):
- for prompts in tqdm(data, desc="data"):
- uc = None
- if opt.scale != 1.0:
- uc = model.get_learned_conditioning(batch_size * [""])
- if isinstance(prompts, tuple):
- prompts = list(prompts)
- c = model.get_learned_conditioning(prompts)
- shape = [opt.C, opt.H // opt.f, opt.W // opt.f]
- samples_ddim, _ = sampler.sample(S=opt.ddim_steps,
- conditioning=c,
- batch_size=opt.n_samples,
- shape=shape,
- verbose=False,
- unconditional_guidance_scale=opt.scale,
- unconditional_conditioning=uc,
- eta=opt.ddim_eta,
- x_T=start_code)
-
- x_samples_ddim = model.decode_first_stage(samples_ddim)
- x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
- x_samples_ddim = x_samples_ddim.cpu().permute(0, 2, 3, 1).numpy()
-
- x_checked_image, has_nsfw_concept = check_safety(x_samples_ddim)
-
- x_checked_image_torch = torch.from_numpy(x_checked_image).permute(0, 3, 1, 2)
-
- if not opt.skip_save:
- for x_sample in x_checked_image_torch:
- x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c')
- img = Image.fromarray(x_sample.astype(np.uint8))
- img = put_watermark(img, wm_encoder)
- img.save(os.path.join(sample_path, f"{base_count:05}.png"))
- base_count += 1
-
- if not opt.skip_grid:
- all_samples.append(x_checked_image_torch)
-
- if not opt.skip_grid:
- # additionally, save as grid
- grid = torch.stack(all_samples, 0)
- grid = rearrange(grid, 'n b c h w -> (n b) c h w')
- grid = make_grid(grid, nrow=n_rows)
-
- # to image
- grid = 255. * rearrange(grid, 'c h w -> h w c').cpu().numpy()
- img = Image.fromarray(grid.astype(np.uint8))
- img = put_watermark(img, wm_encoder)
- img.save(os.path.join(outpath, f'grid-{grid_count:04}.png'))
- grid_count += 1
-
- toc = time.time()
-
- print(f"Your samples are ready and waiting for you here: \n{outpath} \n"
- f" \nEnjoy.")
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Kevin676/Telephone-Interviewing_PpaddleSpeech-TTS/app.py b/spaces/Kevin676/Telephone-Interviewing_PpaddleSpeech-TTS/app.py
deleted file mode 100644
index b17f4af589a7ab9b2a24f048f939a0783ecee8a9..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Telephone-Interviewing_PpaddleSpeech-TTS/app.py
+++ /dev/null
@@ -1,123 +0,0 @@
-import gradio as gr
-import os
-
-os.system('pip install paddlespeech')
-os.system('pip install paddlepaddle')
-
-from transformers import AutoModel, AutoTokenizer
-from TTS.api import TTS
-
-tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False, gpu=True)
-
-tts1 = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False, gpu=True)
-
-import torch
-import torchaudio
-from speechbrain.pretrained import SpectralMaskEnhancement
-
-enhance_model = SpectralMaskEnhancement.from_hparams(
-source="speechbrain/metricgan-plus-voicebank",
-savedir="pretrained_models/metricgan-plus-voicebank",
-run_opts={"device":"cuda"},
-)
-
-tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
-model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
-model = model.eval()
-
-def inference(text):
- os.system("paddlespeech tts --input '"+text+"' --output output.wav")
- return "output.wav"
-
-def predict(input, history=None):
- if history is None:
- history = []
- response, history = model.chat(tokenizer, input, history)
-
- return history, history, response
-
-def chinese(text_cn, upload1, VoiceMicrophone1):
-
- if upload1 is not None:
-
- tts.voice_conversion_to_file(source_wav=inference(text_cn), target_wav=upload1, file_path="output0.wav")
-
- else:
- tts.voice_conversion_to_file(source_wav=inference(text_cn), target_wav=VoiceMicrophone1, file_path="output0.wav")
-
-
- noisy = enhance_model.load_audio(
- "output0.wav"
- ).unsqueeze(0)
-
- enhanced = enhance_model.enhance_batch(noisy, lengths=torch.tensor([1.]))
- torchaudio.save("enhanced.wav", enhanced.cpu(), 16000)
-
- return "enhanced.wav"
-
-def english(text_en, upload, VoiceMicrophone):
- if upload is not None:
- tts1.tts_to_file(text_en.strip(), speaker_wav = upload, language="en", file_path="output.wav")
-
- else:
- tts1.tts_to_file(text_en.strip(), speaker_wav = VoiceMicrophone, language="en", file_path="output.wav")
-
- noisy = enhance_model.load_audio(
- "output.wav"
- ).unsqueeze(0)
-
- enhanced = enhance_model.enhance_batch(noisy, lengths=torch.tensor([1.]))
- torchaudio.save("enhanced.wav", enhanced.cpu(), 16000)
-
- return "enhanced.wav"
-
-with gr.Blocks() as demo:
- gr.Markdown(
- """ # 🥳💬💕 - TalktoAI,随时随地,谈天说地!
-
- ### 🤖 - 让有人文关怀的AI造福每一个人!AI向善,文明璀璨!TalktoAI - Enable the future!
-
- """
- )
- state = gr.State([])
- chatbot = gr.Chatbot([], elem_id="chatbot").style(height=300)
-    res = gr.Textbox(lines=1, placeholder="The latest answer appears here", show_label = False).style(container=False)
- with gr.Row():
-# with gr.Column(scale=4):
- txt = gr.Textbox(label = "说点什么吧(中英皆可)", lines=1)
-# with gr.Column(scale=1):
- button = gr.Button("开始对话吧")
- txt.submit(predict, [txt, state], [chatbot, state, res])
- button.click(predict, [txt, state], [chatbot, state, res])
-
- with gr.Row().style(mobile_collapse=False, equal_height=True):
- inp3 = res
- inp4 = gr.Audio(source="upload", label = "请上传您喜欢的声音(wav/mp3文件);长语音(90s左右)效果更好", type="filepath")
- inp5 = gr.Audio(source="microphone", type="filepath", label = '请用麦克风上传您喜欢的声音,与文件上传二选一即可')
- btn1 = gr.Button("用喜欢的声音听一听吧(中文)")
-
- btn2 = gr.Button("用喜欢的声音听一听吧(英文)")
- with gr.Row():
- out1 = gr.Audio(label="为您合成的专属声音(中文)")
- out2 = gr.Audio(label="为您合成的专属声音(英文)")
- btn1.click(chinese, [inp3, inp4, inp5], [out1])
- btn2.click(english, [inp3, inp4, inp5], [out2])
-
- gr.Markdown(
- """ ### 注意❗:请不要输入或生成会对个人以及组织造成侵害的内容,此程序仅供科研、学习及娱乐使用。用户输入或生成的内容与程序开发者无关,请自觉合法合规使用,违反者一切后果自负。
-
- ### Model by [ChatGLM-6B](https://huggingface.co/THUDM/chatglm-6b). Thanks to [THUDM](https://github.com/THUDM). Please follow me on [Bilibili](https://space.bilibili.com/501495851?spm_id_from=333.1007.0.0).
-
- """
- )
-
- gr.HTML('''
-
- ''')
-
-demo.queue().launch(show_error=True)
diff --git a/spaces/KevinQHLin/UniVTG/model/position_encoding.py b/spaces/KevinQHLin/UniVTG/model/position_encoding.py
deleted file mode 100644
index 7b9bad0b7867faede6179cd27e0a7c859137dcb8..0000000000000000000000000000000000000000
--- a/spaces/KevinQHLin/UniVTG/model/position_encoding.py
+++ /dev/null
@@ -1,126 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-Various positional encodings for the transformer.
-"""
-import math
-import torch
-from torch import nn
-import numpy as np
-
-def PositionalEncoding(n_position, d_hid):
- def get_position_angle_vec(position, d_hid):
- return [position / np.power(10000, 2 * (hid_j // 2) / d_hid) for hid_j in range(d_hid)]
-
- sinusoid_table = np.array([get_position_angle_vec(pos_i, d_hid) for pos_i in range(n_position)])
- sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2]) # dim 2i
- sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2]) # dim 2i+1
-    return torch.FloatTensor(sinusoid_table)  # shape: (n_position, d_hid)
-
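-# e.g. PositionalEncoding(100, 512).shape -> torch.Size([100, 512])
-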
-class TrainablePositionalEncoding(nn.Module):
- """Construct the embeddings from word, position and token_type embeddings.
- """
- def __init__(self, max_position_embeddings, hidden_size, dropout=0.1):
- super(TrainablePositionalEncoding, self).__init__()
- self.position_embeddings = nn.Embedding(max_position_embeddings, hidden_size)
- self.LayerNorm = nn.LayerNorm(hidden_size)
- self.dropout = nn.Dropout(dropout)
-
- def forward(self, input_feat):
- """
- Args:
- input_feat: (N, L, D)
- """
- bsz, seq_length = input_feat.shape[:2]
- position_ids = torch.arange(seq_length, dtype=torch.long, device=input_feat.device)
- position_ids = position_ids.unsqueeze(0).repeat(bsz, 1) # (N, L)
-
- position_embeddings = self.position_embeddings(position_ids)
-
- embeddings = self.LayerNorm(input_feat + position_embeddings)
- embeddings = self.dropout(embeddings)
- return embeddings
-
-
-class PositionEmbeddingSine(nn.Module):
- """
- This is a more standard version of the position embedding, very similar to the one
- used by the Attention is all you need paper, generalized to work on images. (To 1D sequences)
- """
- def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):
- super().__init__()
- self.num_pos_feats = num_pos_feats
- self.temperature = temperature
- self.normalize = normalize
- if scale is not None and normalize is False:
- raise ValueError("normalize should be True if scale is passed")
- if scale is None:
- scale = 2 * math.pi
- self.scale = scale
-
- def forward(self, x, mask):
- """
- Args:
- x: torch.tensor, (batch_size, L, d)
- mask: torch.tensor, (batch_size, L), with 1 as valid
-
- Returns:
-
- """
- assert mask is not None
- x_embed = mask.cumsum(1, dtype=torch.float32) # (bsz, L)
- if self.normalize:
- eps = 1e-6
- x_embed = x_embed / (x_embed[:, -1:] + eps) * self.scale
-
- dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
-        # torch.div(..., 2).int() replaces the deprecated tensor floor division dim_t // 2
-        dim_t = self.temperature ** (2 * torch.div(dim_t, 2).int() / self.num_pos_feats)
-
-        pos_x = x_embed[:, :, None] / dim_t  # (bsz, L, num_pos_feats)
-        # interleave sin/cos over pairs of dims; the feature count stays num_pos_feats
-        pos_x = torch.stack((pos_x[:, :, 0::2].sin(), pos_x[:, :, 1::2].cos()), dim=3).flatten(2)  # (bsz, L, num_pos_feats)
-        return pos_x
-
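-
-# Illustrative shape check (_demo_sine_embedding is a hypothetical helper).
-def _demo_sine_embedding():
-    x = torch.zeros(2, 10, 256)   # (batch, length, feature dim)
-    mask = torch.ones(2, 10)      # 1 marks valid positions
-    pos = PositionEmbeddingSine(num_pos_feats=256, normalize=True)(x, mask)
-    print(pos.shape)  # torch.Size([2, 10, 256])
-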
-
-class PositionEmbeddingLearned(nn.Module):
- """
- Absolute pos embedding, learned.
- """
- def __init__(self, num_pos_feats=256):
- super().__init__()
- self.row_embed = nn.Embedding(50, num_pos_feats)
- self.col_embed = nn.Embedding(50, num_pos_feats)
- self.reset_parameters()
-
- def reset_parameters(self):
- nn.init.uniform_(self.row_embed.weight)
- nn.init.uniform_(self.col_embed.weight)
-
- def forward(self, x, mask):
- h, w = x.shape[-2:]
- i = torch.arange(w, device=x.device)
- j = torch.arange(h, device=x.device)
- x_emb = self.col_embed(i)
- y_emb = self.row_embed(j)
- pos = torch.cat([
- x_emb.unsqueeze(0).repeat(h, 1, 1),
- y_emb.unsqueeze(1).repeat(1, w, 1),
- ], dim=-1).permute(2, 0, 1).unsqueeze(0).repeat(x.shape[0], 1, 1, 1)
- return pos
-
-
-def build_position_encoding(args):
- N_steps = args.hidden_dim
- if args.position_embedding in ('v2', 'sine'):
- # TODO find a better way of exposing other arguments
- position_embedding = PositionEmbeddingSine(N_steps, normalize=True)
- # elif args.position_embedding in ('v3', 'learned'):
- # position_embedding = PositionEmbeddingLearned(N_steps)
- else:
- raise ValueError(f"not supported {args.position_embedding}")
-
- txt_pos_embed = TrainablePositionalEncoding(
- max_position_embeddings=args.max_q_l,
- hidden_size=args.hidden_dim, dropout=args.input_dropout)
- return position_embedding, txt_pos_embed
diff --git a/spaces/KyanChen/RSPrompter/mmdet/visualization/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/visualization/__init__.py
deleted file mode 100644
index 71881ac1ee3b77061bc9f7d9290ad536d5909690..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/visualization/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .local_visualizer import DetLocalVisualizer
-from .palette import get_palette, jitter_color, palette_val
-
-__all__ = ['palette_val', 'get_palette', 'DetLocalVisualizer', 'jitter_color']
diff --git a/spaces/Lamai/LAMAIGPT/scripts/check_requirements.py b/spaces/Lamai/LAMAIGPT/scripts/check_requirements.py
deleted file mode 100644
index e4eab024a6280c0d54110c69b2e03de639325fa6..0000000000000000000000000000000000000000
--- a/spaces/Lamai/LAMAIGPT/scripts/check_requirements.py
+++ /dev/null
@@ -1,32 +0,0 @@
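-"""Check that every package listed in a requirements file is installed.
-
-Usage: python check_requirements.py requirements.txt
-"""
-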
-import re
-import sys
-
-import pkg_resources
-
-
-def main():
- requirements_file = sys.argv[1]
- with open(requirements_file, "r") as f:
- required_packages = [
- line.strip().split("#")[0].strip() for line in f.readlines()
- ]
-
- installed_packages = [package.key for package in pkg_resources.working_set]
-
- missing_packages = []
- for package in required_packages:
- if not package: # Skip empty lines
- continue
- package_name = package.strip().split("==")[0]
- if package_name.lower() not in installed_packages:
- missing_packages.append(package_name)
-
- if missing_packages:
- print("Missing packages:")
- print(", ".join(missing_packages))
- sys.exit(1)
- else:
- print("All packages are installed.")
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/my_utils.py b/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/my_utils.py
deleted file mode 100644
index a5258394b8ae5385daa665ab6ba6380507d4798a..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/my_utils.py
+++ /dev/null
@@ -1,21 +0,0 @@
-import ffmpeg
-import numpy as np
-
-
-def load_audio(file, sr):
- try:
- # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26
- # This launches a subprocess to decode audio while down-mixing and resampling as necessary.
- # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed.
- file = (
- file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- ) # 防止小白拷路径头尾带了空格和"和回车
- out, _ = (
- ffmpeg.input(file, threads=0)
- .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr)
- .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True)
- )
- except Exception as e:
- raise RuntimeError(f"Failed to load audio: {e}")
-
- return np.frombuffer(out, np.float32).flatten()
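-
-# Illustrative usage (hypothetical path): samples = load_audio("input.wav", 16000)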
diff --git a/spaces/LaynzKunz/Advanced-RVC-Inference/lib/infer_pack/modules.py b/spaces/LaynzKunz/Advanced-RVC-Inference/lib/infer_pack/modules.py
deleted file mode 100644
index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Advanced-RVC-Inference/lib/infer_pack/modules.py
+++ /dev/null
@@ -1,522 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from lib.infer_pack.transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
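-
-# Illustrative shape check (_demo_ddsconv is a hypothetical helper).
-def _demo_ddsconv():
-    m = DDSConv(channels=64, kernel_size=3, n_layers=2)
-    x = torch.randn(1, 64, 100)
-    x_mask = torch.ones(1, 1, 100)
-    print(m(x, x_mask).shape)  # torch.Size([1, 64, 100])
-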
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # the last layer outputs only skip channels; no residual half is needed
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
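- """Pointwise log flow: y = log(x); the log-determinant is -sum(y) over masked positions."""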
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
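- """Flips the channel order; volume-preserving, so the log-determinant is zero."""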
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
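- """Per-channel affine flow: y = m + exp(logs) * x."""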
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
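- """Affine coupling flow: the first half of the channels parameterizes, via a WN network, a shift and (unless mean_only) a scale applied to the second half."""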
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
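- """Coupling flow that transforms the second half of the channels with a piecewise rational-quadratic spline predicted from the first half."""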
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2)  # [b, c*(3*num_bins-1), t] -> [b, c, t, 3*num_bins-1]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/LinkSoul/LLaSM/style.css b/spaces/LinkSoul/LLaSM/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/LinkSoul/LLaSM/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/drrg/README.md b/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/drrg/README.md
deleted file mode 100644
index 2f2beb1b757ccbf2dd2e41a70769d963b098264d..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/drrg/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# DRRG
-
-> [Deep relational reasoning graph network for arbitrary shape text detection](https://arxiv.org/abs/2003.07493)
-
-
-
-## Abstract
-
-Arbitrary shape text detection is a challenging task due to the high variety and complexity of scene texts. In this paper, we propose a novel unified relational reasoning graph network for arbitrary shape text detection. In our method, an innovative local graph bridges a text proposal model via Convolutional Neural Network (CNN) and a deep relational reasoning network via Graph Convolutional Network (GCN), making our network end-to-end trainable. To be concrete, every text instance will be divided into a series of small rectangular components, and the geometry attributes (e.g., height, width, and orientation) of the small components will be estimated by our text proposal model. Given the geometry attributes, the local graph construction model can roughly establish linkages between different text components. For further reasoning and deducing the likelihood of linkages between the component and its neighbors, we adopt a graph-based network to perform deep relational reasoning on local graphs. Experiments on publicly available datasets demonstrate the state-of-the-art performance of our method.
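-
-In short: a CNN-based proposal model predicts small rectangular components with geometry attributes, nearby components are grouped into local graphs, and a GCN then scores which links are genuine so components can be merged into whole text instances. As a rough, hypothetical sketch of the local-graph construction step only (the attribute layout, `k`, and the function name are illustrative, not the mmocr implementation):
-
-```python
-import numpy as np
-
-def build_local_graphs(components, k=8):
-    """components: (N, 5) array of [cx, cy, h, w, theta] per text component
-    (illustrative attributes). Returns, for each pivot component, the indices
-    of its k nearest neighbors -- the nodes of one local graph."""
-    centers = components[:, :2]
-    # pairwise Euclidean distances between component centers
-    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
-    np.fill_diagonal(d, np.inf)  # a component is not its own neighbor
-    return np.argsort(d, axis=1)[:, :k]
-```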
-
-
-
-
-
-## Results and models
-
-### CTW1500
-
-| Method | Pretrained Model | Training set | Test set | #epochs | Test size | Recall | Precision | Hmean | Download |
-| :-------------------------------------------------: | :--------------: | :-----------: | :----------: | :-----: | :-------: | :-----------: | :-----------: | :-----------: | :---------------------------------------------------: |
-| [DRRG](configs/textdet/drrg/drrg_r50_fpn_unet_1200e_ctw1500.py) | ImageNet | CTW1500 Train | CTW1500 Test | 1200 | 640 | 0.822 (0.791) | 0.858 (0.862) | 0.840 (0.825) | [model](https://download.openmmlab.com/mmocr/textdet/drrg/drrg_r50_fpn_unet_1200e_ctw1500_20211022-fb30b001.pth) \\ [log](https://download.openmmlab.com/mmocr/textdet/drrg/20210511_234719.log) |
-
-```{note}
-We've upgraded our IoU backend from `Polygon3` to `shapely`. There are some performance differences for some models due to the backends' different logic for handling invalid polygons (more info [here](https://github.com/open-mmlab/mmocr/issues/465)). **New evaluation results are presented in brackets** and new logs will be uploaded soon. A minimal sketch of the new backend's polygon IoU is given after this note.
-```
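-
-As a minimal sketch of the polygon IoU the new `shapely` backend computes (an illustrative helper, not the actual mmocr evaluation code):
-
-```python
-from shapely.geometry import Polygon
-
-def poly_iou(pts_a, pts_b):
-    """pts_*: list of (x, y) vertices of a detection / ground-truth polygon."""
-    a, b = Polygon(pts_a), Polygon(pts_b)
-    # buffer(0) is the usual shapely trick for repairing self-intersecting
-    # polygons; Polygon3 handled such inputs differently, hence the metric shift.
-    if not a.is_valid:
-        a = a.buffer(0)
-    if not b.is_valid:
-        b = b.buffer(0)
-    union = a.union(b).area
-    return a.intersection(b).area / union if union > 0 else 0.0
-```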
-
-## Citation
-
-```bibtex
-@article{zhang2020drrg,
- title={Deep relational reasoning graph network for arbitrary shape text detection},
- author={Zhang, Shi-Xue and Zhu, Xiaobin and Hou, Jie-Bo and Liu, Chang and Yang, Chun and Wang, Hongfa and Yin, Xu-Cheng},
- booktitle={CVPR},
- pages={9699-9708},
- year={2020}
-}
-```
diff --git a/spaces/LuxOAI/GPT4-30b/app.py b/spaces/LuxOAI/GPT4-30b/app.py
deleted file mode 100644
index 19a5ea60582f2a7c07c3cc6f8a47718f4970f785..0000000000000000000000000000000000000000
--- a/spaces/LuxOAI/GPT4-30b/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/MetaIX/GPT4-X-Alpasta-30b").launch()
\ No newline at end of file
diff --git a/spaces/MWilinski/bot/data/upload_csv_dataset.py b/spaces/MWilinski/bot/data/upload_csv_dataset.py
deleted file mode 100644
index c686b001e5d06c036508b0b8344652ef624eabfb..0000000000000000000000000000000000000000
--- a/spaces/MWilinski/bot/data/upload_csv_dataset.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import sys
-import pandas as pd
-from datasets import Dataset, DatasetDict
-from sklearn.model_selection import train_test_split
-
-
-
-def main():
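- # Usage: python upload_csv_dataset.py <dataset_name> [test_size]
- # Reads datasets/<dataset_name>.csv, splits it into train/test, and pushes the DatasetDict to the Hub.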
- dataset_name = sys.argv[1]
- test_size = float(sys.argv[2]) if len(sys.argv) > 2 else 0.1
- print(f'dataset: {dataset_name}, test size: {test_size}')
-
- filename = f'datasets/{dataset_name}.csv'
- df = pd.read_csv(filename)
- dataset = Dataset.from_pandas(df)
- train_dataset, test_dataset = train_test_split(dataset, test_size=test_size)
- train_dataset = Dataset.from_dict(train_dataset)
- test_dataset = Dataset.from_dict(test_dataset)
- dataset_dict = DatasetDict({'train': train_dataset, 'test': test_dataset})
- dataset_dict.push_to_hub(f'KonradSzafer/{dataset_name}', private=False)
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Mahiruoshi/vits-chatbot/commons.py b/spaces/Mahiruoshi/vits-chatbot/commons.py
deleted file mode 100644
index 2153153f527d94e2abb641ea00c80b518ff6c5bd..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/vits-chatbot/commons.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import math
-import torch
-from torch.nn import functional as F
-import torch.jit
-
-
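-# Make TorchScript a no-op so functions decorated with @torch.jit.script run as plain Python.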
-def script_method(fn, _rcb=None):
- return fn
-
-
-def script(obj, optimize=True, _frames_up=0, _rcb=None):
- return obj
-
-
-torch.jit.script_method = script_method
-torch.jit.script = script
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
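- # WaveNet gated activation: tanh over the first n_channels, sigmoid gate over the rest.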
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
diff --git a/spaces/Manmay/tortoise-tts/tortoise/models/stream_generator.py b/spaces/Manmay/tortoise-tts/tortoise/models/stream_generator.py
deleted file mode 100644
index a8dd07b1229b40daf9360e420130fa7e1b5df261..0000000000000000000000000000000000000000
--- a/spaces/Manmay/tortoise-tts/tortoise/models/stream_generator.py
+++ /dev/null
@@ -1,1057 +0,0 @@
-# Adapted from: https://github.com/LowinLi/transformers-stream-generator
-
-from transformers import (
- GenerationConfig,
- GenerationMixin,
- LogitsProcessorList,
- StoppingCriteriaList,
- DisjunctiveConstraint,
- BeamSearchScorer,
- PhrasalConstraint,
- ConstrainedBeamSearchScorer,
- PreTrainedModel,
-)
-import numpy as np
-import random
-import warnings
-import inspect
-from transformers.generation.utils import GenerateOutput, SampleOutput, logger
-from transformers.generation.stopping_criteria import validate_stopping_criteria
-import torch
-from typing import Callable, List, Optional, Union
-from torch import nn
-import torch.distributed as dist
-import copy
-
-
-def setup_seed(seed):
- if seed == -1:
- return
- torch.manual_seed(seed)
- if torch.cuda.is_available():
- torch.cuda.manual_seed_all(seed)
- np.random.seed(seed)
- random.seed(seed)
- torch.backends.cudnn.deterministic = True
-
-
-class StreamGenerationConfig(GenerationConfig):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
- self.do_stream = kwargs.pop("do_stream", False)
-
-
-class NewGenerationMixin(GenerationMixin):
- @torch.no_grad()
- def generate(
- self,
- inputs: Optional[torch.Tensor] = None,
- generation_config: Optional[StreamGenerationConfig] = None,
- logits_processor: Optional[LogitsProcessorList] = None,
- stopping_criteria: Optional[StoppingCriteriaList] = None,
- prefix_allowed_tokens_fn: Optional[
- Callable[[int, torch.Tensor], List[int]]
- ] = None,
- synced_gpus: Optional[bool] = False,
- seed=0,
- **kwargs,
- ) -> Union[GenerateOutput, torch.LongTensor]:
- r"""
-
- Generates sequences of token ids for models with a language modeling head.
-
-
-
- Most generation-controlling parameters are set in `generation_config` which, if not passed, will be set to the
- model's default generation configuration. You can override any `generation_config` by passing the corresponding
- parameters to generate(), e.g. `.generate(inputs, num_beams=4, do_sample=True)`.
-
- For an overview of generation strategies and code examples, check out the [following
- guide](./generation_strategies).
-
-
-
- Parameters:
- inputs (`torch.Tensor` of varying shape depending on the modality, *optional*):
- The sequence used as a prompt for the generation or as model inputs to the encoder. If `None` the
- method initializes it with `bos_token_id` and a batch size of 1. For decoder-only models `inputs`
- should be in the format of `input_ids`. For encoder-decoder models *inputs* can represent any of
- `input_ids`, `input_values`, `input_features`, or `pixel_values`.
- generation_config (`~generation.GenerationConfig`, *optional*):
- The generation configuration to be used as base parametrization for the generation call. `**kwargs`
- passed to generate matching the attributes of `generation_config` will override them. If
- `generation_config` is not provided, the default will be used, which has the following loading
- priority: 1) from the `generation_config.json` model file, if it exists; 2) from the model
- configuration. Please note that unspecified parameters will inherit [`~generation.GenerationConfig`]'s
- default values, whose documentation should be checked to parameterize generation.
- logits_processor (`LogitsProcessorList`, *optional*):
- Custom logits processors that complement the default logits processors built from arguments and
- generation config. If a logit processor is passed that is already created with the arguments or a
- generation config an error is thrown. This feature is intended for advanced users.
- stopping_criteria (`StoppingCriteriaList`, *optional*):
- Custom stopping criteria that complement the default stopping criteria built from arguments and a
- generation config. If a stopping criteria is passed that is already created with the arguments or a
- generation config an error is thrown. This feature is intended for advanced users.
- prefix_allowed_tokens_fn (`Callable[[int, torch.Tensor], List[int]]`, *optional*):
- If provided, this function constrains the beam search to allowed tokens only at each step. If not
- provided no constraint is applied. This function takes 2 arguments: the batch ID `batch_id` and
- `input_ids`. It has to return a list with the allowed tokens for the next generation step conditioned
- on the batch ID `batch_id` and the previously generated tokens `input_ids`. This argument is useful
- for constrained generation conditioned on the prefix, as described in [Autoregressive Entity
- Retrieval](https://arxiv.org/abs/2010.00904).
- synced_gpus (`bool`, *optional*, defaults to `False`):
- Whether to continue running the while loop until max_length (needed for ZeRO stage 3)
- kwargs:
- Ad hoc parametrization of `generation_config` and/or additional model-specific kwargs that will be
- forwarded to the `forward` function of the model. If the model is an encoder-decoder model, encoder
- specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with *decoder_*.
-
- Return:
- [`~utils.ModelOutput`] or `torch.LongTensor`: A [`~utils.ModelOutput`] (if `return_dict_in_generate=True`
- or when `config.return_dict_in_generate=True`) or a `torch.FloatTensor`.
-
- If the model is *not* an encoder-decoder model (`model.config.is_encoder_decoder=False`), the possible
- [`~utils.ModelOutput`] types are:
-
- - [`~generation.GreedySearchDecoderOnlyOutput`],
- - [`~generation.SampleDecoderOnlyOutput`],
- - [`~generation.BeamSearchDecoderOnlyOutput`],
- - [`~generation.BeamSampleDecoderOnlyOutput`]
-
- If the model is an encoder-decoder model (`model.config.is_encoder_decoder=True`), the possible
- [`~utils.ModelOutput`] types are:
-
- - [`~generation.GreedySearchEncoderDecoderOutput`],
- - [`~generation.SampleEncoderDecoderOutput`],
- - [`~generation.BeamSearchEncoderDecoderOutput`],
- - [`~generation.BeamSampleEncoderDecoderOutput`]
- """
- setup_seed(seed)
- # 1. Handle `generation_config` and kwargs that might update it, and validate the `.generate()` call
- self._validate_model_class()
-
- # priority: `generation_config` argument > `model.generation_config` (the default generation config)
- if generation_config is None:
- # legacy: users may modify the model configuration to control generation -- update the generation config
- # model attribute accordingly, if it was created from the model config
- if self.generation_config._from_model_config:
- new_generation_config = StreamGenerationConfig.from_model_config(
- self.config
- )
- if new_generation_config != self.generation_config:
- warnings.warn(
- "You have modified the pretrained model configuration to control generation. This is a"
- " deprecated strategy to control generation and will be removed soon, in a future version."
- " Please use a generation configuration file (see"
- " https://huggingface.co/docs/transformers/main_classes/text_generation)"
- )
- self.generation_config = new_generation_config
- generation_config = self.generation_config
-
- generation_config = copy.deepcopy(generation_config)
- model_kwargs = generation_config.update(
- **kwargs
- ) # All unused kwargs must be model kwargs
- # self._validate_model_kwargs(model_kwargs.copy())
-
- # 2. Set generation parameters if not already defined
- logits_processor = (
- logits_processor if logits_processor is not None else LogitsProcessorList()
- )
- stopping_criteria = (
- stopping_criteria
- if stopping_criteria is not None
- else StoppingCriteriaList()
- )
-
- if (
- generation_config.pad_token_id is None
- and generation_config.eos_token_id is not None
- ):
- if model_kwargs.get("attention_mask", None) is None:
- logger.warning(
- "The attention mask and the pad token id were not set. As a consequence, you may observe "
- "unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results."
- )
- eos_token_id = generation_config.eos_token_id
- if isinstance(eos_token_id, list):
- eos_token_id = eos_token_id[0]
- logger.warning(
- f"Setting `pad_token_id` to `eos_token_id`:{eos_token_id} for open-end generation."
- )
- generation_config.pad_token_id = eos_token_id
-
- # 3. Define model inputs
- # inputs_tensor has to be defined
- # model_input_name is defined if model-specific keyword input is passed
- # otherwise model_input_name is None
- # all model-specific keyword inputs are removed from `model_kwargs`
- inputs_tensor, model_input_name, model_kwargs = self._prepare_model_inputs(
- inputs, generation_config.bos_token_id, model_kwargs
- )
- batch_size = inputs_tensor.shape[0]
-
- # 4. Define other model kwargs
- model_kwargs["output_attentions"] = generation_config.output_attentions
- model_kwargs["output_hidden_states"] = generation_config.output_hidden_states
- model_kwargs["use_cache"] = generation_config.use_cache
-
- accepts_attention_mask = "attention_mask" in set(
- inspect.signature(self.forward).parameters.keys()
- )
- requires_attention_mask = "encoder_outputs" not in model_kwargs
-
- if (
- model_kwargs.get("attention_mask", None) is None
- and requires_attention_mask
- and accepts_attention_mask
- ):
- model_kwargs[
- "attention_mask"
- ] = self._prepare_attention_mask_for_generation(
- inputs_tensor,
- generation_config.pad_token_id,
- generation_config.eos_token_id,
- )
-
- # decoder-only models should use left-padding for generation
- if not self.config.is_encoder_decoder:
- if (
- generation_config.pad_token_id is not None
- and torch.sum(inputs_tensor[:, -1] == generation_config.pad_token_id)
- > 0
- ):
- logger.warning(
- "A decoder-only architecture is being used, but right-padding was detected! For correct "
- "generation results, please set `padding_side='left'` when initializing the tokenizer."
- )
-
- if self.config.is_encoder_decoder and "encoder_outputs" not in model_kwargs:
- # if model is encoder decoder encoder_outputs are created
- # and added to `model_kwargs`
- model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
- inputs_tensor, model_kwargs, model_input_name
- )
-
- # 5. Prepare `input_ids` which will be used for auto-regressive generation
- if self.config.is_encoder_decoder:
- input_ids = self._prepare_decoder_input_ids_for_generation(
- batch_size,
- decoder_start_token_id=generation_config.decoder_start_token_id,
- bos_token_id=generation_config.bos_token_id,
- model_kwargs=model_kwargs,
- device=inputs_tensor.device,
- )
- else:
- # if decoder-only then inputs_tensor has to be `input_ids`
- input_ids = inputs_tensor
-
- # 6. Prepare `max_length` depending on other stopping criteria.
- input_ids_seq_length = input_ids.shape[-1]
- has_default_max_length = (
- kwargs.get("max_length") is None
- and generation_config.max_length is not None
- )
- if has_default_max_length and generation_config.max_new_tokens is None:
- warnings.warn(
- "Neither `max_length` nor `max_new_tokens` has been set, `max_length` will default to"
- f" {generation_config.max_length} (`generation_config.max_length`). Controlling `max_length` via the"
- " config is deprecated and `max_length` will be removed from the config in v5 of Transformers -- we"
- " recommend using `max_new_tokens` to control the maximum length of the generation.",
- UserWarning,
- )
- elif has_default_max_length and generation_config.max_new_tokens is not None:
- generation_config.max_length = (
- generation_config.max_new_tokens + input_ids_seq_length
- )
- elif (
- not has_default_max_length and generation_config.max_new_tokens is not None
- ):
- raise ValueError(
- "Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a"
- " limit to the generated output length. Remove one of those arguments. Please refer to the"
- " documentation for more information. "
- "(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)"
- )
-
- if (
- generation_config.min_length is not None
- and generation_config.min_length > generation_config.max_length
- ):
- raise ValueError(
- f"Unfeasible length constraints: the minimum length ({generation_config.min_length}) is larger than"
- f" the maximum length ({generation_config.max_length})"
- )
- if input_ids_seq_length >= generation_config.max_length:
- input_ids_string = (
- "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids"
- )
- logger.warning(
- f"Input length of {input_ids_string} is {input_ids_seq_length}, but `max_length` is set to"
- f" {generation_config.max_length}. This can lead to unexpected behavior. You should consider"
- " increasing `max_new_tokens`."
- )
-
- # 7. determine generation mode
- is_constraint_gen_mode = (
- generation_config.constraints is not None
- or generation_config.force_words_ids is not None
- )
-
- is_contrastive_search_gen_mode = (
- generation_config.top_k is not None
- and generation_config.top_k > 1
- and generation_config.do_sample is False
- and generation_config.penalty_alpha is not None
- and generation_config.penalty_alpha > 0
- )
-
- is_greedy_gen_mode = (
- (generation_config.num_beams == 1)
- and (generation_config.num_beam_groups == 1)
- and generation_config.do_sample is False
- and not is_constraint_gen_mode
- and not is_contrastive_search_gen_mode
- )
- is_sample_gen_mode = (
- (generation_config.num_beams == 1)
- and (generation_config.num_beam_groups == 1)
- and generation_config.do_sample is True
- and generation_config.do_stream is False
- and not is_constraint_gen_mode
- and not is_contrastive_search_gen_mode
- )
- is_sample_gen_stream_mode = (
- (generation_config.num_beams == 1)
- and (generation_config.num_beam_groups == 1)
- and generation_config.do_stream is True
- and not is_constraint_gen_mode
- and not is_contrastive_search_gen_mode
- )
- is_beam_gen_mode = (
- (generation_config.num_beams > 1)
- and (generation_config.num_beam_groups == 1)
- and generation_config.do_sample is False
- and not is_constraint_gen_mode
- and not is_contrastive_search_gen_mode
- )
- is_beam_sample_gen_mode = (
- (generation_config.num_beams > 1)
- and (generation_config.num_beam_groups == 1)
- and generation_config.do_sample is True
- and not is_constraint_gen_mode
- and not is_contrastive_search_gen_mode
- )
- is_group_beam_gen_mode = (
- (generation_config.num_beams > 1)
- and (generation_config.num_beam_groups > 1)
- and not is_constraint_gen_mode
- and not is_contrastive_search_gen_mode
- )
-
- if generation_config.num_beam_groups > generation_config.num_beams:
- raise ValueError(
- "`num_beam_groups` has to be smaller or equal to `num_beams`"
- )
- if is_group_beam_gen_mode and generation_config.do_sample is True:
- raise ValueError(
- "Diverse beam search cannot be used in sampling mode. Make sure that `do_sample` is set to `False`."
- )
-
- if self.device.type != input_ids.device.type:
- warnings.warn(
- "You are calling .generate() with the `input_ids` being on a device type different"
- f" than your model's device. `input_ids` is on {input_ids.device.type}, whereas the model"
- f" is on {self.device.type}. You may experience unexpected behaviors or slower generation."
- " Please make sure that you have put `input_ids` to the"
- f" correct device by calling for example input_ids = input_ids.to('{self.device.type}') before"
- " running `.generate()`.",
- UserWarning,
- )
- # 8. prepare distribution pre_processing samplers
- logits_processor = self._get_logits_processor(
- generation_config=generation_config,
- input_ids_seq_length=input_ids_seq_length,
- encoder_input_ids=inputs_tensor,
- prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
- logits_processor=logits_processor,
- )
-
- # 9. prepare stopping criteria
- stopping_criteria = self._get_stopping_criteria(
- generation_config=generation_config, stopping_criteria=stopping_criteria
- )
- # 10. go into different generation modes
- if is_greedy_gen_mode:
- if generation_config.num_return_sequences > 1:
- raise ValueError(
- f"num_return_sequences has to be 1, but is {generation_config.num_return_sequences} when doing"
- " greedy search."
- )
-
- # 11. run greedy search
- return self.greedy_search(
- input_ids,
- logits_processor=logits_processor,
- stopping_criteria=stopping_criteria,
- pad_token_id=generation_config.pad_token_id,
- eos_token_id=generation_config.eos_token_id,
- output_scores=generation_config.output_scores,
- return_dict_in_generate=generation_config.return_dict_in_generate,
- synced_gpus=synced_gpus,
- **model_kwargs,
- )
-
- elif is_contrastive_search_gen_mode:
- if generation_config.num_return_sequences > 1:
- raise ValueError(
- f"num_return_sequences has to be 1, but is {generation_config.num_return_sequences} when doing"
- " contrastive search."
- )
-
- return self.contrastive_search(
- input_ids,
- top_k=generation_config.top_k,
- penalty_alpha=generation_config.penalty_alpha,
- logits_processor=logits_processor,
- stopping_criteria=stopping_criteria,
- pad_token_id=generation_config.pad_token_id,
- eos_token_id=generation_config.eos_token_id,
- output_scores=generation_config.output_scores,
- return_dict_in_generate=generation_config.return_dict_in_generate,
- synced_gpus=synced_gpus,
- **model_kwargs,
- )
-
- elif is_sample_gen_mode:
- # 11. prepare logits warper
- logits_warper = self._get_logits_warper(generation_config)
-
- # 12. expand input_ids with `num_return_sequences` additional sequences per batch
- input_ids, model_kwargs = self._expand_inputs_for_generation(
- input_ids=input_ids,
- expand_size=generation_config.num_return_sequences,
- is_encoder_decoder=self.config.is_encoder_decoder,
- **model_kwargs,
- )
-
- # 13. run sample
- return self.sample(
- input_ids,
- logits_processor=logits_processor,
- logits_warper=logits_warper,
- stopping_criteria=stopping_criteria,
- pad_token_id=generation_config.pad_token_id,
- eos_token_id=generation_config.eos_token_id,
- output_scores=generation_config.output_scores,
- return_dict_in_generate=generation_config.return_dict_in_generate,
- synced_gpus=synced_gpus,
- **model_kwargs,
- )
- elif is_sample_gen_stream_mode:
- # 11. prepare logits warper
- logits_warper = self._get_logits_warper(generation_config)
-
- # 12. expand input_ids with `num_return_sequences` additional sequences per batch
- input_ids, model_kwargs = self._expand_inputs_for_generation(
- input_ids=input_ids,
- expand_size=generation_config.num_return_sequences,
- is_encoder_decoder=self.config.is_encoder_decoder,
- **model_kwargs,
- )
-
- # 13. run sample
- return self.sample_stream(
- input_ids,
- logits_processor=logits_processor,
- logits_warper=logits_warper,
- stopping_criteria=stopping_criteria,
- pad_token_id=generation_config.pad_token_id,
- eos_token_id=generation_config.eos_token_id,
- output_scores=generation_config.output_scores,
- return_dict_in_generate=generation_config.return_dict_in_generate,
- synced_gpus=synced_gpus,
- **model_kwargs,
- )
- elif is_beam_gen_mode:
- if generation_config.num_return_sequences > generation_config.num_beams:
- raise ValueError(
- "`num_return_sequences` has to be smaller or equal to `num_beams`."
- )
-
- if stopping_criteria.max_length is None:
- raise ValueError(
- "`max_length` needs to be a stopping_criteria for now."
- )
-
- # 11. prepare beam search scorer
- beam_scorer = BeamSearchScorer(
- batch_size=batch_size,
- num_beams=generation_config.num_beams,
- device=inputs_tensor.device,
- length_penalty=generation_config.length_penalty,
- do_early_stopping=generation_config.early_stopping,
- num_beam_hyps_to_keep=generation_config.num_return_sequences,
- )
- # 12. interleave input_ids with `num_beams` additional sequences per batch
- input_ids, model_kwargs = self._expand_inputs_for_generation(
- input_ids=input_ids,
- expand_size=generation_config.num_beams,
- is_encoder_decoder=self.config.is_encoder_decoder,
- **model_kwargs,
- )
- # 13. run beam search
- return self.beam_search(
- input_ids,
- beam_scorer,
- logits_processor=logits_processor,
- stopping_criteria=stopping_criteria,
- pad_token_id=generation_config.pad_token_id,
- eos_token_id=generation_config.eos_token_id,
- output_scores=generation_config.output_scores,
- return_dict_in_generate=generation_config.return_dict_in_generate,
- synced_gpus=synced_gpus,
- **model_kwargs,
- )
-
- elif is_beam_sample_gen_mode:
- # 11. prepare logits warper
- logits_warper = self._get_logits_warper(generation_config)
-
- if stopping_criteria.max_length is None:
- raise ValueError(
- "`max_length` needs to be a stopping_criteria for now."
- )
- # 12. prepare beam search scorer
- beam_scorer = BeamSearchScorer(
- batch_size=batch_size * generation_config.num_return_sequences,
- num_beams=generation_config.num_beams,
- device=inputs_tensor.device,
- length_penalty=generation_config.length_penalty,
- do_early_stopping=generation_config.early_stopping,
- )
-
- # 13. interleave input_ids with `num_beams` additional sequences per batch
- input_ids, model_kwargs = self._expand_inputs_for_generation(
- input_ids=input_ids,
- expand_size=generation_config.num_beams
- * generation_config.num_return_sequences,
- is_encoder_decoder=self.config.is_encoder_decoder,
- **model_kwargs,
- )
-
- # 14. run beam sample
- return self.beam_sample(
- input_ids,
- beam_scorer,
- logits_processor=logits_processor,
- logits_warper=logits_warper,
- stopping_criteria=stopping_criteria,
- pad_token_id=generation_config.pad_token_id,
- eos_token_id=generation_config.eos_token_id,
- output_scores=generation_config.output_scores,
- return_dict_in_generate=generation_config.return_dict_in_generate,
- synced_gpus=synced_gpus,
- **model_kwargs,
- )
-
- elif is_group_beam_gen_mode:
- if generation_config.num_return_sequences > generation_config.num_beams:
- raise ValueError(
- "`num_return_sequences` has to be smaller or equal to `num_beams`."
- )
-
- if generation_config.num_beams % generation_config.num_beam_groups != 0:
- raise ValueError(
- "`num_beams` should be divisible by `num_beam_groups` for group beam search."
- )
-
- if stopping_criteria.max_length is None:
- raise ValueError(
- "`max_length` needs to be a stopping_criteria for now."
- )
-
- has_default_typical_p = (
- kwargs.get("typical_p") is None and generation_config.typical_p == 1.0
- )
- if not has_default_typical_p:
- raise ValueError(
- "Decoder argument `typical_p` is not supported with beam groups."
- )
-
- # 11. prepare beam search scorer
- beam_scorer = BeamSearchScorer(
- batch_size=batch_size,
- num_beams=generation_config.num_beams,
- max_length=stopping_criteria.max_length,
- device=inputs_tensor.device,
- length_penalty=generation_config.length_penalty,
- do_early_stopping=generation_config.early_stopping,
- num_beam_hyps_to_keep=generation_config.num_return_sequences,
- num_beam_groups=generation_config.num_beam_groups,
- )
- # 12. interleave input_ids with `num_beams` additional sequences per batch
- input_ids, model_kwargs = self._expand_inputs_for_generation(
- input_ids=input_ids,
- expand_size=generation_config.num_beams,
- is_encoder_decoder=self.config.is_encoder_decoder,
- **model_kwargs,
- )
- # 13. run beam search
- return self.group_beam_search(
- input_ids,
- beam_scorer,
- logits_processor=logits_processor,
- stopping_criteria=stopping_criteria,
- pad_token_id=generation_config.pad_token_id,
- eos_token_id=generation_config.eos_token_id,
- output_scores=generation_config.output_scores,
- return_dict_in_generate=generation_config.return_dict_in_generate,
- synced_gpus=synced_gpus,
- **model_kwargs,
- )
-
- elif is_constraint_gen_mode:
- if generation_config.num_return_sequences > generation_config.num_beams:
- raise ValueError(
- "`num_return_sequences` has to be smaller or equal to `num_beams`."
- )
-
- if stopping_criteria.max_length is None:
- raise ValueError(
- "`max_length` needs to be a stopping_criteria for now."
- )
-
- if generation_config.num_beams <= 1:
- raise ValueError(
- "`num_beams` needs to be greater than 1 for constrained generation."
- )
-
- if generation_config.do_sample:
- raise ValueError(
- "`do_sample` needs to be false for constrained generation."
- )
-
- if (
- generation_config.num_beam_groups is not None
- and generation_config.num_beam_groups > 1
- ):
- raise ValueError(
- "`num_beam_groups` not supported yet for constrained generation."
- )
-
- final_constraints = []
- if generation_config.constraints is not None:
- final_constraints = generation_config.constraints
-
- if generation_config.force_words_ids is not None:
-
- def typeerror():
- raise ValueError(
- "`force_words_ids` has to either be a `List[List[List[int]]]` or `List[List[int]]`"
- f"of positive integers, but is {generation_config.force_words_ids}."
- )
-
- if (
- not isinstance(generation_config.force_words_ids, list)
- or len(generation_config.force_words_ids) == 0
- ):
- typeerror()
-
- for word_ids in generation_config.force_words_ids:
- if isinstance(word_ids[0], list):
- if not isinstance(word_ids, list) or len(word_ids) == 0:
- typeerror()
- if any(
- not isinstance(token_ids, list) for token_ids in word_ids
- ):
- typeerror()
- if any(
- any(
- (not isinstance(token_id, int) or token_id < 0)
- for token_id in token_ids
- )
- for token_ids in word_ids
- ):
- typeerror()
-
- constraint = DisjunctiveConstraint(word_ids)
- else:
- if not isinstance(word_ids, list) or len(word_ids) == 0:
- typeerror()
- if any(
- (not isinstance(token_id, int) or token_id < 0)
- for token_id in word_ids
- ):
- typeerror()
-
- constraint = PhrasalConstraint(word_ids)
- final_constraints.append(constraint)
-
- # 11. prepare beam search scorer
- constrained_beam_scorer = ConstrainedBeamSearchScorer(
- constraints=final_constraints,
- batch_size=batch_size,
- num_beams=generation_config.num_beams,
- device=inputs_tensor.device,
- length_penalty=generation_config.length_penalty,
- do_early_stopping=generation_config.early_stopping,
- num_beam_hyps_to_keep=generation_config.num_return_sequences,
- )
- # 12. interleave input_ids with `num_beams` additional sequences per batch
- input_ids, model_kwargs = self._expand_inputs_for_generation(
- input_ids=input_ids,
- expand_size=generation_config.num_beams,
- is_encoder_decoder=self.config.is_encoder_decoder,
- **model_kwargs,
- )
- # 13. run beam search
- return self.constrained_beam_search(
- input_ids,
- constrained_beam_scorer=constrained_beam_scorer,
- logits_processor=logits_processor,
- stopping_criteria=stopping_criteria,
- pad_token_id=generation_config.pad_token_id,
- eos_token_id=generation_config.eos_token_id,
- output_scores=generation_config.output_scores,
- return_dict_in_generate=generation_config.return_dict_in_generate,
- synced_gpus=synced_gpus,
- **model_kwargs,
- )
-
- @torch.no_grad()
- def sample_stream(
- self,
- input_ids: torch.LongTensor,
- logits_processor: Optional[LogitsProcessorList] = None,
- stopping_criteria: Optional[StoppingCriteriaList] = None,
- logits_warper: Optional[LogitsProcessorList] = None,
- max_length: Optional[int] = None,
- pad_token_id: Optional[int] = None,
- eos_token_id: Optional[Union[int, List[int]]] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- output_scores: Optional[bool] = None,
- return_dict_in_generate: Optional[bool] = None,
- synced_gpus: Optional[bool] = False,
- **model_kwargs,
- ) -> Union[SampleOutput, torch.LongTensor]:
- r"""
- Generates sequences of token ids for models with a language modeling head using **multinomial sampling** and
- can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.
-
-
-
- In most cases, you do not need to call [`~generation.GenerationMixin.sample`] directly. Use generate() instead.
- For an overview of generation strategies and code examples, check the [following
- guide](./generation_strategies).
-
-
-
- Parameters:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- The sequence used as a prompt for the generation.
- logits_processor (`LogitsProcessorList`, *optional*):
- An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`]
- used to modify the prediction scores of the language modeling head applied at each generation step.
- stopping_criteria (`StoppingCriteriaList`, *optional*):
- An instance of [`StoppingCriteriaList`]. List of instances of class derived from [`StoppingCriteria`]
- used to tell if the generation loop should stop.
- logits_warper (`LogitsProcessorList`, *optional*):
- An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsWarper`] used
- to warp the prediction score distribution of the language modeling head applied before multinomial
- sampling at each generation step.
- max_length (`int`, *optional*, defaults to 20):
- **DEPRECATED**. Use `logits_processor` or `stopping_criteria` directly to cap the number of generated
- tokens. The maximum length of the sequence to be generated.
- pad_token_id (`int`, *optional*):
- The id of the *padding* token.
- eos_token_id (`int`, *optional*):
- The id of the *end-of-sequence* token.
- output_attentions (`bool`, *optional*, defaults to `False`):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more details.
- output_hidden_states (`bool`, *optional*, defaults to `False`):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
- for more details.
- output_scores (`bool`, *optional*, defaults to `False`):
- Whether or not to return the prediction scores. See `scores` under returned tensors for more details.
- return_dict_in_generate (`bool`, *optional*, defaults to `False`):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
- synced_gpus (`bool`, *optional*, defaults to `False`):
- Whether to continue running the while loop until max_length (needed for ZeRO stage 3)
- model_kwargs:
- Additional model specific kwargs will be forwarded to the `forward` function of the model. If model is
- an encoder-decoder model the kwargs should include `encoder_outputs`.
-
- Return:
- [`~generation.SampleDecoderOnlyOutput`], [`~generation.SampleEncoderDecoderOutput`] or `torch.LongTensor`:
- A `torch.LongTensor` containing the generated tokens (default behaviour) or a
- [`~generation.SampleDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and
- `return_dict_in_generate=True` or a [`~generation.SampleEncoderDecoderOutput`] if
- `model.config.is_encoder_decoder=True`.
-
- Examples:
-
- ```python
- >>> from transformers import (
- ... AutoTokenizer,
- ... AutoModelForCausalLM,
- ... LogitsProcessorList,
- ... MinLengthLogitsProcessor,
- ... TopKLogitsWarper,
- ... TemperatureLogitsWarper,
- ... StoppingCriteriaList,
- ... MaxLengthCriteria,
- ... )
- >>> import torch
-
- >>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
- >>> model = AutoModelForCausalLM.from_pretrained("gpt2")
-
- >>> # set pad_token_id to eos_token_id because GPT2 does not have a PAD token
- >>> model.config.pad_token_id = model.config.eos_token_id
- >>> model.generation_config.pad_token_id = model.config.eos_token_id
-
- >>> input_prompt = "Today is a beautiful day, and"
- >>> input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids
-
- >>> # instantiate logits processors
- >>> logits_processor = LogitsProcessorList(
- ... [
- ... MinLengthLogitsProcessor(15, eos_token_id=model.generation_config.eos_token_id),
- ... ]
- ... )
- >>> # instantiate logits processors
- >>> logits_warper = LogitsProcessorList(
- ... [
- ... TopKLogitsWarper(50),
- ... TemperatureLogitsWarper(0.7),
- ... ]
- ... )
-
- >>> stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
-
- >>> torch.manual_seed(0) # doctest: +IGNORE_RESULT
- >>> outputs = model.sample(
- ... input_ids,
- ... logits_processor=logits_processor,
- ... logits_warper=logits_warper,
- ... stopping_criteria=stopping_criteria,
- ... )
-
- >>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
- ['Today is a beautiful day, and a wonderful day.\n\nI was lucky enough to meet the']
- ```"""
- # init values
- logits_processor = (
- logits_processor if logits_processor is not None else LogitsProcessorList()
- )
- stopping_criteria = (
- stopping_criteria
- if stopping_criteria is not None
- else StoppingCriteriaList()
- )
- if max_length is not None:
- warnings.warn(
- "`max_length` is deprecated in this function, use"
- " `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.",
- UserWarning,
- )
- stopping_criteria = validate_stopping_criteria(
- stopping_criteria, max_length
- )
- logits_warper = (
- logits_warper if logits_warper is not None else LogitsProcessorList()
- )
- pad_token_id = (
- pad_token_id
- if pad_token_id is not None
- else self.generation_config.pad_token_id
- )
- eos_token_id = (
- eos_token_id
- if eos_token_id is not None
- else self.generation_config.eos_token_id
- )
- if isinstance(eos_token_id, int):
- eos_token_id = [eos_token_id]
- output_scores = (
- output_scores
- if output_scores is not None
- else self.generation_config.output_scores
- )
- output_attentions = (
- output_attentions
- if output_attentions is not None
- else self.generation_config.output_attentions
- )
- output_hidden_states = (
- output_hidden_states
- if output_hidden_states is not None
- else self.generation_config.output_hidden_states
- )
- return_dict_in_generate = (
- return_dict_in_generate
- if return_dict_in_generate is not None
- else self.generation_config.return_dict_in_generate
- )
-
- # init attention / hidden states / scores tuples
- scores = () if (return_dict_in_generate and output_scores) else None
- decoder_attentions = (
- () if (return_dict_in_generate and output_attentions) else None
- )
- cross_attentions = (
- () if (return_dict_in_generate and output_attentions) else None
- )
- decoder_hidden_states = (
- () if (return_dict_in_generate and output_hidden_states) else None
- )
-
- # keep track of which sequences are already finished
- unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
-
- this_peer_finished = False # used by synced_gpus only
- # auto-regressive generation
- while True:
- if synced_gpus:
- # Under synced_gpus the `forward` call must continue until all gpus complete their sequence.
- # The following logic allows an early break if all peers finished generating their sequence
- this_peer_finished_flag = torch.tensor(
- 0.0 if this_peer_finished else 1.0
- ).to(input_ids.device)
- # send 0.0 if we finished, 1.0 otherwise
- dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM)
- # did all peers finish? the reduced sum will be 0.0 then
- if this_peer_finished_flag.item() == 0.0:
- break
-
- # prepare model inputs
- model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
-
- # forward pass to get next token
- outputs = self(
- **model_inputs,
- return_dict=True,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- )
-
- if synced_gpus and this_peer_finished:
- continue # don't waste resources running the code we don't need
-
- next_token_logits = outputs.logits[:, -1, :]
-
- # pre-process distribution
- next_token_scores = logits_processor(input_ids, next_token_logits)
- next_token_scores = logits_warper(input_ids, next_token_scores)
-
- # Store scores, attentions and hidden_states when required
- if return_dict_in_generate:
- if output_scores:
- scores += (next_token_scores,)
- if output_attentions:
- decoder_attentions += (
- (outputs.decoder_attentions,)
- if self.config.is_encoder_decoder
- else (outputs.attentions,)
- )
- if self.config.is_encoder_decoder:
- cross_attentions += (outputs.cross_attentions,)
-
- if output_hidden_states:
- decoder_hidden_states += (
- (outputs.decoder_hidden_states,)
- if self.config.is_encoder_decoder
- else (outputs.hidden_states,)
- )
-
- # sample
- probs = nn.functional.softmax(next_token_scores, dim=-1)
- next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
-
- # finished sentences should have their next token be a padding token
- if eos_token_id is not None:
- if pad_token_id is None:
- raise ValueError(
- "If `eos_token_id` is defined, make sure that `pad_token_id` is defined."
- )
- next_tokens = next_tokens * unfinished_sequences + pad_token_id * (
- 1 - unfinished_sequences
- )
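- # stream the new token with its normalized final hidden state (assumes output_hidden_states=True and a model that defines final_norm)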
- yield next_tokens, self.final_norm(outputs.hidden_states[-1][:, -1])
- # update generated ids, model inputs, and length for next step
- input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
- model_kwargs = self._update_model_kwargs_for_generation(
- outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
- )
-
- # if eos_token was found in one sentence, set sentence to finished
- if eos_token_id is not None:
- unfinished_sequences = unfinished_sequences.mul(
- (sum(next_tokens != i for i in eos_token_id)).long()
- )
-
- # stop when each sentence is finished, or if we exceed the maximum length
- if unfinished_sequences.max() == 0 or stopping_criteria(input_ids, scores):
- if not synced_gpus:
- break
- else:
- this_peer_finished = True
-
-
-def init_stream_support():
- """Overload PreTrainedModel for streaming."""
- PreTrainedModel.generate_stream = NewGenerationMixin.generate
- PreTrainedModel.sample_stream = NewGenerationMixin.sample_stream
-
-
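-# Example usage (a sketch): after init_stream_support(), a causal LM can stream tokens
-# via `for token, hidden in model.generate_stream(input_ids, do_sample=True, do_stream=True)`.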
-if __name__ == "__main__":
- from transformers import PreTrainedModel
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
- PreTrainedModel.generate = NewGenerationMixin.generate
- PreTrainedModel.sample_stream = NewGenerationMixin.sample_stream
- model = AutoModelForCausalLM.from_pretrained(
- "bigscience/bloom-560m", torch_dtype=torch.float16
- )
-
- tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
- model = model.to("cuda:0")
- model = model.eval()
- prompt_text = "hello? \n"
- input_ids = tokenizer(
- prompt_text, return_tensors="pt", add_special_tokens=False
- ).input_ids
- input_ids = input_ids.to("cuda:0")
-
- with torch.no_grad():
- result = model.generate(
- input_ids,
- max_new_tokens=200,
- do_sample=True,
- top_k=30,
- top_p=0.85,
- temperature=0.35,
- repetition_penalty=1.2,
- early_stopping=True,
- seed=0,
- )
- print(tokenizer.decode(result[0], skip_special_tokens=True))  # decode the first (only) sequence in the batch
- generator = model.generate(
- input_ids,
- max_new_tokens=200,
- do_sample=True,
- top_k=30,
- top_p=0.85,
- temperature=0.35,
- repetition_penalty=1.2,
- early_stopping=True,
- seed=0,
- do_stream=True,
- )
- stream_result = ""
- for token, _hidden in generator:  # sample_stream yields (next_token, hidden state) pairs here
- chunk = tokenizer.decode(token, skip_special_tokens=True)
- stream_result += chunk
- print(stream_result)
diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/tools/dump_clip_features.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/tools/dump_clip_features.py
deleted file mode 100644
index 127f8c2a86c2425611c8ec075006664f5e07df45..0000000000000000000000000000000000000000
--- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/tools/dump_clip_features.py
+++ /dev/null
@@ -1,116 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
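-# Dumps per-category text embeddings (CLIP, BERT, or RoBERTa) for the classes in an
-# LVIS-style annotation file, optionally averaged over synonyms.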
-import argparse
-import json
-import torch
-import numpy as np
-import itertools
-from nltk.corpus import wordnet
-import sys
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--ann', default='datasets/lvis/lvis_v1_val.json')
- parser.add_argument('--out_path', default='')
- parser.add_argument('--prompt', default='a')
- parser.add_argument('--model', default='clip')
- parser.add_argument('--clip_model', default="ViT-B/32")
- parser.add_argument('--fix_space', action='store_true')
- parser.add_argument('--use_underscore', action='store_true')
- parser.add_argument('--avg_synonyms', action='store_true')
- parser.add_argument('--use_wn_name', action='store_true')
- args = parser.parse_args()
-
- print('Loading', args.ann)
- data = json.load(open(args.ann, 'r'))
- cat_names = [x['name'] for x in \
- sorted(data['categories'], key=lambda x: x['id'])]
- if 'synonyms' in data['categories'][0]:
- if args.use_wn_name:
- synonyms = [
- [xx.name() for xx in wordnet.synset(x['synset']).lemmas()] \
- if x['synset'] != 'stop_sign.n.01' else ['stop_sign'] \
- for x in sorted(data['categories'], key=lambda x: x['id'])]
- else:
- synonyms = [x['synonyms'] for x in \
- sorted(data['categories'], key=lambda x: x['id'])]
- else:
- synonyms = []
- if args.fix_space:
- cat_names = [x.replace('_', ' ') for x in cat_names]
- if args.use_underscore:
- cat_names = [x.strip().replace('/ ', '/').replace(' ', '_') for x in cat_names]
- print('cat_names', cat_names)
- device = "cuda" if torch.cuda.is_available() else "cpu"
-
- if args.prompt == 'a':
- sentences = ['a ' + x for x in cat_names]
- sentences_synonyms = [['a ' + xx for xx in x] for x in synonyms]
-    elif args.prompt == 'none':
- sentences = [x for x in cat_names]
- sentences_synonyms = [[xx for xx in x] for x in synonyms]
- elif args.prompt == 'photo':
- sentences = ['a photo of a {}'.format(x) for x in cat_names]
- sentences_synonyms = [['a photo of a {}'.format(xx) for xx in x] \
- for x in synonyms]
- elif args.prompt == 'scene':
- sentences = ['a photo of a {} in the scene'.format(x) for x in cat_names]
- sentences_synonyms = [['a photo of a {} in the scene'.format(xx) for xx in x] \
- for x in synonyms]
-
- print('sentences_synonyms', len(sentences_synonyms), \
- sum(len(x) for x in sentences_synonyms))
- if args.model == 'clip':
- import clip
- print('Loading CLIP')
- model, preprocess = clip.load(args.clip_model, device=device)
- if args.avg_synonyms:
- sentences = list(itertools.chain.from_iterable(sentences_synonyms))
- print('flattened_sentences', len(sentences))
- text = clip.tokenize(sentences).to(device)
- with torch.no_grad():
- if len(text) > 10000:
- text_features = torch.cat([
- model.encode_text(text[:len(text) // 2]),
- model.encode_text(text[len(text) // 2:])],
- dim=0)
- else:
- text_features = model.encode_text(text)
- print('text_features.shape', text_features.shape)
- if args.avg_synonyms:
- synonyms_per_cat = [len(x) for x in sentences_synonyms]
- text_features = text_features.split(synonyms_per_cat, dim=0)
- text_features = [x.mean(dim=0) for x in text_features]
- text_features = torch.stack(text_features, dim=0)
- print('after stack', text_features.shape)
- text_features = text_features.cpu().numpy()
- elif args.model in ['bert', 'roberta']:
- from transformers import AutoTokenizer, AutoModel
- if args.model == 'bert':
- model_name = 'bert-large-uncased'
- if args.model == 'roberta':
- model_name = 'roberta-large'
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModel.from_pretrained(model_name)
- model.eval()
- if args.avg_synonyms:
- sentences = list(itertools.chain.from_iterable(sentences_synonyms))
- print('flattened_sentences', len(sentences))
- inputs = tokenizer(sentences, padding=True, return_tensors="pt")
- with torch.no_grad():
- model_outputs = model(**inputs)
- outputs = model_outputs.pooler_output
- text_features = outputs.detach().cpu()
- if args.avg_synonyms:
- synonyms_per_cat = [len(x) for x in sentences_synonyms]
- text_features = text_features.split(synonyms_per_cat, dim=0)
- text_features = [x.mean(dim=0) for x in text_features]
- text_features = torch.stack(text_features, dim=0)
- print('after stack', text_features.shape)
- text_features = text_features.numpy()
- print('text_features.shape', text_features.shape)
- else:
- assert 0, args.model
- if args.out_path != '':
-        print('saving to', args.out_path)
- np.save(open(args.out_path, 'wb'), text_features)
- import pdb; pdb.set_trace()
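
The heart of the deleted dump_clip_features.py is the synonym-averaging step: every synonym is encoded separately, then each category keeps the mean of its synonyms' embeddings. A minimal sketch of that split-and-average pattern, with hypothetical sentence lists and random vectors standing in for model.encode_text:

import torch

# hypothetical synonym groups for two categories
sentences_synonyms = [["a cat", "a kitten"], ["a car", "an auto", "a vehicle"]]
flat = [s for group in sentences_synonyms for s in group]

text_features = torch.randn(len(flat), 4)  # stand-in for CLIP text embeddings

synonyms_per_cat = [len(group) for group in sentences_synonyms]
chunks = text_features.split(synonyms_per_cat, dim=0)
per_category = torch.stack([c.mean(dim=0) for c in chunks], dim=0)
print(per_category.shape)  # torch.Size([2, 4]) -- one averaged embedding per category
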
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/fast_scnn.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/fast_scnn.py
deleted file mode 100644
index 32fdeb659355a5ce5ef2cc7c2f30742703811cdf..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/fast_scnn.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True, momentum=0.01)
-model = dict(
- type='EncoderDecoder',
- backbone=dict(
- type='FastSCNN',
- downsample_dw_channels=(32, 48),
- global_in_channels=64,
- global_block_channels=(64, 96, 128),
- global_block_strides=(2, 2, 1),
- global_out_channels=128,
- higher_in_channels=64,
- lower_in_channels=128,
- fusion_out_channels=128,
- out_indices=(0, 1, 2),
- norm_cfg=norm_cfg,
- align_corners=False),
- decode_head=dict(
- type='DepthwiseSeparableFCNHead',
- in_channels=128,
- channels=128,
- concat_input=False,
- num_classes=19,
- in_index=-1,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.4)),
- auxiliary_head=[
- dict(
- type='FCNHead',
- in_channels=128,
- channels=32,
- num_convs=1,
- num_classes=19,
- in_index=-2,
- norm_cfg=norm_cfg,
- concat_input=False,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.4)),
- dict(
- type='FCNHead',
- in_channels=64,
- channels=32,
- num_convs=1,
- num_classes=19,
- in_index=-3,
- norm_cfg=norm_cfg,
- concat_input=False,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.4)),
- ],
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
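
This Fast-SCNN file is a standard mmseg config: a nested dict that build_segmentor turns into an EncoderDecoder with one decode head and two auxiliary FCN heads. A hedged sketch of how such a file is typically consumed (the import paths are assumptions that depend on the vendored mmcv/mmseg versions, and the SyncBN layers only run a forward pass inside a distributed process group):

from mmcv import Config
from mmseg.models import build_segmentor

cfg = Config.fromfile('configs/_base_/models/fast_scnn.py')
# recent mmseg reads train_cfg/test_cfg from the model dict, as embedded above
model = build_segmentor(cfg.model)
print(type(model).__name__)                        # EncoderDecoder
print(sum(p.numel() for p in model.parameters()))  # parameter count
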
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/hsigmoid.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/hsigmoid.py
deleted file mode 100644
index 30b1a3d6580cf0360710426fbea1f05acdf07b4b..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/hsigmoid.py
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch.nn as nn
-
-from .registry import ACTIVATION_LAYERS
-
-
-@ACTIVATION_LAYERS.register_module()
-class HSigmoid(nn.Module):
- """Hard Sigmoid Module. Apply the hard sigmoid function:
- Hsigmoid(x) = min(max((x + bias) / divisor, min_value), max_value)
- Default: Hsigmoid(x) = min(max((x + 1) / 2, 0), 1)
-
- Args:
- bias (float): Bias of the input feature map. Default: 1.0.
- divisor (float): Divisor of the input feature map. Default: 2.0.
- min_value (float): Lower bound value. Default: 0.0.
- max_value (float): Upper bound value. Default: 1.0.
-
- Returns:
- Tensor: The output tensor.
- """
-
- def __init__(self, bias=1.0, divisor=2.0, min_value=0.0, max_value=1.0):
- super(HSigmoid, self).__init__()
- self.bias = bias
- self.divisor = divisor
- assert self.divisor != 0
- self.min_value = min_value
- self.max_value = max_value
-
- def forward(self, x):
- x = (x + self.bias) / self.divisor
-
- return x.clamp_(self.min_value, self.max_value)
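
The docstring's formula is easy to sanity-check by hand; with the defaults it reduces to clamp((x + 1) / 2, 0, 1), a cheap piecewise-linear stand-in for a sigmoid:

import torch

x = torch.tensor([-3.0, -1.0, 0.0, 1.0, 3.0])
y = ((x + 1.0) / 2.0).clamp(0.0, 1.0)   # HSigmoid with bias=1, divisor=2
print(y)  # tensor([0.0000, 0.0000, 0.5000, 1.0000, 1.0000])
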
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/backbones/uniformer.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/backbones/uniformer.py
deleted file mode 100644
index 0c4bb88e4c928540cca9ab609988b916520f5b7a..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/backbones/uniformer.py
+++ /dev/null
@@ -1,422 +0,0 @@
-# --------------------------------------------------------
-# UniFormer
-# Copyright (c) 2022 SenseTime X-Lab
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Kunchang Li
-# --------------------------------------------------------
-
-from collections import OrderedDict
-import math
-
-from functools import partial
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-import numpy as np
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-
-from annotator.uniformer.mmcv_custom import load_checkpoint
-from annotator.uniformer.mmseg.utils import get_root_logger
-from ..builder import BACKBONES
-
-
-class Mlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-class CMlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Conv2d(in_features, hidden_features, 1)
- self.act = act_layer()
- self.fc2 = nn.Conv2d(hidden_features, out_features, 1)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-class CBlock(nn.Module):
- def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
- self.norm1 = nn.BatchNorm2d(dim)
- self.conv1 = nn.Conv2d(dim, dim, 1)
- self.conv2 = nn.Conv2d(dim, dim, 1)
- self.attn = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = nn.BatchNorm2d(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = CMlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.pos_embed(x)
- x = x + self.drop_path(self.conv2(self.attn(self.conv1(self.norm1(x)))))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- return x
-
-
-class Attention(nn.Module):
- def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.):
- super().__init__()
- self.num_heads = num_heads
- head_dim = dim // num_heads
- # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights
- self.scale = qk_scale or head_dim ** -0.5
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- def forward(self, x):
- B, N, C = x.shape
- qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- attn = (q @ k.transpose(-2, -1)) * self.scale
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-class SABlock(nn.Module):
- def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim,
- num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
- attn_drop=attn_drop, proj_drop=drop)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.pos_embed(x)
-        B, C, H, W = x.shape
- x = x.flatten(2).transpose(1, 2)
- x = x + self.drop_path(self.attn(self.norm1(x)))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-        x = x.transpose(1, 2).reshape(B, C, H, W)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
-
-class SABlock_Windows(nn.Module):
- def __init__(self, dim, num_heads, window_size=14, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.window_size=window_size
- self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim,
- num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
- attn_drop=attn_drop, proj_drop=drop)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.pos_embed(x)
- x = x.permute(0, 2, 3, 1)
- B, H, W, C = x.shape
- shortcut = x
- x = self.norm1(x)
-
- pad_l = pad_t = 0
- pad_r = (self.window_size - W % self.window_size) % self.window_size
- pad_b = (self.window_size - H % self.window_size) % self.window_size
- x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
- _, Hp, Wp, _ = x.shape
-
- x_windows = window_partition(x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
-        # window attention (W-MSA only; this block uses no shifted windows)
- attn_windows = self.attn(x_windows) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
-
-        # undo the padding (no cyclic shift in this block)
- if pad_r > 0 or pad_b > 0:
- x = x[:, :H, :W, :].contiguous()
-
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- x = x.permute(0, 3, 1, 2).reshape(B, C, H, W)
- return x
-
-
-class PatchEmbed(nn.Module):
- """ Image to Patch Embedding
- """
- def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
- self.img_size = img_size
- self.patch_size = patch_size
- self.num_patches = num_patches
- self.norm = nn.LayerNorm(embed_dim)
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
-
- def forward(self, x):
- B, _, H, W = x.shape
- x = self.proj(x)
- B, _, H, W = x.shape
- x = x.flatten(2).transpose(1, 2)
- x = self.norm(x)
- x = x.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous()
- return x
-
-
-@BACKBONES.register_module()
-class UniFormer(nn.Module):
-    """ UniFormer
-    A PyTorch implementation of the UniFormer backbone, which stacks
-    convolutional stages (CBlock) and self-attention stages (SABlock).
- """
- def __init__(self, layers=[3, 4, 8, 3], img_size=224, in_chans=3, num_classes=80, embed_dim=[64, 128, 320, 512],
- head_dim=64, mlp_ratio=4., qkv_bias=True, qk_scale=None, representation_size=None,
- drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_layer=partial(nn.LayerNorm, eps=1e-6),
- pretrained_path=None, use_checkpoint=False, checkpoint_num=[0, 0, 0, 0],
- windows=False, hybrid=False, window_size=14):
- """
- Args:
-            layers (list): number of blocks in each layer
- img_size (int, tuple): input image size
- in_chans (int): number of input channels
- num_classes (int): number of classes for classification head
- embed_dim (int): embedding dimension
- head_dim (int): dimension of attention heads
- mlp_ratio (int): ratio of mlp hidden dim to embedding dim
- qkv_bias (bool): enable bias for qkv if True
- qk_scale (float): override default qk scale of head_dim ** -0.5 if set
- representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
- drop_rate (float): dropout rate
- attn_drop_rate (float): attention dropout rate
- drop_path_rate (float): stochastic depth rate
- norm_layer (nn.Module): normalization layer
- pretrained_path (str): path of pretrained model
- use_checkpoint (bool): whether use checkpoint
- checkpoint_num (list): index for using checkpoint in every stage
- windows (bool): whether use window MHRA
- hybrid (bool): whether use hybrid MHRA
- window_size (int): size of window (>14)
- """
- super().__init__()
- self.num_classes = num_classes
- self.use_checkpoint = use_checkpoint
- self.checkpoint_num = checkpoint_num
- self.windows = windows
- print(f'Use Checkpoint: {self.use_checkpoint}')
- print(f'Checkpoint Number: {self.checkpoint_num}')
- self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
- norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
-
- self.patch_embed1 = PatchEmbed(
- img_size=img_size, patch_size=4, in_chans=in_chans, embed_dim=embed_dim[0])
- self.patch_embed2 = PatchEmbed(
- img_size=img_size // 4, patch_size=2, in_chans=embed_dim[0], embed_dim=embed_dim[1])
- self.patch_embed3 = PatchEmbed(
- img_size=img_size // 8, patch_size=2, in_chans=embed_dim[1], embed_dim=embed_dim[2])
- self.patch_embed4 = PatchEmbed(
- img_size=img_size // 16, patch_size=2, in_chans=embed_dim[2], embed_dim=embed_dim[3])
-
- self.pos_drop = nn.Dropout(p=drop_rate)
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(layers))] # stochastic depth decay rule
- num_heads = [dim // head_dim for dim in embed_dim]
- self.blocks1 = nn.ModuleList([
- CBlock(
- dim=embed_dim[0], num_heads=num_heads[0], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer)
- for i in range(layers[0])])
- self.norm1=norm_layer(embed_dim[0])
- self.blocks2 = nn.ModuleList([
- CBlock(
- dim=embed_dim[1], num_heads=num_heads[1], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]], norm_layer=norm_layer)
- for i in range(layers[1])])
- self.norm2 = norm_layer(embed_dim[1])
- if self.windows:
- print('Use local window for all blocks in stage3')
- self.blocks3 = nn.ModuleList([
- SABlock_Windows(
- dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)
- for i in range(layers[2])])
- elif hybrid:
- print('Use hybrid window for blocks in stage3')
- block3 = []
- for i in range(layers[2]):
- if (i + 1) % 4 == 0:
- block3.append(SABlock(
- dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer))
- else:
- block3.append(SABlock_Windows(
- dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer))
- self.blocks3 = nn.ModuleList(block3)
- else:
- print('Use global window for all blocks in stage3')
- self.blocks3 = nn.ModuleList([
- SABlock(
- dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)
- for i in range(layers[2])])
- self.norm3 = norm_layer(embed_dim[2])
- self.blocks4 = nn.ModuleList([
- SABlock(
- dim=embed_dim[3], num_heads=num_heads[3], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]+layers[2]], norm_layer=norm_layer)
- for i in range(layers[3])])
- self.norm4 = norm_layer(embed_dim[3])
-
- # Representation layer
- if representation_size:
- self.num_features = representation_size
- self.pre_logits = nn.Sequential(OrderedDict([
- ('fc', nn.Linear(embed_dim, representation_size)),
- ('act', nn.Tanh())
- ]))
- else:
- self.pre_logits = nn.Identity()
-
- self.apply(self._init_weights)
- self.init_weights(pretrained=pretrained_path)
-
- def init_weights(self, pretrained):
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, map_location='cpu', strict=False, logger=logger)
- print(f'Load pretrained model from {pretrained}')
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- @torch.jit.ignore
- def no_weight_decay(self):
- return {'pos_embed', 'cls_token'}
-
- def get_classifier(self):
- return self.head
-
- def reset_classifier(self, num_classes, global_pool=''):
- self.num_classes = num_classes
- self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()
-
- def forward_features(self, x):
- out = []
- x = self.patch_embed1(x)
- x = self.pos_drop(x)
- for i, blk in enumerate(self.blocks1):
- if self.use_checkpoint and i < self.checkpoint_num[0]:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
- x_out = self.norm1(x.permute(0, 2, 3, 1))
- out.append(x_out.permute(0, 3, 1, 2).contiguous())
- x = self.patch_embed2(x)
- for i, blk in enumerate(self.blocks2):
- if self.use_checkpoint and i < self.checkpoint_num[1]:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
- x_out = self.norm2(x.permute(0, 2, 3, 1))
- out.append(x_out.permute(0, 3, 1, 2).contiguous())
- x = self.patch_embed3(x)
- for i, blk in enumerate(self.blocks3):
- if self.use_checkpoint and i < self.checkpoint_num[2]:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
- x_out = self.norm3(x.permute(0, 2, 3, 1))
- out.append(x_out.permute(0, 3, 1, 2).contiguous())
- x = self.patch_embed4(x)
- for i, blk in enumerate(self.blocks4):
- if self.use_checkpoint and i < self.checkpoint_num[3]:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
- x_out = self.norm4(x.permute(0, 2, 3, 1))
- out.append(x_out.permute(0, 3, 1, 2).contiguous())
- return tuple(out)
-
- def forward(self, x):
- x = self.forward_features(x)
- return x
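
The window_partition/window_reverse pair above is a lossless round trip whenever H and W are multiples of the window size, which is exactly why SABlock_Windows pads before partitioning and crops afterwards. A self-contained check of the same reshape/permute choreography:

import torch

B, H, W, C, ws = 2, 28, 28, 16, 14
x = torch.randn(B, H, W, C)
windows = x.view(B, H // ws, ws, W // ws, ws, C) \
           .permute(0, 1, 3, 2, 4, 5).reshape(-1, ws, ws, C)
print(windows.shape)  # torch.Size([8, 14, 14, 16]) -- num_windows * B leads
x_back = windows.view(B, H // ws, W // ws, ws, ws, C) \
                .permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
print(torch.equal(x, x_back))  # True
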
diff --git a/spaces/MercuryLeafer/img-to-music/app.py b/spaces/MercuryLeafer/img-to-music/app.py
deleted file mode 100644
index 30d094ce05b344d21f1c497c183a4ce7649ec164..0000000000000000000000000000000000000000
--- a/spaces/MercuryLeafer/img-to-music/app.py
+++ /dev/null
@@ -1,333 +0,0 @@
-import gradio as gr
-import openai
-import numpy as np
-import time
-import base64
-import ffmpeg
-from sentence_transformers import SentenceTransformer
-from audio2numpy import open_audio
-import httpx
-import json
-import os
-import requests
-import urllib
-import pydub
-from os import path
-from pydub import AudioSegment
-import re
-
-MUBERT_LICENSE = os.environ.get('MUBERT_LICENSE')
-MUBERT_TOKEN = os.environ.get('MUBERT_TOKEN')
-
-#img_to_text = gr.Blocks.load(name="spaces/pharma/CLIP-Interrogator")
-img_to_text = gr.Blocks.load(name="spaces/fffiloni/CLIP-Interrogator-2")
-
-from share_btn import community_icon_html, loading_icon_html, share_js
-from utils import get_tags_for_prompts, get_mubert_tags_embeddings
-
-minilm = SentenceTransformer('all-MiniLM-L6-v2')
-mubert_tags_embeddings = get_mubert_tags_embeddings(minilm)
-
-##————————————————————————————————————
-
-
-##————————————————————————————————————
-def get_pat_token():
- r = httpx.post('https://api-b2b.mubert.com/v2/GetServiceAccess',
- json={
- "method": "GetServiceAccess",
- "params": {
- "email":"mail@mail.com",
- "phone":"+11234567890",
- "license": MUBERT_LICENSE,
- "token": MUBERT_TOKEN,
-
- }
- })
-
- rdata = json.loads(r.text)
- assert rdata['status'] == 1, "probably incorrect e-mail"
- pat = rdata['data']['pat']
- #print(f"pat: {pat}")
- return pat
-
-def get_music(pat, prompt, track_duration, gen_intensity, gen_mode):
-
- if len(prompt) > 200:
- prompt = prompt[:200]
-
- r = httpx.post('https://api-b2b.mubert.com/v2/TTMRecordTrack',
- json={
- "method": "TTMRecordTrack",
- "params":
- {
- "text": prompt,
- "pat": pat,
- "mode":gen_mode,
- "duration":track_duration,
- "intensity": gen_intensity,
- "format": "wav"
- }
- })
-
- rdata = json.loads(r.text)
-
- #print(f"rdata: {rdata}")
- assert rdata['status'] == 1, rdata['error']['text']
- track = rdata['data']['tasks'][0]['download_link']
- print(track)
-
- local_file_path = "sample.wav"
-
-    # Download the generated WAV file from the URL
- headers = {
- 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7; rv:93.0) Gecko/20100101 Firefox/93.0'}
-
- retries = 3
- delay = 5 # in seconds
- while retries > 0:
- response = requests.get(track, headers=headers)
- if response.status_code == 200:
- break
- retries -= 1
- time.sleep(delay)
-    # the retry loop above already fetched the track; reuse that response
- print(f"{response}")
- # Save the downloaded content to a local file
- with open(local_file_path, 'wb') as f:
- f.write(response.content)
- return "sample.wav", track
-
-
-def get_results(text_prompt,track_duration,gen_intensity,gen_mode):
- pat_token = get_pat_token()
- music = get_music(pat_token, text_prompt, track_duration, gen_intensity, gen_mode)
- return pat_token, music[0], music[1]
-
-def get_prompts(uploaded_image, track_duration, gen_intensity, gen_mode, openai_api_key):
- print("calling clip interrogator")
- #prompt = img_to_text(uploaded_image, "ViT-L (best for Stable Diffusion 1.*)", "fast", fn_index=1)[0]
-
- prompt = img_to_text(uploaded_image, 'best', 4, fn_index=1)[0]
- print(prompt)
- clean_prompt = clean_text(prompt)
- print(f"prompt cleaned: {clean_prompt}")
- musical_prompt = 'You did not use any OpenAI API key to pimp your result :)'
- if openai_api_key is not None:
- gpt_adaptation = try_api(prompt, openai_api_key)
- if gpt_adaptation[0] != "oups":
- musical_prompt = gpt_adaptation[0]
- print(f"musical adapt: {musical_prompt}")
- music_result = get_results(musical_prompt, track_duration, gen_intensity, gen_mode)
- else:
- music_result = get_results(clean_prompt, track_duration, gen_intensity, gen_mode)
- else:
- music_result = get_results(clean_prompt, track_duration, gen_intensity, gen_mode)
-
- show_prompts = f"""
- CLIP Interrogator Caption: '{prompt}'
- —
- OpenAI Musical Adaptation: '{musical_prompt}'
- —
- Audio file link: {music_result[2]}
- """
- #wave_file = convert_mp3_to_wav(music_result[1])
-
- time.sleep(1)
- return gr.Textbox.update(value=show_prompts, visible=True), music_result[1], gr.update(visible=True), gr.update(visible=True), gr.update(visible=True)
-
-def try_api(message, openai_api_key):
-
- try:
- response = call_api(message, openai_api_key)
- return response, "no error "
- except openai.error.Timeout as e:
- #Handle timeout error, e.g. retry or log
- #print(f"OpenAI API request timed out: {e}")
- return "oups", f"OpenAI API request timed out: {e} "
- except openai.error.APIError as e:
- #Handle API error, e.g. retry or log
- #print(f"OpenAI API returned an API Error: {e}")
- return "oups", f"OpenAI API returned an API Error: {e} "
- except openai.error.APIConnectionError as e:
- #Handle connection error, e.g. check network or log
- #print(f"OpenAI API request failed to connect: {e}")
- return "oups", f"OpenAI API request failed to connect: {e} "
- except openai.error.InvalidRequestError as e:
- #Handle invalid request error, e.g. validate parameters or log
- #print(f"OpenAI API request was invalid: {e}")
- return "oups", f"OpenAI API request was invalid: {e} "
- except openai.error.AuthenticationError as e:
- #Handle authentication error, e.g. check credentials or log
- #print(f"OpenAI API request was not authorized: {e}")
- return "oups", f"OpenAI API request was not authorized: {e} "
- except openai.error.PermissionError as e:
- #Handle permission error, e.g. check scope or log
- #print(f"OpenAI API request was not permitted: {e}")
- return "oups", f"OpenAI API request was not permitted: {e} "
- except openai.error.RateLimitError as e:
- #Handle rate limit error, e.g. wait or log
- #print(f"OpenAI API request exceeded rate limit: {e}")
- return "oups", f"OpenAI API request exceeded rate limit: {e} "
-
-def call_api(message, openai_api_key):
-
-    instruction = "Convert in less than 200 characters this image caption to a very concise musical description with musical terms, as if you wanted to describe a musical ambiance, strictly in English"
-
- print("starting open ai")
- augmented_prompt = f"{instruction}: '{message}'."
- openai.api_key = openai_api_key
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=augmented_prompt,
- temperature=0.5,
- max_tokens=2048,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0.6
- )
-
- #print(response)
-
- #return str(response.choices[0].text).split("\n",2)[2]
- return str(response.choices[0].text).lstrip('\n')
-
-
-def get_track_by_tags(tags, pat, duration, gen_intensity, gen_mode, maxit=20):
-
- r = httpx.post('https://api-b2b.mubert.com/v2/RecordTrackTTM',
- json={
- "method": "RecordTrackTTM",
- "params": {
- "pat": pat,
- "duration": duration,
- "format": "wav",
- "intensity":gen_intensity,
- "tags": tags,
- "mode": gen_mode
- }
- })
-
- rdata = json.loads(r.text)
- print(rdata)
- #assert rdata['status'] == 1, rdata['error']['text']
- trackurl = rdata['data']['tasks'][0]
-
- print('Generating track ', end='')
- for i in range(maxit):
- r = httpx.get(trackurl)
- if r.status_code == 200:
- return trackurl
- time.sleep(1)
-
-
-def generate_track_by_prompt(pat, prompt, duration, gen_intensity, gen_mode):
- try:
- _, tags = get_tags_for_prompts(minilm, mubert_tags_embeddings, prompt)[0]
- result = get_track_by_tags(tags, pat, int(duration), gen_intensity, gen_mode)
- print(result)
- return result, ",".join(tags), "Success"
- except Exception as e:
- return None, "", str(e)
-
-def convert_mp3_to_wav(mp3_filepath):
-
- wave_file="file.wav"
-
- sound = AudioSegment.from_mp3(mp3_filepath)
- sound.export(wave_file, format="wav")
-
- return wave_file
-
-def remove_emoji(text):
- emoji_pattern = re.compile("["
- u"\U0001F600-\U0001F64F" # emoticons
- u"\U0001F300-\U0001F5FF" # symbols & pictographs
- u"\U0001F680-\U0001F6FF" # transport & map symbols
- u"\U0001F1E0-\U0001F1FF" # flags (iOS)
- "]+", flags=re.UNICODE)
- return emoji_pattern.sub(r'', text)
-
-def remove_nonalphanumeric(text):
- return re.sub(r'[^a-zA-Z0-9\s]', '', text)
-
-def clean_text(text):
- clean_text = remove_nonalphanumeric(text)
- clean_text = remove_emoji(clean_text)
- clean_text = re.sub(r'\d+', '', clean_text) # Remove any number
- return clean_text
-
-article = """
-
-
-
-
-
-
-"""
-
-with gr.Blocks(css="style.css") as demo:
- with gr.Column(elem_id="col-container"):
-
- gr.HTML("""
-
-
- Image to Music
-
-
-
- Sends an image in to CLIP Interrogator
- to generate a text prompt which is then run through
- Mubert text-to-music to generate music from the input image!
-
-
""")
-
- input_img = gr.Image(type="filepath", elem_id="input-img")
-        prompts_out = gr.Textbox(label="Text Captions", visible=False, elem_id="prompts_out", info="If the player does not work, try copying and pasting the link into a new browser window")
- music_output = gr.Audio(label="Result", type="filepath", elem_id="music-output").style(height="5rem")
- #music_url = gr.Textbox(max_lines=1, info="If player do not work, try to copy/paste the link in a new browser window")
- #text_status = gr.Textbox(label="status")
- with gr.Group(elem_id="share-btn-container"):
- community_icon = gr.HTML(community_icon_html, visible=False)
- loading_icon = gr.HTML(loading_icon_html, visible=False)
- share_button = gr.Button("Share to community", elem_id="share-btn", visible=False)
-
- with gr.Accordion(label="Music Generation Options", open=False):
- openai_api_key = gr.Textbox(type="password", label="🔐 Your OpenAI API Key (optional)", placeholder="sk-123abc...", info="You can use your OpenAI key to adapt CLIP Interrogator caption to a musical translation.")
-            track_duration = gr.Slider(minimum=20, maximum=120, value=55, step=5, label="Track duration", elem_id="duration-inp")
- with gr.Row():
- gen_intensity = gr.Dropdown(choices=["low", "medium", "high"], value="medium", label="Intensity")
- gen_mode = gr.Radio(label="mode", choices=["track", "loop"], value="loop")
-
- generate = gr.Button("Generate Music from Image")
-
- gr.HTML(article)
-
- generate.click(get_prompts, inputs=[input_img,track_duration,gen_intensity,gen_mode, openai_api_key], outputs=[prompts_out, music_output, share_button, community_icon, loading_icon], api_name="i2m")
- share_button.click(None, [], [], _js=share_js)
-
-demo.queue(max_size=32).launch()
\ No newline at end of file
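
The download loop in get_music (fetch, check for HTTP 200, sleep, retry) is a pattern worth isolating. A minimal sketch of the same idea as a reusable helper, with placeholder URL and header values:

import time
import requests

def download_with_retries(url, path, retries=3, delay=5):
    """Fetch url and write it to path, retrying on non-200 responses."""
    headers = {"User-Agent": "Mozilla/5.0"}  # placeholder UA string
    for _ in range(retries):
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            with open(path, "wb") as f:
                f.write(response.content)
            return path
        time.sleep(delay)
    raise RuntimeError(f"failed to download {url} after {retries} attempts")
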
diff --git a/spaces/MilaNLProc/wordify/src/configs.py b/spaces/MilaNLProc/wordify/src/configs.py
deleted file mode 100644
index 09fc01dd3366da2bcd758f1a5b6e4233f812008c..0000000000000000000000000000000000000000
--- a/spaces/MilaNLProc/wordify/src/configs.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from enum import Enum
-
-import pandas as pd
-
-
-class ColumnNames(Enum):
- LABEL = "label"
- TEXT = "text"
- PROCESSED_TEXT = "processed_text"
-
-
-class ModelConfigs(Enum):
- NUM_ITERS = 500
- SELECTION_THRESHOLD = 0.0
- PENALTIES = [10, 5, 2, 1, 0.5, 0.1, 0.05, 0.01, 0.005, 0.001, 0.0001, 0.00001]
- MAX_SELECTION = 100_000
- MIN_SELECTION = 10_000
-
-
-class InputTransformConfigs(Enum):
- NGRAM_RANGE = (1, 3)
- MIN_DF = 0.001
- MAX_DF = 0.75
- SUBLINEAR = True
-
-
-class PreprocessingConfigs(Enum):
- DEFAULT_PRE = [1, 14, 2, 3, 4, 5, 23, 22, 21, 24]
- DEFAULT_LEMMA = 1
- DEFAULT_POST = [0, 17, 15, 19, 23, 22, 21, 24]
-
-
-class Languages(Enum):
- English = "en_core_web_sm"
- Italian = "it_core_news_sm"
- German = "de_core_news_sm"
- Spanish = "es_core_news_sm"
- Greek = "el_core_news_sm"
- Dutch = "nl_core_news_sm"
- Portuguese = "pt_core_news_sm"
- French = "fr_core_news_sm"
- Danish = "da_core_news_sm"
- # Japanese = "ja_core_news_sm"
- Lithuanian = "lt_core_news_sm"
-    Norwegian = "nb_core_news_sm"
- Polish = "pl_core_news_sm"
- Romanian = "ro_core_news_sm"
- Russian = "ru_core_news_sm"
- MultiLanguage = "xx_ent_wiki_sm"
- Chinese = "zh_core_web_sm"
-
-
-class SupportedFiles(Enum):
- xlsx = (lambda x: pd.read_excel(x, dtype=str),)
- tsv = (lambda x: pd.read_csv(x, dtype=str, sep="\t"),)
- csv = (lambda x: pd.read_csv(x, dtype=str, sep=","),)
- parquet = (lambda x: pd.read_parquet(x),)
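
Note how each SupportedFiles member wraps its reader in a one-element tuple, so a call site unwraps with .value[0] and dispatches on the file extension. A small usage sketch (the file name is hypothetical, and the import path assumes this Space's src/ layout):

from src.configs import SupportedFiles

file_name = "reviews.csv"                     # hypothetical upload
extension = file_name.split(".")[-1]
read_fn = SupportedFiles[extension].value[0]  # unwrap the (reader,) tuple
df = read_fn(file_name)                       # pandas DataFrame of strings
print(df.shape)
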
diff --git a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/spaces.py b/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/spaces.py
deleted file mode 100644
index 44e894aa1d1244d492a17f61045e59e12f86b350..0000000000000000000000000000000000000000
--- a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/spaces.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import os
-os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
-os.environ["CUDA_VISIBLE_DEVICES"]="0"
-try:
- os.system("pip install --upgrade torch==1.11.0+cu113 torchvision==0.12.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html")
-except Exception as e:
- print(e)
-
-from pydoc import describe
-from huggingface_hub import hf_hub_download
-import gradio as gr
-import os
-from datetime import datetime
-from PIL import Image
-import torch
-import torchvision
-import skimage
-import paddlehub
-import numpy as np
-from lib.options import BaseOptions
-from apps.crop_img import process_img
-from apps.eval import Evaluator
-from types import SimpleNamespace
-import trimesh
-import glob
-
-print(
- "torch: ", torch.__version__,
- "\ntorchvision: ", torchvision.__version__,
- "\nskimage:", skimage.__version__
-)
-
-print("EnV", os.environ)
-
-net_C = hf_hub_download("radames/PIFu-upright-standing", filename="net_C")
-net_G = hf_hub_download("radames/PIFu-upright-standing", filename="net_G")
-
-
-opt = BaseOptions()
-opts = opt.parse_to_dict()
-opts['batch_size'] = 1
-opts['mlp_dim'] = [257, 1024, 512, 256, 128, 1]
-opts['mlp_dim_color'] = [513, 1024, 512, 256, 128, 3]
-opts['num_stack'] = 4
-opts['num_hourglass'] = 2
-opts['resolution'] = 128
-opts['hg_down'] = 'ave_pool'
-opts['norm'] = 'group'
-opts['norm_color'] = 'group'
-opts['load_netG_checkpoint_path'] = net_G
-opts['load_netC_checkpoint_path'] = net_C
-opts['results_path'] = "./results"
-opts['name'] = "spaces_demo"
-opts = SimpleNamespace(**opts)
-print("Params", opts)
-evaluator = Evaluator(opts)
-bg_remover_model = paddlehub.Module(name="U2Net")
-
-
-def process(img_path):
- base = os.path.basename(img_path)
- img_name = os.path.splitext(base)[0]
- print("\n\n\nStarting Process", datetime.now())
- print("image name", img_name)
- img_raw = Image.open(img_path).convert('RGB')
-
- img = img_raw.resize(
- (512, int(512 * img_raw.size[1] / img_raw.size[0])),
- Image.Resampling.LANCZOS)
-
- try:
- # remove background
- print("Removing Background")
- masks = bg_remover_model.Segmentation(
- images=[np.array(img)],
- paths=None,
- batch_size=1,
- input_size=320,
- output_dir='./PIFu/inputs',
- visualization=False)
- mask = masks[0]["mask"]
- front = masks[0]["front"]
- except Exception as e:
- print(e)
-
-    print("Aligning mask with input training image")
- print("Not aligned", front.shape, mask.shape)
- img_new, msk_new = process_img(front, mask)
- print("Aligned", img_new.shape, msk_new.shape)
-
- try:
- time = datetime.now()
- data = evaluator.load_image_from_memory(img_new, msk_new, img_name)
- print("Evaluating via PIFu", time)
- evaluator.eval(data, True)
- print("Success Evaluating via PIFu", datetime.now() - time)
- result_path = f'./{opts.results_path}/{opts.name}/result_{img_name}'
- except Exception as e:
- print("Error evaluating via PIFu", e)
-
- try:
- mesh = trimesh.load(result_path + '.obj')
- # flip mesh
- mesh.apply_transform([[-1, 0, 0, 0],
- [0, 1, 0, 0],
- [0, 0, -1, 0],
- [0, 0, 0, 1]])
- mesh.export(file_obj=result_path + '.glb')
- result_gltf = result_path + '.glb'
- return [result_gltf, result_gltf]
-
- except Exception as e:
- print("error generating MESH", e)
-
-
-examples = sorted(glob.glob('examples/*.png'))
-description = '''
-# PIFu Clothed Human Digitization
-### PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization
-
-
-This is a demo for the PIFu model.
-The pre-trained model has the following warning:
-> Warning: The released model is trained with mostly upright standing scans with weak perspective projection and a pitch angle of 0 degrees. Reconstruction quality may degrade for images that deviate strongly from the training data.
-
-**Inference takes about 180 seconds for a new image.**
-
-
-More
-
-#### Image Credits
-
-* Julien and Clem
-* [StyleGAN Humans](https://huggingface.co/spaces/hysts/StyleGAN-Human)
-* [Renderpeople: Dennis](https://renderpeople.com)
-
-
-#### More
-* https://phorhum.github.io/
-* https://github.com/yuliangxiu/icon
-* https://shunsukesaito.github.io/PIFuHD/
-
-
-'''
-
-iface = gr.Interface(
- fn=process,
- description=description,
- inputs=gr.Image(type="filepath", label="Input Image"),
- outputs=[
- gr.Model3D(
- clear_color=[0.0, 0.0, 0.0, 0.0], label="3D Model"),
- gr.File(label="Download 3D Model")
- ],
- examples=examples,
- allow_flagging="never",
- cache_examples=True
-)
-
-if __name__ == "__main__":
- iface.launch(debug=True, enable_queue=False)
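
The 4x4 matrix applied before export mirrors the reconstructed mesh across the x and z axes, which is the same as a 180-degree rotation about y, so the mesh stays right-handed and face winding remains valid. A standalone check with a unit box standing in for the PIFu result:

import trimesh

mesh = trimesh.creation.box()       # stand-in for the reconstructed mesh
flip = [[-1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, -1, 0],
        [0, 0, 0, 1]]
mesh.apply_transform(flip)
mesh.export(file_obj="result.glb")  # format inferred from the extension
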
diff --git a/spaces/Minoumimi/WaifuMakinTime/README.md b/spaces/Minoumimi/WaifuMakinTime/README.md
deleted file mode 100644
index 5cf22ec37376f90295a251b08285de27278c63c8..0000000000000000000000000000000000000000
--- a/spaces/Minoumimi/WaifuMakinTime/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: WaifuMakinTime
-emoji: 👁
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: gpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MirageML/sjc/sd1/ldm/modules/attention.py b/spaces/MirageML/sjc/sd1/ldm/modules/attention.py
deleted file mode 100644
index f4eff39ccb6d75daa764f6eb70a7cef024fb5a3f..0000000000000000000000000000000000000000
--- a/spaces/MirageML/sjc/sd1/ldm/modules/attention.py
+++ /dev/null
@@ -1,261 +0,0 @@
-from inspect import isfunction
-import math
-import torch
-import torch.nn.functional as F
-from torch import nn, einsum
-from einops import rearrange, repeat
-
-from ldm.modules.diffusionmodules.util import checkpoint
-
-
-def exists(val):
- return val is not None
-
-
-def uniq(arr):
-    return {el: True for el in arr}.keys()
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-def max_neg_value(t):
- return -torch.finfo(t.dtype).max
-
-
-def init_(tensor):
- dim = tensor.shape[-1]
- std = 1 / math.sqrt(dim)
- tensor.uniform_(-std, std)
- return tensor
-
-
-# feedforward
-class GEGLU(nn.Module):
- def __init__(self, dim_in, dim_out):
- super().__init__()
- self.proj = nn.Linear(dim_in, dim_out * 2)
-
- def forward(self, x):
- x, gate = self.proj(x).chunk(2, dim=-1)
- return x * F.gelu(gate)
-
-
-class FeedForward(nn.Module):
- def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.):
- super().__init__()
- inner_dim = int(dim * mult)
- dim_out = default(dim_out, dim)
- project_in = nn.Sequential(
- nn.Linear(dim, inner_dim),
- nn.GELU()
- ) if not glu else GEGLU(dim, inner_dim)
-
- self.net = nn.Sequential(
- project_in,
- nn.Dropout(dropout),
- nn.Linear(inner_dim, dim_out)
- )
-
- def forward(self, x):
- return self.net(x)
-
-
-def zero_module(module):
- """
- Zero out the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().zero_()
- return module
-
-
-def Normalize(in_channels):
- return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)
-
-
-class LinearAttention(nn.Module):
- def __init__(self, dim, heads=4, dim_head=32):
- super().__init__()
- self.heads = heads
- hidden_dim = dim_head * heads
- self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias = False)
- self.to_out = nn.Conv2d(hidden_dim, dim, 1)
-
- def forward(self, x):
- b, c, h, w = x.shape
- qkv = self.to_qkv(x)
- q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads = self.heads, qkv=3)
- k = k.softmax(dim=-1)
- context = torch.einsum('bhdn,bhen->bhde', k, v)
- out = torch.einsum('bhde,bhdn->bhen', context, q)
- out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w)
- return self.to_out(out)
-
-
-class SpatialSelfAttention(nn.Module):
- def __init__(self, in_channels):
- super().__init__()
- self.in_channels = in_channels
-
- self.norm = Normalize(in_channels)
- self.q = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.k = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.v = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.proj_out = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
-
- def forward(self, x):
- h_ = x
- h_ = self.norm(h_)
- q = self.q(h_)
- k = self.k(h_)
- v = self.v(h_)
-
- # compute attention
- b,c,h,w = q.shape
- q = rearrange(q, 'b c h w -> b (h w) c')
- k = rearrange(k, 'b c h w -> b c (h w)')
- w_ = torch.einsum('bij,bjk->bik', q, k)
-
- w_ = w_ * (int(c)**(-0.5))
- w_ = torch.nn.functional.softmax(w_, dim=2)
-
- # attend to values
- v = rearrange(v, 'b c h w -> b c (h w)')
- w_ = rearrange(w_, 'b i j -> b j i')
- h_ = torch.einsum('bij,bjk->bik', v, w_)
- h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h)
- h_ = self.proj_out(h_)
-
- return x+h_
-
-
-class CrossAttention(nn.Module):
- def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.):
- super().__init__()
- inner_dim = dim_head * heads
- context_dim = default(context_dim, query_dim)
-
- self.scale = dim_head ** -0.5
- self.heads = heads
-
- self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
- self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
- self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
-
- self.to_out = nn.Sequential(
- nn.Linear(inner_dim, query_dim),
- nn.Dropout(dropout)
- )
-
- def forward(self, x, context=None, mask=None):
- h = self.heads
-
- q = self.to_q(x)
- context = default(context, x)
- k = self.to_k(context)
- v = self.to_v(context)
-
- q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))
-
- sim = einsum('b i d, b j d -> b i j', q, k) * self.scale
-
- if exists(mask):
- mask = rearrange(mask, 'b ... -> b (...)')
- max_neg_value = -torch.finfo(sim.dtype).max
- mask = repeat(mask, 'b j -> (b h) () j', h=h)
- sim.masked_fill_(~mask, max_neg_value)
-
- # attention, what we cannot get enough of
- attn = sim.softmax(dim=-1)
-
- out = einsum('b i j, b j d -> b i d', attn, v)
- out = rearrange(out, '(b h) n d -> b n (h d)', h=h)
- return self.to_out(out)
-
-
-class BasicTransformerBlock(nn.Module):
- def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True):
- super().__init__()
- self.attn1 = CrossAttention(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout) # is a self-attention
- self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff)
- self.attn2 = CrossAttention(query_dim=dim, context_dim=context_dim,
- heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none
- self.norm1 = nn.LayerNorm(dim)
- self.norm2 = nn.LayerNorm(dim)
- self.norm3 = nn.LayerNorm(dim)
- self.checkpoint = checkpoint
-
- def forward(self, x, context=None):
- return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
-
- def _forward(self, x, context=None):
- x = self.attn1(self.norm1(x)) + x
- x = self.attn2(self.norm2(x), context=context) + x
- x = self.ff(self.norm3(x)) + x
- return x
-
-
-class SpatialTransformer(nn.Module):
- """
- Transformer block for image-like data.
- First, project the input (aka embedding)
- and reshape to b, t, d.
- Then apply standard transformer action.
- Finally, reshape to image
- """
- def __init__(self, in_channels, n_heads, d_head,
- depth=1, dropout=0., context_dim=None):
- super().__init__()
- self.in_channels = in_channels
- inner_dim = n_heads * d_head
- self.norm = Normalize(in_channels)
-
- self.proj_in = nn.Conv2d(in_channels,
- inner_dim,
- kernel_size=1,
- stride=1,
- padding=0)
-
- self.transformer_blocks = nn.ModuleList(
- [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim)
- for d in range(depth)]
- )
-
- self.proj_out = zero_module(nn.Conv2d(inner_dim,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0))
-
- def forward(self, x, context=None):
- # note: if no context is given, cross-attention defaults to self-attention
- b, c, h, w = x.shape
- x_in = x
- x = self.norm(x)
- x = self.proj_in(x)
- x = rearrange(x, 'b c h w -> b (h w) c')
- for block in self.transformer_blocks:
- x = block(x, context=context)
- x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w)
- x = self.proj_out(x)
- return x + x_in
\ No newline at end of file
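
A quick shape check of the SpatialTransformer flow its docstring describes (project to inner_dim, flatten to tokens, attend with optional cross-attention context, reshape back and add the residual). This assumes the file above is importable as ldm.modules.attention; the channel counts are arbitrary, except that in_channels must be divisible by 32 for the GroupNorm inside Normalize:

import torch
from ldm.modules.attention import SpatialTransformer

st = SpatialTransformer(in_channels=64, n_heads=4, d_head=16, context_dim=77)
x = torch.randn(2, 64, 8, 8)
ctx = torch.randn(2, 10, 77)   # e.g. 10 conditioning tokens of width context_dim
out = st(x, context=ctx)
print(out.shape)               # torch.Size([2, 64, 8, 8]) -- residual keeps the shape
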
diff --git a/spaces/MountLiteraSwd/stabilityai-stable-diffusion-7/app.py b/spaces/MountLiteraSwd/stabilityai-stable-diffusion-7/app.py
deleted file mode 100644
index d2782cea00b1bfcd22df7c204d9e52a6baf46ac2..0000000000000000000000000000000000000000
--- a/spaces/MountLiteraSwd/stabilityai-stable-diffusion-7/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/stabilityai/stable-diffusion-2").launch()
\ No newline at end of file
diff --git a/spaces/Munna0912/URL_CLASSIFIER/README.md b/spaces/Munna0912/URL_CLASSIFIER/README.md
deleted file mode 100644
index 0ede5c5bcad7daa6c7e24777629ef6851014b332..0000000000000000000000000000000000000000
--- a/spaces/Munna0912/URL_CLASSIFIER/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: URL CLASSIFIER
-emoji: 🔥
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-sdk_version: 3.40.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/nhnet/testdata/crawled_articles/domain_1.com/url_001.html b/spaces/NCTCMumbai/NCTC/models/official/nlp/nhnet/testdata/crawled_articles/domain_1.com/url_001.html
deleted file mode 100644
index 7c8bb8d285c3e9da41ea8ca546d6d1503e3a7e51..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/nlp/nhnet/testdata/crawled_articles/domain_1.com/url_001.html
+++ /dev/null
@@ -1,3 +0,0 @@
-
-
-Page Title 1
diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/envs/task_env.py b/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/envs/task_env.py
deleted file mode 100644
index 84d527cd2e4e09b587fa47f2a98a6df1592915e9..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/envs/task_env.py
+++ /dev/null
@@ -1,218 +0,0 @@
-# Copyright 2018 The TensorFlow Authors All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-
-"""An interface representing the topology of an environment.
-
-Allows for high level planning and high level instruction generation for
-navigation tasks.
-"""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import abc
-import enum
-import gym
-import gin
-
-
-@gin.config.constants_from_enum
-class ModalityTypes(enum.Enum):
- """Types of the modalities that can be used."""
- IMAGE = 0
- SEMANTIC_SEGMENTATION = 1
- OBJECT_DETECTION = 2
- DEPTH = 3
- GOAL = 4
- PREV_ACTION = 5
- PREV_SUCCESS = 6
- STATE = 7
- DISTANCE = 8
- CAN_STEP = 9
-
- def __lt__(self, other):
- if self.__class__ is other.__class__:
- return self.value < other.value
- return NotImplemented
-
-
-class TaskEnvInterface(object):
- """Interface for an environment topology.
-
- An environment can implement this interface if there is a topological graph
- underlying this environment. All paths below are defined as paths in this
- graph. Using path_to_actions function one can translate a topological path
- to a geometric path in the environment.
- """
-
- __metaclass__ = abc.ABCMeta
-
- @abc.abstractmethod
- def random_step_sequence(self, min_len=None, max_len=None):
- """Generates a random sequence of actions and executes them.
-
- Args:
- min_len: integer, minimum length of a step sequence.
- max_len: integer, if it is set to non-None, the method returns only
- the first n steps of a random sequence. If the environment is
- computationally heavy this argument should be set to speed up the
- training and avoid unnecessary computations by the environment.
-
- Returns:
- A path, defined as a list of vertex indices, a list of actions, a list of
- states, and a list of step() return tuples.
- """
- raise NotImplementedError(
- 'Needs implementation as part of EnvTopology interface.')
-
- @abc.abstractmethod
- def targets(self):
- """A list of targets in the environment.
-
- Returns:
- A list of target locations.
- """
- raise NotImplementedError(
- 'Needs implementation as part of EnvTopology interface.')
-
- @abc.abstractproperty
- def state(self):
-    """Returns the position for the current location of the agent."""
- raise NotImplementedError(
- 'Needs implementation as part of EnvTopology interface.')
-
- @abc.abstractproperty
- def graph(self):
- """Returns a graph representing the environment topology.
-
- Returns:
- nx.Graph object.
- """
- raise NotImplementedError(
- 'Needs implementation as part of EnvTopology interface.')
-
- @abc.abstractmethod
- def vertex_to_pose(self, vertex_index):
- """Maps a vertex index to a pose in the environment.
-
- Pose of the camera can be represented by (x,y,theta) or (x,y,z,theta).
- Args:
- vertex_index: index of a vertex in the topology graph.
-
- Returns:
- A np.array of floats of size 3 or 4 representing the pose of the vertex.
- """
- raise NotImplementedError(
- 'Needs implementation as part of EnvTopology interface.')
-
- @abc.abstractmethod
- def pose_to_vertex(self, pose):
-    """Maps a coordinate in the maze to the closest vertex in the topology graph.
-
- Args:
-      pose: np.array of floats containing the pose of the view.
-
- Returns:
- index of a vertex.
- """
- raise NotImplementedError(
- 'Needs implementation as part of EnvTopology interface.')
-
- @abc.abstractmethod
- def observation(self, state):
- """Returns observation at location xy and orientation theta.
-
- Args:
- state: a np.array of floats containing coordinates of a location and
- orientation.
-
- Returns:
- Dictionary of observations in the case of multiple observations.
- The keys are the modality names and the values are the np.array of float
- of observations for corresponding modality.
- """
- raise NotImplementedError(
- 'Needs implementation as part of EnvTopology interface.')
-
- def action(self, init_state, final_state):
- """Computes the transition action from state1 to state2.
-
-    If the environment is discrete and the views are not adjacent in the
-    environment, i.e. it is not possible to move from the first view to the
-    second view with one action, it should return None. In the continuous
-    case, it is the continuous difference between the first and second view.
-
- Args:
- init_state: numpy array, the initial view of the agent.
- final_state: numpy array, the final view of the agent.
- """
- raise NotImplementedError(
- 'Needs implementation as part of EnvTopology interface.')
-
-
-@gin.configurable
-class TaskEnv(gym.Env, TaskEnvInterface):
- """An environment which uses a Task to compute reward.
-
-  The environment implements a gym interface, as well as EnvTopology. The
-  former makes sure it can be used within an RL training loop, while the
-  latter makes sure it can be used by a Task.
-
- This environment requires _step_no_reward to be implemented, which steps
- through it but does not return reward. Instead, the reward calculation is
- delegated to the Task object, which in return can access needed properties
- of the environment. These properties are exposed via the EnvTopology
- interface.
- """
-
- def __init__(self, task=None):
- self._task = task
-
- def set_task(self, task):
- self._task = task
-
- @abc.abstractmethod
- def _step_no_reward(self, action):
- """Same as _step without returning reward.
-
- Args:
- action: see _step.
-
- Returns:
- state, done, info as defined in _step.
- """
- raise NotImplementedError('Implement step.')
-
- @abc.abstractmethod
- def _reset_env(self):
- """Resets the environment. Returns initial observation."""
- raise NotImplementedError('Implement _reset. Must call super!')
-
- def step(self, action):
- obs, done, info = self._step_no_reward(action)
-
- reward = 0.0
- if self._task is not None:
- obs, reward, done, info = self._task.reward(obs, done, info)
-
- return obs, reward, done, info
-
- def reset(self):
- """Resets the environment. Gym API."""
- obs = self._reset_env()
- if self._task is not None:
- self._task.reset(obs)
- return obs
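
A skeletal subclass makes the contract concrete: the environment implements the reward-free hooks, and step() delegates reward computation to the injected Task. Everything below is a toy placeholder; the topology-interface methods are omitted for brevity (Python 3 does not enforce them here because the interface uses the Python 2 __metaclass__ idiom):

class CountingEnv(TaskEnv):
  """Toy TaskEnv whose state is an integer moved by +1/-1 actions."""

  def __init__(self, task=None):
    super(CountingEnv, self).__init__(task=task)
    self._state = 0

  def _step_no_reward(self, action):
    self._state += 1 if action else -1
    done = abs(self._state) >= 10
    return self._state, done, {}

  def _reset_env(self):
    self._state = 0
    return self._state

env = CountingEnv()                    # no Task attached: reward stays 0.0
obs, reward, done, info = env.step(1)
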
diff --git a/spaces/NMEX/rvc-hoyo-game/vc_infer_pipeline.py b/spaces/NMEX/rvc-hoyo-game/vc_infer_pipeline.py
deleted file mode 100644
index 7ff98b2c812f4e74afe92048fb26009fb008479d..0000000000000000000000000000000000000000
--- a/spaces/NMEX/rvc-hoyo-game/vc_infer_pipeline.py
+++ /dev/null
@@ -1,320 +0,0 @@
-import numpy as np, parselmouth, torch, pdb
-from time import time as ttime
-import torch.nn.functional as F
-import pyworld, os, traceback, faiss
-from scipy import signal
-
-bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
-
-class VC(object):
- def __init__(self, tgt_sr, config):
- self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = (
- config.x_pad,
- config.x_query,
- config.x_center,
- config.x_max,
- config.is_half,
- )
-        self.sr = 16000  # HuBERT input sampling rate
-        self.window = 160  # samples per frame
-        self.t_pad = self.sr * self.x_pad  # padding added before and after each segment
-        self.t_pad_tgt = tgt_sr * self.x_pad
-        self.t_pad2 = self.t_pad * 2
-        self.t_query = self.sr * self.x_query  # search window around each candidate cut point
-        self.t_center = self.sr * self.x_center  # spacing of candidate cut points
-        self.t_max = self.sr * self.x_max  # below this length, skip the cut-point search
- self.device = config.device
-
- def get_f0(self, x, p_len, f0_up_key, f0_method, inp_f0=None):
- time_step = self.window / self.sr * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- if f0_method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif f0_method == "harvest":
- f0, t = pyworld.harvest(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # f0 points per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0]
- f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[
- :shape
- ]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
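-        # Map f0 to the mel scale and quantize to coarse bins 1-255 for the
-        # model input (unvoiced frames, where f0 == 0, end up in bin 1).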
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-        f0_coarse = np.rint(f0_mel).astype(int)
- return f0_coarse, f0bak # 1-0
-
- def vc(
- self,
- model,
- net_g,
- sid,
- audio0,
- pitch,
- pitchf,
- times,
- index,
- big_npy,
- index_rate,
- ): # ,file_index,file_big_npy
- feats = torch.from_numpy(audio0)
- if self.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
- inputs = {
- "source": feats.to(self.device),
- "padding_mask": padding_mask,
- "output_layer": 9, # layer 9
- }
- t0 = ttime()
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = model.final_proj(logits[0])
-
-        if (
-            index is not None
-            and big_npy is not None
-            and index_rate != 0
-        ):
- npy = feats[0].cpu().numpy()
- if self.is_half:
- npy = npy.astype("float32")
-
- # _, I = index.search(npy, 1)
- # npy = big_npy[I.squeeze()]
-
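-            # Retrieve the 8 nearest training features from the faiss index and
-            # blend them with weights derived from the inverse search distances;
-            # index_rate below mixes the result back into the HuBERT features.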
- score, ix = index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
-
- if self.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
- + (1 - index_rate) * feats
- )
-
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- t1 = ttime()
- p_len = audio0.shape[0] // self.window
- if feats.shape[1] < p_len:
- p_len = feats.shape[1]
-        if pitch is not None and pitchf is not None:
- pitch = pitch[:, :p_len]
- pitchf = pitchf[:, :p_len]
- p_len = torch.tensor([p_len], device=self.device).long()
- with torch.no_grad():
-            if pitch is not None and pitchf is not None:
- audio1 = (
- (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] * 32768)
- .data.cpu()
- .float()
- .numpy()
- .astype(np.int16)
- )
- else:
- audio1 = (
- (net_g.infer(feats, p_len, sid)[0][0, 0] * 32768)
- .data.cpu()
- .float()
- .numpy()
- .astype(np.int16)
- )
- del feats, p_len, padding_mask
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- t2 = ttime()
- times[0] += t1 - t0
- times[2] += t2 - t1
- return audio1
-
- def pipeline(
- self,
- model,
- net_g,
- sid,
- audio,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- f0_file=None,
- ):
- if (
- file_index != ""
- # and file_big_npy != ""
- # and os.path.exists(file_big_npy) == True
-            and os.path.exists(file_index)
- and index_rate != 0
- ):
- try:
- index = faiss.read_index(file_index)
- # big_npy = np.load(file_big_npy)
- big_npy = index.reconstruct_n(0, index.ntotal)
-            except Exception:
- traceback.print_exc()
- index = big_npy = None
- else:
- index = big_npy = None
- audio = signal.filtfilt(bh, ah, audio)
- audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
- opt_ts = []
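-        # For long inputs, pick a cut point near every t_center: the sample
-        # with the smallest windowed amplitude sum within +-t_query of it, so
-        # the audio can be converted in chunks and concatenated cleanly.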
- if audio_pad.shape[0] > self.t_max:
- audio_sum = np.zeros_like(audio)
- for i in range(self.window):
- audio_sum += audio_pad[i : i - self.window]
- for t in range(self.t_center, audio.shape[0], self.t_center):
- opt_ts.append(
- t
- - self.t_query
- + np.where(
- np.abs(audio_sum[t - self.t_query : t + self.t_query])
- == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
- )[0][0]
- )
- s = 0
- audio_opt = []
- t = None
- t1 = ttime()
- audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
- p_len = audio_pad.shape[0] // self.window
- inp_f0 = None
-        if hasattr(f0_file, "name"):
- try:
- with open(f0_file.name, "r") as f:
- lines = f.read().strip("\n").split("\n")
- inp_f0 = []
- for line in lines:
- inp_f0.append([float(i) for i in line.split(",")])
- inp_f0 = np.array(inp_f0, dtype="float32")
-            except Exception:
- traceback.print_exc()
- sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
- pitch, pitchf = None, None
- if if_f0 == 1:
- pitch, pitchf = self.get_f0(audio_pad, p_len, f0_up_key, f0_method, inp_f0)
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
- pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
- t2 = ttime()
- times[1] += t2 - t1
- for t in opt_ts:
- t = t // self.window * self.window
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- pitch[:, s // self.window : (t + self.t_pad2) // self.window],
- pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- s = t
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- pitch[:, t // self.window :] if t is not None else pitch,
- pitchf[:, t // self.window :] if t is not None else pitchf,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- audio_opt = np.concatenate(audio_opt)
- del pitch, pitchf, sid
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return audio_opt
diff --git a/spaces/NeoonN/Video_whisper/app.py b/spaces/NeoonN/Video_whisper/app.py
deleted file mode 100644
index fc4f1a2032ff8f11d510c96a3d3d5e6c9ee7b144..0000000000000000000000000000000000000000
--- a/spaces/NeoonN/Video_whisper/app.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-from pytube import YouTube
-
-pipe = pipeline(model="NeoonN/ID2223_Lab2_Whisper")
-
-def transcribe(audio, url):
- """
- Transcribes a YouTube video if a url is specified and returns the transcription.
-    If no url is specified, it transcribes the audio file as passed by Gradio.
- :param audio: Audio file as passed by Gradio. Only used if no url is specified.
- :param url: YouTube URL to transcribe.
- """
- if url:
-        video = YouTube(url).streams.filter(only_audio=True).all()
-        audio = video[0].download()
- text = pipe(audio)["text"]
- return text
-
- else:
- text = pipe(audio)["text"]
- return text
-
-iface = gr.Interface(
- fn=transcribe,
- inputs=[
- gr.Audio(source="microphone", type="filepath", label="Transcribe from Microphone"),
- gr.Text(max_lines=1, placeholder="Enter YouTube Link with Chinese speech to be transcribed", label="Transcribe from YouTube URL"),
- ],
- outputs="text",
- title="Whisper Small Chinese",
- description="Realtime demo for Chinese speech recognition using a fine-tuned Whisper small model.",
-)
-
-iface.launch()
\ No newline at end of file
diff --git a/spaces/Nephele/bert-vits2-multi-voice/train_ms.py b/spaces/Nephele/bert-vits2-multi-voice/train_ms.py
deleted file mode 100644
index 5d109003d40497ea4493e7c73f47c1eb7370a81e..0000000000000000000000000000000000000000
--- a/spaces/Nephele/bert-vits2-multi-voice/train_ms.py
+++ /dev/null
@@ -1,402 +0,0 @@
-import os
-import json
-import argparse
-import itertools
-import math
-import torch
-import shutil
-from torch import nn, optim
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-import torch.multiprocessing as mp
-import torch.distributed as dist
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.cuda.amp import autocast, GradScaler
-from tqdm import tqdm
-import logging
-logging.getLogger('numba').setLevel(logging.WARNING)
-import commons
-import utils
-from data_utils import (
- TextAudioSpeakerLoader,
- TextAudioSpeakerCollate,
- DistributedBucketSampler
-)
-from models import (
- SynthesizerTrn,
- MultiPeriodDiscriminator,
- DurationDiscriminator,
-)
-from losses import (
- generator_loss,
- discriminator_loss,
- feature_loss,
- kl_loss
-)
-from mel_processing import mel_spectrogram_torch, spec_to_mel_torch
-from text.symbols import symbols
-
-torch.backends.cudnn.benchmark = True
-torch.backends.cuda.matmul.allow_tf32 = True
-torch.backends.cudnn.allow_tf32 = True
-torch.set_float32_matmul_precision('medium')
-global_step = 0
-
-
-def main():
- """Assume Single Node Multi GPUs Training Only"""
- assert torch.cuda.is_available(), "CPU training is not allowed."
-
- n_gpus = torch.cuda.device_count()
- os.environ['MASTER_ADDR'] = 'localhost'
- os.environ['MASTER_PORT'] = '65280'
-
- hps = utils.get_hparams()
- if not hps.cont:
- shutil.copy('./pretrained_models/D_0.pth','./logs/OUTPUT_MODEL/D_0.pth')
- shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth')
- shutil.copy('./pretrained_models/DUR_0.pth','./logs/OUTPUT_MODEL/DUR_0.pth')
- mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
-
-
-def run(rank, n_gpus, hps):
- global global_step
- if rank == 0:
- logger = utils.get_logger(hps.model_dir)
- logger.info(hps)
- utils.check_git_hash(hps.model_dir)
- writer = SummaryWriter(log_dir=hps.model_dir)
- writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
-
-    dist.init_process_group(backend='gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank)
- torch.manual_seed(hps.train.seed)
- torch.cuda.set_device(rank)
-
- train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data)
- train_sampler = DistributedBucketSampler(
- train_dataset,
- hps.train.batch_size,
- [32, 300, 400, 500, 600, 700, 800, 900, 1000],
- num_replicas=n_gpus,
- rank=rank,
- shuffle=True)
- collate_fn = TextAudioSpeakerCollate()
- train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True,
- collate_fn=collate_fn, batch_sampler=train_sampler)
- if rank == 0:
- eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data)
- eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False,
- batch_size=1, pin_memory=True,
- drop_last=False, collate_fn=collate_fn)
- if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True:
- print("Using noise scaled MAS for VITS2")
- use_noise_scaled_mas = True
- mas_noise_scale_initial = 0.01
- noise_scale_delta = 2e-6
- else:
- print("Using normal MAS for VITS1")
- use_noise_scaled_mas = False
- mas_noise_scale_initial = 0.0
- noise_scale_delta = 0.0
- if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True:
- print("Using duration discriminator for VITS2")
- use_duration_discriminator = True
- net_dur_disc = DurationDiscriminator(
- hps.model.hidden_channels,
- hps.model.hidden_channels,
- 3,
- 0.1,
- gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0,
- ).cuda(rank)
- if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True:
- if hps.data.n_speakers == 0:
- raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model")
- use_spk_conditioned_encoder = True
- else:
- print("Using normal encoder for VITS1")
- use_spk_conditioned_encoder = False
-
- net_g = SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
-        mas_noise_scale_initial=mas_noise_scale_initial,
-        noise_scale_delta=noise_scale_delta,
- **hps.model).cuda(rank)
-
- freeze_enc = getattr(hps.model, "freeze_enc", False)
- if freeze_enc:
- print("freeze encoder !!!")
- for param in net_g.enc_p.parameters():
- param.requires_grad = False
-
- net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank)
- optim_g = torch.optim.AdamW(
- filter(lambda p: p.requires_grad, net_g.parameters()),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- optim_d = torch.optim.AdamW(
- net_d.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- if net_dur_disc is not None:
- optim_dur_disc = torch.optim.AdamW(
- net_dur_disc.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- else:
- optim_dur_disc = None
- net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True)
- net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True)
- if net_dur_disc is not None:
- net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True)
-
- pretrain_dir = None
- if pretrain_dir is None:
- try:
- if net_dur_disc is not None:
- _, optim_dur_disc, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=not hps.cont)
- _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g,
- optim_g, skip_optimizer=not hps.cont)
- _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d,
- optim_d, skip_optimizer=not hps.cont)
-
- epoch_str = max(epoch_str, 1)
- global_step = (epoch_str - 1) * len(train_loader)
- except Exception as e:
- print(e)
- epoch_str = 1
- global_step = 0
- else:
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g,
- optim_g, True)
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d,
- optim_d, True)
-
-
-
- scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
- scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
- if net_dur_disc is not None:
- scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str-2)
- else:
- scheduler_dur_disc = None
- scaler = GradScaler(enabled=hps.train.fp16_run)
-
- for epoch in range(epoch_str, hps.train.epochs + 1):
- if rank == 0:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
- else:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None)
- scheduler_g.step()
- scheduler_d.step()
- if net_dur_disc is not None:
- scheduler_dur_disc.step()
-
-
-def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers):
- net_g, net_d, net_dur_disc = nets
- optim_g, optim_d, optim_dur_disc = optims
- scheduler_g, scheduler_d, scheduler_dur_disc = schedulers
- train_loader, eval_loader = loaders
- if writers is not None:
- writer, writer_eval = writers
-
- train_loader.batch_sampler.set_epoch(epoch)
- global global_step
-
- net_g.train()
- net_d.train()
- if net_dur_disc is not None:
- net_dur_disc.train()
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)):
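-        # Linearly anneal the MAS noise scale toward zero over training steps
-        # (noise-scaled MAS from VITS2), clamped at 0.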
- if net_g.module.use_noise_scaled_mas:
- current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step
- net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0)
- x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True)
- spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True)
- y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True)
- speakers = speakers.cuda(rank, non_blocking=True)
- tone = tone.cuda(rank, non_blocking=True)
- language = language.cuda(rank, non_blocking=True)
- bert = bert.cuda(rank, non_blocking=True)
-
- with autocast(enabled=hps.train.fp16_run):
- y_hat, l_length, attn, ids_slice, x_mask, z_mask, \
- (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert)
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
-
- y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice
-
- # Discriminator
- y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach())
- with autocast(enabled=False):
- loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g)
- loss_disc_all = loss_disc
- if net_dur_disc is not None:
- y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach())
- with autocast(enabled=False):
-                    # TODO: the duration loss should probably be averaged using the mask; for now it is averaged over all positions
- loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g)
- loss_dur_disc_all = loss_dur_disc
- optim_dur_disc.zero_grad()
- scaler.scale(loss_dur_disc_all).backward()
- scaler.unscale_(optim_dur_disc)
- grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None)
- scaler.step(optim_dur_disc)
-
- optim_d.zero_grad()
- scaler.scale(loss_disc_all).backward()
- scaler.unscale_(optim_d)
- grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
- scaler.step(optim_d)
-
- with autocast(enabled=hps.train.fp16_run):
- # Generator
- y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat)
- if net_dur_disc is not None:
- y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_)
- with autocast(enabled=False):
- loss_dur = torch.sum(l_length.float())
- loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
- loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
-
- loss_fm = feature_loss(fmap_r, fmap_g)
- loss_gen, losses_gen = generator_loss(y_d_hat_g)
- loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl
- if net_dur_disc is not None:
- loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g)
- loss_gen_all += loss_dur_gen
- optim_g.zero_grad()
- scaler.scale(loss_gen_all).backward()
- scaler.unscale_(optim_g)
- grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
- scaler.step(optim_g)
- scaler.update()
-
- if rank == 0:
- if global_step % hps.train.log_interval == 0:
- lr = optim_g.param_groups[0]['lr']
- losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl]
- logger.info('Train Epoch: {} [{:.0f}%]'.format(
- epoch,
- 100. * batch_idx / len(train_loader)))
- logger.info([x.item() for x in losses] + [global_step, lr])
-
- scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr,
- "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g}
- scalar_dict.update(
- {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl})
- scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)})
- scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)})
- scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)})
-
- image_dict = {
- "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()),
- "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()),
- "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()),
- "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy())
- }
- utils.summarize(
- writer=writer,
- global_step=global_step,
- images=image_dict,
- scalars=scalar_dict)
-
- if global_step % hps.train.eval_interval == 0:
- evaluate(hps, net_g, eval_loader, writer_eval)
- utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch,
- os.path.join(hps.model_dir, "G_{}.pth".format(global_step)))
- utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch,
- os.path.join(hps.model_dir, "D_{}.pth".format(global_step)))
- if net_dur_disc is not None:
- utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step)))
- keep_ckpts = getattr(hps.train, 'keep_ckpts', 5)
- if keep_ckpts > 0:
- utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True)
-
-
- global_step += 1
-
- if rank == 0:
- logger.info('====> Epoch: {}'.format(epoch))
-
-
-
-def evaluate(hps, generator, eval_loader, writer_eval):
- generator.eval()
- image_dict = {}
- audio_dict = {}
- print("Evaluating ...")
- with torch.no_grad():
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader):
- x, x_lengths = x.cuda(), x_lengths.cuda()
- spec, spec_lengths = spec.cuda(), spec_lengths.cuda()
- y, y_lengths = y.cuda(), y_lengths.cuda()
- speakers = speakers.cuda()
- bert = bert.cuda()
- tone = tone.cuda()
- language = language.cuda()
- for use_sdp in [True, False]:
- y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0)
- y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length
-
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1).float(),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
- image_dict.update({
- f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy())
- })
- audio_dict.update({
- f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]]
- })
- image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())})
- audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]})
-
- utils.summarize(
- writer=writer_eval,
- global_step=global_step,
- images=image_dict,
- audios=audio_dict,
- audio_sampling_rate=hps.data.sampling_rate
- )
- generator.train()
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/OAOA/DifFace/basicsr/archs/dfdnet_util.py b/spaces/OAOA/DifFace/basicsr/archs/dfdnet_util.py
deleted file mode 100644
index b4dc0ff738c76852e830b32fffbe65bffb5ddf50..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/basicsr/archs/dfdnet_util.py
+++ /dev/null
@@ -1,162 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.autograd import Function
-from torch.nn.utils.spectral_norm import spectral_norm
-
-
-class BlurFunctionBackward(Function):
-
- @staticmethod
- def forward(ctx, grad_output, kernel, kernel_flip):
- ctx.save_for_backward(kernel, kernel_flip)
- grad_input = F.conv2d(grad_output, kernel_flip, padding=1, groups=grad_output.shape[1])
- return grad_input
-
- @staticmethod
- def backward(ctx, gradgrad_output):
- kernel, _ = ctx.saved_tensors
- grad_input = F.conv2d(gradgrad_output, kernel, padding=1, groups=gradgrad_output.shape[1])
- return grad_input, None, None
-
-
-class BlurFunction(Function):
-
- @staticmethod
- def forward(ctx, x, kernel, kernel_flip):
- ctx.save_for_backward(kernel, kernel_flip)
- output = F.conv2d(x, kernel, padding=1, groups=x.shape[1])
- return output
-
- @staticmethod
- def backward(ctx, grad_output):
- kernel, kernel_flip = ctx.saved_tensors
- grad_input = BlurFunctionBackward.apply(grad_output, kernel, kernel_flip)
- return grad_input, None, None
-
-
-blur = BlurFunction.apply
-
-
-class Blur(nn.Module):
-
- def __init__(self, channel):
- super().__init__()
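-        # 3x3 binomial (Gaussian-like) kernel, normalized to sum to 1; the
-        # flipped copy is used for the gradient computation in BlurFunction.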
- kernel = torch.tensor([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=torch.float32)
- kernel = kernel.view(1, 1, 3, 3)
- kernel = kernel / kernel.sum()
- kernel_flip = torch.flip(kernel, [2, 3])
-
- self.kernel = kernel.repeat(channel, 1, 1, 1)
- self.kernel_flip = kernel_flip.repeat(channel, 1, 1, 1)
-
- def forward(self, x):
- return blur(x, self.kernel.type_as(x), self.kernel_flip.type_as(x))
-
-
-def calc_mean_std(feat, eps=1e-5):
- """Calculate mean and std for adaptive_instance_normalization.
-
- Args:
- feat (Tensor): 4D tensor.
- eps (float): A small value added to the variance to avoid
- divide-by-zero. Default: 1e-5.
- """
- size = feat.size()
- assert len(size) == 4, 'The input feature should be 4D tensor.'
- n, c = size[:2]
- feat_var = feat.view(n, c, -1).var(dim=2) + eps
- feat_std = feat_var.sqrt().view(n, c, 1, 1)
- feat_mean = feat.view(n, c, -1).mean(dim=2).view(n, c, 1, 1)
- return feat_mean, feat_std
-
-
-def adaptive_instance_normalization(content_feat, style_feat):
- """Adaptive instance normalization.
-
-    Adjust the reference features to have a similar color and illumination
-    as those in the degraded features.
-
- Args:
- content_feat (Tensor): The reference feature.
-        style_feat (Tensor): The degraded features.
- """
- size = content_feat.size()
- style_mean, style_std = calc_mean_std(style_feat)
- content_mean, content_std = calc_mean_std(content_feat)
- normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(size)
- return normalized_feat * style_std.expand(size) + style_mean.expand(size)
-
-
-def AttentionBlock(in_channel):
- return nn.Sequential(
- spectral_norm(nn.Conv2d(in_channel, in_channel, 3, 1, 1)), nn.LeakyReLU(0.2, True),
- spectral_norm(nn.Conv2d(in_channel, in_channel, 3, 1, 1)))
-
-
-def conv_block(in_channels, out_channels, kernel_size=3, stride=1, dilation=1, bias=True):
- """Conv block used in MSDilationBlock."""
-
- return nn.Sequential(
- spectral_norm(
- nn.Conv2d(
- in_channels,
- out_channels,
- kernel_size=kernel_size,
- stride=stride,
- dilation=dilation,
- padding=((kernel_size - 1) // 2) * dilation,
- bias=bias)),
- nn.LeakyReLU(0.2),
- spectral_norm(
- nn.Conv2d(
- out_channels,
- out_channels,
- kernel_size=kernel_size,
- stride=stride,
- dilation=dilation,
- padding=((kernel_size - 1) // 2) * dilation,
- bias=bias)),
- )
-
-
-class MSDilationBlock(nn.Module):
- """Multi-scale dilation block."""
-
- def __init__(self, in_channels, kernel_size=3, dilation=(1, 1, 1, 1), bias=True):
- super(MSDilationBlock, self).__init__()
-
- self.conv_blocks = nn.ModuleList()
- for i in range(4):
- self.conv_blocks.append(conv_block(in_channels, in_channels, kernel_size, dilation=dilation[i], bias=bias))
- self.conv_fusion = spectral_norm(
- nn.Conv2d(
- in_channels * 4,
- in_channels,
- kernel_size=kernel_size,
- stride=1,
- padding=(kernel_size - 1) // 2,
- bias=bias))
-
- def forward(self, x):
- out = []
- for i in range(4):
- out.append(self.conv_blocks[i](x))
- out = torch.cat(out, 1)
- out = self.conv_fusion(out) + x
- return out
-
-
-class UpResBlock(nn.Module):
-
- def __init__(self, in_channel):
- super(UpResBlock, self).__init__()
- self.body = nn.Sequential(
- nn.Conv2d(in_channel, in_channel, 3, 1, 1),
- nn.LeakyReLU(0.2, True),
- nn.Conv2d(in_channel, in_channel, 3, 1, 1),
- )
-
- def forward(self, x):
- out = x + self.body(x)
- return out
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/translation/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/translation/README.md
deleted file mode 100644
index 2941f5eb8482dab61dca5eca27a71abd7ee5bf5c..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/translation/README.md
+++ /dev/null
@@ -1,301 +0,0 @@
-# Neural Machine Translation
-
-This README contains instructions for [using pretrained translation models](#example-usage-torchhub)
-as well as [training new models](#training-a-new-model).
-
-## Pre-trained models
-
-Model | Description | Dataset | Download
----|---|---|---
-`conv.wmt14.en-fr` | Convolutional ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2) newstest2014: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2) newstest2012/2013: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.ntst1213.tar.bz2)
-`conv.wmt14.en-de` | Convolutional ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-German](http://statmt.org/wmt14/translation-task.html#Download) | model: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-de.fconv-py.tar.bz2) newstest2014: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-de.newstest2014.tar.bz2)
-`conv.wmt17.en-de` | Convolutional ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT17 English-German](http://statmt.org/wmt17/translation-task.html#Download) | model: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt17.v2.en-de.fconv-py.tar.bz2) newstest2014: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.v2.en-de.newstest2014.tar.bz2)
-`transformer.wmt14.en-fr` | Transformer ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-fr.joined-dict.transformer.tar.bz2) newstest2014: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2)
-`transformer.wmt16.en-de` | Transformer ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt16.en-de.joined-dict.transformer.tar.bz2) newstest2014: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`transformer.wmt18.en-de` | Transformer ([Edunov et al., 2018](https://arxiv.org/abs/1808.09381)) WMT'18 winner | [WMT'18 English-German](http://www.statmt.org/wmt18/translation-task.html) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt18.en-de.ensemble.tar.gz) See NOTE in the archive
-`transformer.wmt19.en-de` | Transformer ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) WMT'19 winner | [WMT'19 English-German](http://www.statmt.org/wmt19/translation-task.html) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.ensemble.tar.gz)
-`transformer.wmt19.de-en` | Transformer ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) WMT'19 winner | [WMT'19 German-English](http://www.statmt.org/wmt19/translation-task.html) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.joined-dict.ensemble.tar.gz)
-`transformer.wmt19.en-ru` | Transformer ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) WMT'19 winner | [WMT'19 English-Russian](http://www.statmt.org/wmt19/translation-task.html) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ensemble.tar.gz)
-`transformer.wmt19.ru-en` | Transformer ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) WMT'19 winner | [WMT'19 Russian-English](http://www.statmt.org/wmt19/translation-task.html) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.ensemble.tar.gz)
-
-## Example usage (torch.hub)
-
-We require a few additional Python dependencies for preprocessing:
-```bash
-pip install fastBPE sacremoses subword_nmt
-```
-
-Interactive translation via PyTorch Hub:
-```python
-import torch
-
-# List available models
-torch.hub.list('pytorch/fairseq') # [..., 'transformer.wmt16.en-de', ... ]
-
-# Load a transformer trained on WMT'16 En-De
-# Note: WMT'19 models use fastBPE instead of subword_nmt, see instructions below
-en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt16.en-de',
- tokenizer='moses', bpe='subword_nmt')
-en2de.eval() # disable dropout
-
-# The underlying model is available under the *models* attribute
-import fairseq  # importable here because torch.hub.load already loaded it
-assert isinstance(en2de.models[0], fairseq.models.transformer.TransformerModel)
-
-# Move model to GPU for faster translation
-en2de.cuda()
-
-# Translate a sentence
-en2de.translate('Hello world!')
-# 'Hallo Welt!'
-
-# Batched translation
-en2de.translate(['Hello world!', 'The cat sat on the mat.'])
-# ['Hallo Welt!', 'Die Katze saß auf der Matte.']
-```
-
-Loading custom models:
-```python
-from fairseq.models.transformer import TransformerModel
-zh2en = TransformerModel.from_pretrained(
- '/path/to/checkpoints',
- checkpoint_file='checkpoint_best.pt',
- data_name_or_path='data-bin/wmt17_zh_en_full',
- bpe='subword_nmt',
- bpe_codes='data-bin/wmt17_zh_en_full/zh.code'
-)
-zh2en.translate('你好 世界')
-# 'Hello World'
-```
-
-If you are using one of the `transformer.wmt19` models, you will need to set the `bpe`
-argument to `'fastbpe'` and (optionally) load the 4-model ensemble:
-```python
-en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de',
- checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt',
- tokenizer='moses', bpe='fastbpe')
-en2de.eval() # disable dropout
-```
-
-## Example usage (CLI tools)
-
-Generation with the binarized test sets can be run in batch mode as follows, e.g. for WMT 2014 English-French on a GTX-1080ti:
-```bash
-mkdir -p data-bin
-curl https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2 | tar xvjf - -C data-bin
-curl https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2 | tar xvjf - -C data-bin
-fairseq-generate data-bin/wmt14.en-fr.newstest2014 \
- --path data-bin/wmt14.en-fr.fconv-py/model.pt \
- --beam 5 --batch-size 128 --remove-bpe | tee /tmp/gen.out
-# ...
-# | Translated 3003 sentences (96311 tokens) in 166.0s (580.04 tokens/s)
-# | Generate test with beam=5: BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787)
-
-# Compute BLEU score
-grep ^H /tmp/gen.out | cut -f3- > /tmp/gen.out.sys
-grep ^T /tmp/gen.out | cut -f2- > /tmp/gen.out.ref
-fairseq-score --sys /tmp/gen.out.sys --ref /tmp/gen.out.ref
-# BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787)
-```
-
-## Training a new model
-
-### IWSLT'14 German to English (Transformer)
-
-The following instructions can be used to train a Transformer model on the [IWSLT'14 German to English dataset](http://workshop2014.iwslt.org/downloads/proceeding.pdf).
-
-First download and preprocess the data:
-```bash
-# Download and prepare the data
-cd examples/translation/
-bash prepare-iwslt14.sh
-cd ../..
-
-# Preprocess/binarize the data
-TEXT=examples/translation/iwslt14.tokenized.de-en
-fairseq-preprocess --source-lang de --target-lang en \
- --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
- --destdir data-bin/iwslt14.tokenized.de-en \
- --workers 20
-```
-
-Next we'll train a Transformer translation model over this data:
-```bash
-CUDA_VISIBLE_DEVICES=0 fairseq-train \
- data-bin/iwslt14.tokenized.de-en \
- --arch transformer_iwslt_de_en --share-decoder-input-output-embed \
- --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
- --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
- --dropout 0.3 --weight-decay 0.0001 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --max-tokens 4096 \
- --eval-bleu \
- --eval-bleu-args '{"beam": 5, "max_len_a": 1.2, "max_len_b": 10}' \
- --eval-bleu-detok moses \
- --eval-bleu-remove-bpe \
- --eval-bleu-print-samples \
- --best-checkpoint-metric bleu --maximize-best-checkpoint-metric
-```
-
-Finally we can evaluate our trained model:
-```bash
-fairseq-generate data-bin/iwslt14.tokenized.de-en \
- --path checkpoints/checkpoint_best.pt \
- --batch-size 128 --beam 5 --remove-bpe
-```
-
-### WMT'14 English to German (Convolutional)
-
-The following instructions can be used to train a Convolutional translation model on the WMT English to German dataset.
-See the [Scaling NMT README](../scaling_nmt/README.md) for instructions to train a Transformer translation model on this data.
-
-The WMT English to German dataset can be preprocessed using the `prepare-wmt14en2de.sh` script.
-By default it will produce a dataset that was modeled after [Attention Is All You Need (Vaswani et al., 2017)](https://arxiv.org/abs/1706.03762), but with additional news-commentary-v12 data from WMT'17.
-
-To use only data available in WMT'14 or to replicate results obtained in the original [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](https://arxiv.org/abs/1705.03122) paper, please use the `--icml17` option.
-
-```bash
-# Download and prepare the data
-cd examples/translation/
-# WMT'17 data:
-bash prepare-wmt14en2de.sh
-# or to use WMT'14 data:
-# bash prepare-wmt14en2de.sh --icml17
-cd ../..
-
-# Binarize the dataset
-TEXT=examples/translation/wmt17_en_de
-fairseq-preprocess \
- --source-lang en --target-lang de \
- --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
- --destdir data-bin/wmt17_en_de --thresholdtgt 0 --thresholdsrc 0 \
- --workers 20
-
-# Train the model
-mkdir -p checkpoints/fconv_wmt_en_de
-fairseq-train \
- data-bin/wmt17_en_de \
- --arch fconv_wmt_en_de \
- --dropout 0.2 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --optimizer nag --clip-norm 0.1 \
- --lr 0.5 --lr-scheduler fixed --force-anneal 50 \
- --max-tokens 4000 \
- --save-dir checkpoints/fconv_wmt_en_de
-
-# Evaluate
-fairseq-generate data-bin/wmt17_en_de \
- --path checkpoints/fconv_wmt_en_de/checkpoint_best.pt \
- --beam 5 --remove-bpe
-```
-
-### WMT'14 English to French
-```bash
-# Download and prepare the data
-cd examples/translation/
-bash prepare-wmt14en2fr.sh
-cd ../..
-
-# Binarize the dataset
-TEXT=examples/translation/wmt14_en_fr
-fairseq-preprocess \
- --source-lang en --target-lang fr \
- --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
- --destdir data-bin/wmt14_en_fr --thresholdtgt 0 --thresholdsrc 0 \
- --workers 60
-
-# Train the model
-mkdir -p checkpoints/fconv_wmt_en_fr
-fairseq-train \
- data-bin/wmt14_en_fr \
- --arch fconv_wmt_en_fr \
- --dropout 0.1 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --optimizer nag --clip-norm 0.1 \
- --lr 0.5 --lr-scheduler fixed --force-anneal 50 \
- --max-tokens 3000 \
- --save-dir checkpoints/fconv_wmt_en_fr
-
-# Evaluate
-fairseq-generate \
- data-bin/fconv_wmt_en_fr \
- --path checkpoints/fconv_wmt_en_fr/checkpoint_best.pt \
- --beam 5 --remove-bpe
-```
-
-## Multilingual Translation
-
-We also support training multilingual translation models. In this example we'll
-train a multilingual `{de,fr}-en` translation model using the IWSLT'17 datasets.
-
-Note that we use slightly different preprocessing here than for the IWSLT'14
-En-De data above. In particular we learn a joint BPE code for all three
-languages and use fairseq-interactive and sacrebleu for scoring the test set.
-
-```bash
-# First install sacrebleu and sentencepiece
-pip install sacrebleu sentencepiece
-
-# Then download and preprocess the data
-cd examples/translation/
-bash prepare-iwslt17-multilingual.sh
-cd ../..
-
-# Binarize the de-en dataset
-TEXT=examples/translation/iwslt17.de_fr.en.bpe16k
-fairseq-preprocess --source-lang de --target-lang en \
- --trainpref $TEXT/train.bpe.de-en \
- --validpref $TEXT/valid0.bpe.de-en,$TEXT/valid1.bpe.de-en,$TEXT/valid2.bpe.de-en,$TEXT/valid3.bpe.de-en,$TEXT/valid4.bpe.de-en,$TEXT/valid5.bpe.de-en \
- --destdir data-bin/iwslt17.de_fr.en.bpe16k \
- --workers 10
-
-# Binarize the fr-en dataset
-# NOTE: it's important to reuse the en dictionary from the previous step
-fairseq-preprocess --source-lang fr --target-lang en \
- --trainpref $TEXT/train.bpe.fr-en \
- --validpref $TEXT/valid0.bpe.fr-en,$TEXT/valid1.bpe.fr-en,$TEXT/valid2.bpe.fr-en,$TEXT/valid3.bpe.fr-en,$TEXT/valid4.bpe.fr-en,$TEXT/valid5.bpe.fr-en \
- --tgtdict data-bin/iwslt17.de_fr.en.bpe16k/dict.en.txt \
- --destdir data-bin/iwslt17.de_fr.en.bpe16k \
- --workers 10
-
-# Train a multilingual transformer model
-# NOTE: the command below assumes 1 GPU, but accumulates gradients from
-# 8 fwd/bwd passes to simulate training on 8 GPUs
-mkdir -p checkpoints/multilingual_transformer
-CUDA_VISIBLE_DEVICES=0 fairseq-train data-bin/iwslt17.de_fr.en.bpe16k/ \
- --max-epoch 50 \
- --ddp-backend=legacy_ddp \
- --task multilingual_translation --lang-pairs de-en,fr-en \
- --arch multilingual_transformer_iwslt_de_en \
- --share-decoders --share-decoder-input-output-embed \
- --optimizer adam --adam-betas '(0.9, 0.98)' \
- --lr 0.0005 --lr-scheduler inverse_sqrt \
- --warmup-updates 4000 --warmup-init-lr '1e-07' \
- --label-smoothing 0.1 --criterion label_smoothed_cross_entropy \
- --dropout 0.3 --weight-decay 0.0001 \
- --save-dir checkpoints/multilingual_transformer \
- --max-tokens 4000 \
- --update-freq 8
-
-# Generate and score the test set with sacrebleu
-SRC=de
-sacrebleu --test-set iwslt17 --language-pair ${SRC}-en --echo src \
- | python scripts/spm_encode.py --model examples/translation/iwslt17.de_fr.en.bpe16k/sentencepiece.bpe.model \
- > iwslt17.test.${SRC}-en.${SRC}.bpe
-cat iwslt17.test.${SRC}-en.${SRC}.bpe \
- | fairseq-interactive data-bin/iwslt17.de_fr.en.bpe16k/ \
- --task multilingual_translation --lang-pairs de-en,fr-en \
- --source-lang ${SRC} --target-lang en \
- --path checkpoints/multilingual_transformer/checkpoint_best.pt \
- --buffer-size 2000 --batch-size 128 \
- --beam 5 --remove-bpe=sentencepiece \
- > iwslt17.test.${SRC}-en.en.sys
-grep ^H iwslt17.test.${SRC}-en.en.sys | cut -f3 \
- | sacrebleu --test-set iwslt17 --language-pair ${SRC}-en
-```
-
-##### Argument format during inference
-
-During inference it is required to specify a single `--source-lang` and
-`--target-lang`, which indicates the inference language direction.
-`--lang-pairs`, `--encoder-langtok`, `--decoder-langtok` have to be set to
-the same value as training.
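-
-For example, to translate French input with the model trained above (a minimal
-sketch reusing the paths from this section):
-```bash
-fairseq-interactive data-bin/iwslt17.de_fr.en.bpe16k/ \
-    --task multilingual_translation --lang-pairs de-en,fr-en \
-    --source-lang fr --target-lang en \
-    --path checkpoints/multilingual_transformer/checkpoint_best.pt \
-    --beam 5 --remove-bpe=sentencepiece
-```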
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/noisychannel/rerank_utils.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/noisychannel/rerank_utils.py
deleted file mode 100644
index 2c6bf1b1afbb089cf5e84f720eb7a067479fbcbc..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/noisychannel/rerank_utils.py
+++ /dev/null
@@ -1,850 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-import os
-import re
-import subprocess
-from contextlib import redirect_stdout
-
-from fairseq import options
-from fairseq_cli import eval_lm, preprocess
-
-
-def reprocess(fle):
- # takes in a file of generate.py translation generate_output
- # returns a source dict and hypothesis dict, where keys are the ID num (as a string)
- # and values and the corresponding source and translation. There may be several translations
- # per source, so the values for hypothesis_dict are lists.
- # parses output of generate.py
-
- with open(fle, "r") as f:
- txt = f.read()
-
- """reprocess generate.py output"""
- p = re.compile(r"[STHP][-]\d+\s*")
- hp = re.compile(r"(\s*[-]?\d+[.]?\d+\s*)|(\s*(-inf)\s*)")
- source_dict = {}
- hypothesis_dict = {}
- score_dict = {}
- target_dict = {}
- pos_score_dict = {}
- lines = txt.split("\n")
-
- for line in lines:
- line += "\n"
- prefix = re.search(p, line)
- if prefix is not None:
- assert len(prefix.group()) > 2, "prefix id not found"
- _, j = prefix.span()
- id_num = prefix.group()[2:]
- id_num = int(id_num)
- line_type = prefix.group()[0]
- if line_type == "H":
- h_txt = line[j:]
- hypo = re.search(hp, h_txt)
- assert (
- hypo is not None
- ), "regular expression failed to find the hypothesis scoring"
- _, i = hypo.span()
- score = hypo.group()
- if id_num in hypothesis_dict:
- hypothesis_dict[id_num].append(h_txt[i:])
- score_dict[id_num].append(float(score))
- else:
- hypothesis_dict[id_num] = [h_txt[i:]]
- score_dict[id_num] = [float(score)]
-
- elif line_type == "S":
- source_dict[id_num] = line[j:]
- elif line_type == "T":
- target_dict[id_num] = line[j:]
- elif line_type == "P":
- pos_scores = (line[j:]).split()
- pos_scores = [float(x) for x in pos_scores]
- if id_num in pos_score_dict:
- pos_score_dict[id_num].append(pos_scores)
- else:
- pos_score_dict[id_num] = [pos_scores]
-
- return source_dict, hypothesis_dict, score_dict, target_dict, pos_score_dict
-
-
-def reprocess_nbest(fle):
- """reprocess interactive.py output"""
- with open(fle, "r") as f:
- txt = f.read()
-
- source_dict = {}
- hypothesis_dict = {}
- score_dict = {}
- target_dict = {}
- pos_score_dict = {}
- lines = txt.split("\n")
-
- hp = re.compile(r"[-]?\d+[.]?\d+")
- j = -1
-
- for _i, line in enumerate(lines):
- line += "\n"
- line_type = line[0]
-
- if line_type == "H":
- hypo = re.search(hp, line)
- _, start_index = hypo.span()
- score = hypo.group()
- if j in score_dict:
- score_dict[j].append(float(score))
- hypothesis_dict[j].append(line[start_index:].strip("\t"))
- else:
- score_dict[j] = [float(score)]
- hypothesis_dict[j] = [line[start_index:].strip("\t")]
- elif line_type == "O":
- j += 1
- source_dict[j] = line[2:]
- # we don't have the targets for interactive.py
- target_dict[j] = "filler"
-
- elif line_type == "P":
- pos_scores = [float(pos_score) for pos_score in line.split()[1:]]
- if j in pos_score_dict:
- pos_score_dict[j].append(pos_scores)
- else:
- pos_score_dict[j] = [pos_scores]
-
- assert source_dict.keys() == hypothesis_dict.keys()
- assert source_dict.keys() == pos_score_dict.keys()
- assert source_dict.keys() == score_dict.keys()
-
- return source_dict, hypothesis_dict, score_dict, target_dict, pos_score_dict
-
-
-def write_reprocessed(
- sources,
- hypos,
- targets,
- source_outfile,
- hypo_outfile,
- target_outfile,
- right_to_left=False,
- prefix_len=None,
- bpe_symbol=None,
- target_prefix_frac=None,
- source_prefix_frac=None,
-):
-
- """writes nbest hypothesis for rescoring"""
- assert not (
- prefix_len is not None and target_prefix_frac is not None
- ), "in writing reprocessed, only one type of prefix may be used"
- assert not (
- prefix_len is not None and source_prefix_frac is not None
- ), "in writing reprocessed, only one type of prefix may be used"
- assert not (
- target_prefix_frac is not None and source_prefix_frac is not None
- ), "in writing reprocessed, only one type of prefix may be used"
-
- with open(source_outfile, "w") as source_file, open(
- hypo_outfile, "w"
- ) as hypo_file, open(target_outfile, "w") as target_file:
-
- assert len(sources) == len(hypos), "sources and hypos list length mismatch"
- if right_to_left:
- for i in range(len(sources)):
- for j in range(len(hypos[i])):
- if prefix_len is None:
- hypo_file.write(make_right_to_left(hypos[i][j]) + "\n")
- else:
- raise NotImplementedError()
- source_file.write(make_right_to_left(sources[i]) + "\n")
- target_file.write(make_right_to_left(targets[i]) + "\n")
- else:
- for i in sorted(sources.keys()):
- for j in range(len(hypos[i])):
- if prefix_len is not None:
- shortened = (
- get_prefix_no_bpe(hypos[i][j], bpe_symbol, prefix_len)
- + "\n"
- )
- hypo_file.write(shortened)
- source_file.write(sources[i])
- target_file.write(targets[i])
- elif target_prefix_frac is not None:
- num_words, shortened, num_bpe_tokens = calc_length_from_frac(
- hypos[i][j], target_prefix_frac, bpe_symbol
- )
- shortened += "\n"
- hypo_file.write(shortened)
- source_file.write(sources[i])
- target_file.write(targets[i])
- elif source_prefix_frac is not None:
-                        num_words, shortened, num_bpe_tokens = calc_length_from_frac(
- sources[i], source_prefix_frac, bpe_symbol
- )
- shortened += "\n"
- hypo_file.write(hypos[i][j])
- source_file.write(shortened)
- target_file.write(targets[i])
- else:
- hypo_file.write(hypos[i][j])
- source_file.write(sources[i])
- target_file.write(targets[i])
-
-
-def calc_length_from_frac(bpe_sentence, prefix_frac, bpe_symbol):
- # return number of words, (not bpe tokens) that we want
- no_bpe_sen = remove_bpe(bpe_sentence, bpe_symbol)
- len_sen = len(no_bpe_sen.split())
-
- num_words = math.ceil(len_sen * prefix_frac)
- prefix = get_prefix_no_bpe(bpe_sentence, bpe_symbol, num_words)
- num_bpe_tokens = len(prefix.split())
- return num_words, prefix, num_bpe_tokens
-
-
-def get_prefix(sentence, prefix_len):
- """assuming no bpe, gets the prefix of the sentence with prefix_len words"""
- tokens = sentence.strip("\n").split()
- if prefix_len >= len(tokens):
- return sentence.strip("\n")
- else:
- return " ".join(tokens[:prefix_len])
-
-
-def get_prefix_no_bpe(sentence, bpe_symbol, prefix_len):
- if bpe_symbol is None:
- return get_prefix(sentence, prefix_len)
- else:
- return " ".join(get_prefix_from_len(sentence.split(), bpe_symbol, prefix_len))
-
-
-def get_prefix_from_len(sentence, bpe_symbol, prefix_len):
- """get the prefix of sentence with bpe, with prefix len in terms of words, not bpe tokens"""
- bpe_count = sum([bpe_symbol.strip(" ") in t for t in sentence[:prefix_len]])
- if bpe_count == 0:
- return sentence[:prefix_len]
- else:
- return sentence[:prefix_len] + get_prefix_from_len(
- sentence[prefix_len:], bpe_symbol, bpe_count
- )
-
-
-def get_num_bpe_tokens_from_len(sentence, bpe_symbol, prefix_len):
- """given a prefix length in terms of words, return the number of bpe tokens"""
- prefix = get_prefix_no_bpe(sentence, bpe_symbol, prefix_len)
- assert len(remove_bpe(prefix, bpe_symbol).split()) <= prefix_len
- return len(prefix.split(" "))
-
-
-def make_right_to_left(line):
- tokens = line.split()
- tokens.reverse()
- new_line = " ".join(tokens)
- return new_line
-
-
-def remove_bpe(line, bpe_symbol):
- line = line.replace("\n", "")
- line = (line + " ").replace(bpe_symbol, "").rstrip()
- return line + ("\n")
-
-
-def remove_bpe_dict(pred_dict, bpe_symbol):
- new_dict = {}
- for i in pred_dict:
-        if isinstance(pred_dict[i], list):
- new_list = [remove_bpe(elem, bpe_symbol) for elem in pred_dict[i]]
- new_dict[i] = new_list
- else:
- new_dict[i] = remove_bpe(pred_dict[i], bpe_symbol)
- return new_dict
-
-
-def parse_bleu_scoring(line):
- p = re.compile(r"(BLEU4 = )\d+[.]\d+")
- res = re.search(p, line)
- assert res is not None, line
- return float(res.group()[8:])
-
-
-def get_full_from_prefix(hypo_prefix, hypos):
- """given a hypo prefix, recover the first hypo from the list of complete hypos beginning with that prefix"""
- for hypo in hypos:
- hypo_prefix = hypo_prefix.strip("\n")
- len_prefix = len(hypo_prefix)
- if hypo[:len_prefix] == hypo_prefix:
- return hypo
-    # no match found
-    raise Exception("no complete hypothesis found for the given prefix")
-
-
-def get_score(
- a,
- b,
- c,
- target_len,
- bitext_score1,
- bitext_score2=None,
- lm_score=None,
- lenpen=None,
- src_len=None,
- tgt_len=None,
- bitext1_backwards=False,
- bitext2_backwards=False,
- normalize=False,
-):
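-    # Log-linear combination of the two bitext model scores and the LM score,
-    # each optionally length-normalized, then divided by target_len**lenpen
-    # (the usual beam-search length penalty).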
- if bitext1_backwards:
- bitext1_norm = src_len
- else:
- bitext1_norm = tgt_len
- if bitext_score2 is not None:
- if bitext2_backwards:
- bitext2_norm = src_len
- else:
- bitext2_norm = tgt_len
- else:
- bitext2_norm = 1
- bitext_score2 = 0
- if normalize:
- score = (
- a * bitext_score1 / bitext1_norm
- + b * bitext_score2 / bitext2_norm
- + c * lm_score / src_len
- )
- else:
- score = a * bitext_score1 + b * bitext_score2 + c * lm_score
-
- if lenpen is not None:
- score /= (target_len) ** float(lenpen)
-
- return score
-
-
-class BitextOutput(object):
- def __init__(
- self,
- output_file,
- backwards,
- right_to_left,
- bpe_symbol,
- prefix_len=None,
- target_prefix_frac=None,
- source_prefix_frac=None,
- ):
- """process output from rescoring"""
- source, hypo, score, target, pos_score = reprocess(output_file)
- if backwards:
- self.hypo_fracs = source_prefix_frac
- else:
- self.hypo_fracs = target_prefix_frac
-
- # remove length penalty so we can use raw scores
- score, num_bpe_tokens = get_score_from_pos(
- pos_score, prefix_len, hypo, bpe_symbol, self.hypo_fracs, backwards
- )
- source_lengths = {}
- target_lengths = {}
-
- assert hypo.keys() == source.keys(), "key mismatch"
- if backwards:
- tmp = hypo
- hypo = source
- source = tmp
- for i in source:
- # since we are reranking, there should only be one hypo per source sentence
- if backwards:
- len_src = len(source[i][0].split())
-                # record length without the EOS token
- if len_src == num_bpe_tokens[i][0] - 1:
- source_lengths[i] = num_bpe_tokens[i][0] - 1
- else:
- source_lengths[i] = num_bpe_tokens[i][0]
-
- target_lengths[i] = len(hypo[i].split())
-
- source[i] = remove_bpe(source[i][0], bpe_symbol)
- target[i] = remove_bpe(target[i], bpe_symbol)
- hypo[i] = remove_bpe(hypo[i], bpe_symbol)
-
- score[i] = float(score[i][0])
- pos_score[i] = pos_score[i][0]
-
- else:
- len_tgt = len(hypo[i][0].split())
-                # record length without the EOS token
- if len_tgt == num_bpe_tokens[i][0] - 1:
- target_lengths[i] = num_bpe_tokens[i][0] - 1
- else:
- target_lengths[i] = num_bpe_tokens[i][0]
-
- source_lengths[i] = len(source[i].split())
-
- if right_to_left:
- source[i] = remove_bpe(make_right_to_left(source[i]), bpe_symbol)
- target[i] = remove_bpe(make_right_to_left(target[i]), bpe_symbol)
- hypo[i] = remove_bpe(make_right_to_left(hypo[i][0]), bpe_symbol)
- score[i] = float(score[i][0])
- pos_score[i] = pos_score[i][0]
- else:
- assert (
- len(hypo[i]) == 1
- ), "expected only one hypothesis per source sentence"
- source[i] = remove_bpe(source[i], bpe_symbol)
- target[i] = remove_bpe(target[i], bpe_symbol)
- hypo[i] = remove_bpe(hypo[i][0], bpe_symbol)
- score[i] = float(score[i][0])
- pos_score[i] = pos_score[i][0]
-
- self.rescore_source = source
- self.rescore_hypo = hypo
- self.rescore_score = score
- self.rescore_target = target
- self.rescore_pos_score = pos_score
- self.backwards = backwards
- self.right_to_left = right_to_left
- self.target_lengths = target_lengths
- self.source_lengths = source_lengths
-
-
-class BitextOutputFromGen(object):
- def __init__(
- self,
- predictions_bpe_file,
- bpe_symbol=None,
- nbest=False,
- prefix_len=None,
- target_prefix_frac=None,
- ):
- if nbest:
- (
- pred_source,
- pred_hypo,
- pred_score,
- pred_target,
- pred_pos_score,
- ) = reprocess_nbest(predictions_bpe_file)
- else:
- pred_source, pred_hypo, pred_score, pred_target, pred_pos_score = reprocess(
- predictions_bpe_file
- )
-
- assert len(pred_source) == len(pred_hypo)
- assert len(pred_source) == len(pred_score)
- assert len(pred_source) == len(pred_target)
- assert len(pred_source) == len(pred_pos_score)
-
- # remove length penalty so we can use raw scores
- pred_score, num_bpe_tokens = get_score_from_pos(
- pred_pos_score, prefix_len, pred_hypo, bpe_symbol, target_prefix_frac, False
- )
-
- self.source = pred_source
- self.target = pred_target
- self.score = pred_score
- self.pos_score = pred_pos_score
- self.hypo = pred_hypo
- self.target_lengths = {}
- self.source_lengths = {}
-
- self.no_bpe_source = remove_bpe_dict(pred_source.copy(), bpe_symbol)
- self.no_bpe_hypo = remove_bpe_dict(pred_hypo.copy(), bpe_symbol)
- self.no_bpe_target = remove_bpe_dict(pred_target.copy(), bpe_symbol)
-
- # indexes to match those from the rescoring models
- self.rescore_source = {}
- self.rescore_target = {}
- self.rescore_pos_score = {}
- self.rescore_hypo = {}
- self.rescore_score = {}
- self.num_hypos = {}
- self.backwards = False
- self.right_to_left = False
-
- index = 0
-
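-        # flatten the (sentence i, hypothesis j) n-best structure into one
-        # running index so entries line up with the rescoring models' outputs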
- for i in sorted(pred_source.keys()):
- for j in range(len(pred_hypo[i])):
-
- self.target_lengths[index] = len(self.hypo[i][j].split())
- self.source_lengths[index] = len(self.source[i].split())
-
- self.rescore_source[index] = self.no_bpe_source[i]
- self.rescore_target[index] = self.no_bpe_target[i]
- self.rescore_hypo[index] = self.no_bpe_hypo[i][j]
- self.rescore_score[index] = float(pred_score[i][j])
- self.rescore_pos_score[index] = pred_pos_score[i][j]
- self.num_hypos[index] = len(pred_hypo[i])
- index += 1
-
-
-def get_score_from_pos(
- pos_score_dict, prefix_len, hypo_dict, bpe_symbol, hypo_frac, backwards
-):
- score_dict = {}
- num_bpe_tokens_dict = {}
- assert prefix_len is None or hypo_frac is None
- for key in pos_score_dict:
- score_dict[key] = []
- num_bpe_tokens_dict[key] = []
- for i in range(len(pos_score_dict[key])):
- if prefix_len is not None and not backwards:
- num_bpe_tokens = get_num_bpe_tokens_from_len(
- hypo_dict[key][i], bpe_symbol, prefix_len
- )
- score_dict[key].append(sum(pos_score_dict[key][i][:num_bpe_tokens]))
- num_bpe_tokens_dict[key].append(num_bpe_tokens)
- elif hypo_frac is not None:
- num_words, shortened, hypo_prefix_len = calc_length_from_frac(
- hypo_dict[key][i], hypo_frac, bpe_symbol
- )
- score_dict[key].append(sum(pos_score_dict[key][i][:hypo_prefix_len]))
- num_bpe_tokens_dict[key].append(hypo_prefix_len)
- else:
- score_dict[key].append(sum(pos_score_dict[key][i]))
- num_bpe_tokens_dict[key].append(len(pos_score_dict[key][i]))
- return score_dict, num_bpe_tokens_dict
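-
-# Hypothetical example: with pos_score_dict = {0: [[-1.0, -2.0, -0.5]]} and no
-# prefix truncation, this returns ({0: [-3.5]}, {0: [3]}): the summed raw
-# log-probability per hypothesis and the number of BPE tokens it covers.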
-
-
-class LMOutput(object):
- def __init__(
- self,
- lm_score_file,
- lm_dict=None,
- prefix_len=None,
- bpe_symbol=None,
- target_prefix_frac=None,
- ):
- (
- lm_sentences,
- lm_sen_scores,
- lm_sen_pos_scores,
- lm_no_bpe_sentences,
- lm_bpe_tokens,
- ) = parse_lm(
- lm_score_file,
- prefix_len=prefix_len,
- bpe_symbol=bpe_symbol,
- target_prefix_frac=target_prefix_frac,
- )
-
- self.sentences = lm_sentences
- self.score = lm_sen_scores
- self.pos_score = lm_sen_pos_scores
- self.lm_dict = lm_dict
- self.no_bpe_sentences = lm_no_bpe_sentences
- self.bpe_tokens = lm_bpe_tokens
-
-
-def parse_lm(input_file, prefix_len=None, bpe_symbol=None, target_prefix_frac=None):
- """parse output of eval_lm"""
- with open(input_file, "r") as f:
- text = f.readlines()
- text = text[7:]
- cleaned_text = text[:-2]
-
- sentences = {}
- sen_scores = {}
- sen_pos_scores = {}
- no_bpe_sentences = {}
- num_bpe_tokens_dict = {}
- for _i, line in enumerate(cleaned_text):
- tokens = line.split()
- if tokens[0].isdigit():
- line_id = int(tokens[0])
- scores = [float(x[1:-1]) for x in tokens[2::2]]
- sentences[line_id] = " ".join(tokens[1::2][:-1]) + "\n"
- if bpe_symbol is not None:
- # exclude symbol to match output from generate.py
- bpe_sen = " ".join(tokens[1::2][:-1]) + "\n"
- no_bpe_sen = remove_bpe(bpe_sen, bpe_symbol)
- no_bpe_sentences[line_id] = no_bpe_sen
-
- if prefix_len is not None:
- num_bpe_tokens = get_num_bpe_tokens_from_len(
- bpe_sen, bpe_symbol, prefix_len
- )
- sen_scores[line_id] = sum(scores[:num_bpe_tokens])
- num_bpe_tokens_dict[line_id] = num_bpe_tokens
- elif target_prefix_frac is not None:
- num_words, shortened, target_prefix_len = calc_length_from_frac(
- bpe_sen, target_prefix_frac, bpe_symbol
- )
- sen_scores[line_id] = sum(scores[:target_prefix_len])
- num_bpe_tokens_dict[line_id] = target_prefix_len
- else:
- sen_scores[line_id] = sum(scores)
- num_bpe_tokens_dict[line_id] = len(scores)
-
- sen_pos_scores[line_id] = scores
-
- return sentences, sen_scores, sen_pos_scores, no_bpe_sentences, num_bpe_tokens_dict
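-
-# The parser above expects eval_lm output lines roughly of the form
-# (hypothetical example):
-#   "3 Hello (-1.52) world (-2.10) </s> (-0.71)"
-# which would yield sentences[3] = "Hello world\n" and
-# sen_pos_scores[3] = [-1.52, -2.10, -0.71].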
-
-
-def get_directories(
- data_dir_name,
- num_rescore,
- gen_subset,
- fw_name,
- shard_id,
- num_shards,
- sampling=False,
- prefix_len=None,
- target_prefix_frac=None,
- source_prefix_frac=None,
-):
- nbest_file_id = (
- "nbest_"
- + str(num_rescore)
- + "_subset_"
- + gen_subset
- + "_fw_name_"
- + fw_name
- + "_shard_"
- + str(shard_id)
- + "_of_"
- + str(num_shards)
- )
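-    # e.g. num_rescore=5, gen_subset="test", fw_name="fw", shard_id=0,
-    # num_shards=1 -> "nbest_5_subset_test_fw_name_fw_shard_0_of_1"
-    # (illustrative values)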
-
- if sampling:
- nbest_file_id += "_sampling"
-
- # the directory containing all information for this nbest list
- pre_gen = (
- os.path.join(os.path.dirname(__file__))
- + "/rerank_data/"
- + data_dir_name
- + "/"
- + nbest_file_id
- )
- # the directory to store the preprocessed nbest list, for left to right rescoring
- left_to_right_preprocessed_dir = pre_gen + "/left_to_right_preprocessed"
- if source_prefix_frac is not None:
- left_to_right_preprocessed_dir = (
- left_to_right_preprocessed_dir + "/prefix_frac" + str(source_prefix_frac)
- )
- # the directory to store the preprocessed nbest list, for right to left rescoring
- right_to_left_preprocessed_dir = pre_gen + "/right_to_left_preprocessed"
- # the directory to store the preprocessed nbest list, for backwards rescoring
- backwards_preprocessed_dir = pre_gen + "/backwards"
- if target_prefix_frac is not None:
- backwards_preprocessed_dir = (
- backwards_preprocessed_dir + "/prefix_frac" + str(target_prefix_frac)
- )
- elif prefix_len is not None:
- backwards_preprocessed_dir = (
- backwards_preprocessed_dir + "/prefix_" + str(prefix_len)
- )
-
- # the directory to store the preprocessed nbest list, for rescoring with P(T)
- lm_preprocessed_dir = pre_gen + "/lm_preprocessed"
-
- return (
- pre_gen,
- left_to_right_preprocessed_dir,
- right_to_left_preprocessed_dir,
- backwards_preprocessed_dir,
- lm_preprocessed_dir,
- )
-
-
-def lm_scoring(
- preprocess_directory,
- bpe_status,
- gen_output,
- pre_gen,
- cur_lm_dict,
- cur_lm_name,
- cur_language_model,
- cur_lm_bpe_code,
- batch_size,
- lm_score_file,
- target_lang,
- source_lang,
- prefix_len=None,
-):
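-    # Scores the target side of the n-best list with a language model via
-    # eval_lm. Three BPE setups are handled: the LM uses no BPE ("no bpe"),
-    # shares the translation model's BPE ("shared"), or needs the hypotheses
-    # re-encoded with the LM's own BPE codes ("different").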
- if prefix_len is not None:
- assert (
- bpe_status == "different"
- ), "bpe status must be different to use prefix len"
- if bpe_status == "no bpe":
- # run lm on output without bpe
- write_reprocessed(
- gen_output.no_bpe_source,
- gen_output.no_bpe_hypo,
- gen_output.no_bpe_target,
- pre_gen + "/rescore_data_no_bpe.de",
- pre_gen + "/rescore_data_no_bpe.en",
- pre_gen + "/reference_file_no_bpe",
- )
-
- preprocess_lm_param = [
- "--only-source",
- "--trainpref",
- pre_gen + "/rescore_data_no_bpe." + target_lang,
- "--srcdict",
- cur_lm_dict,
- "--destdir",
- preprocess_directory,
- ]
- preprocess_parser = options.get_preprocessing_parser()
- input_args = preprocess_parser.parse_args(preprocess_lm_param)
- preprocess.main(input_args)
-
- eval_lm_param = [
- preprocess_directory,
- "--path",
- cur_language_model,
- "--output-word-probs",
- "--batch-size",
- str(batch_size),
- "--max-tokens",
- "1024",
- "--sample-break-mode",
- "eos",
- "--gen-subset",
- "train",
- ]
-
- eval_lm_parser = options.get_eval_lm_parser()
- input_args = options.parse_args_and_arch(eval_lm_parser, eval_lm_param)
-
- with open(lm_score_file, "w") as f:
- with redirect_stdout(f):
- eval_lm.main(input_args)
-
- elif bpe_status == "shared":
- preprocess_lm_param = [
- "--only-source",
- "--trainpref",
- pre_gen + "/rescore_data." + target_lang,
- "--srcdict",
- cur_lm_dict,
- "--destdir",
- preprocess_directory,
- ]
- preprocess_parser = options.get_preprocessing_parser()
- input_args = preprocess_parser.parse_args(preprocess_lm_param)
- preprocess.main(input_args)
-
- eval_lm_param = [
- preprocess_directory,
- "--path",
- cur_language_model,
- "--output-word-probs",
- "--batch-size",
- str(batch_size),
- "--sample-break-mode",
- "eos",
- "--gen-subset",
- "train",
- ]
-
- eval_lm_parser = options.get_eval_lm_parser()
- input_args = options.parse_args_and_arch(eval_lm_parser, eval_lm_param)
-
- with open(lm_score_file, "w") as f:
- with redirect_stdout(f):
- eval_lm.main(input_args)
-
- elif bpe_status == "different":
- rescore_file = pre_gen + "/rescore_data_no_bpe"
- rescore_bpe = pre_gen + "/rescore_data_new_bpe"
-
- rescore_file += "."
- rescore_bpe += "."
-
- write_reprocessed(
- gen_output.no_bpe_source,
- gen_output.no_bpe_hypo,
- gen_output.no_bpe_target,
- rescore_file + source_lang,
- rescore_file + target_lang,
- pre_gen + "/reference_file_no_bpe",
- bpe_symbol=None,
- )
-
- # apply LM bpe to nbest list
- bpe_src_param = [
- "-c",
- cur_lm_bpe_code,
- "--input",
- rescore_file + target_lang,
- "--output",
- rescore_bpe + target_lang,
- ]
- subprocess.call(
- [
- "python",
- os.path.join(
- os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py"
- ),
- ]
- + bpe_src_param,
- shell=False,
- )
- # uncomment to use fastbpe instead of subword-nmt bpe
- # bpe_src_param = [rescore_bpe+target_lang, rescore_file+target_lang, cur_lm_bpe_code]
- # subprocess.call(["/private/home/edunov/fastBPE/fast", "applybpe"] + bpe_src_param, shell=False)
-
- preprocess_dir = preprocess_directory
-
- preprocess_lm_param = [
- "--only-source",
- "--trainpref",
- rescore_bpe + target_lang,
- "--srcdict",
- cur_lm_dict,
- "--destdir",
- preprocess_dir,
- ]
- preprocess_parser = options.get_preprocessing_parser()
- input_args = preprocess_parser.parse_args(preprocess_lm_param)
- preprocess.main(input_args)
-
- eval_lm_param = [
- preprocess_dir,
- "--path",
- cur_language_model,
- "--output-word-probs",
- "--batch-size",
- str(batch_size),
- "--max-tokens",
- "1024",
- "--sample-break-mode",
- "eos",
- "--gen-subset",
- "train",
- ]
-
- eval_lm_parser = options.get_eval_lm_parser()
- input_args = options.parse_args_and_arch(eval_lm_parser, eval_lm_param)
-
- with open(lm_score_file, "w") as f:
- with redirect_stdout(f):
- eval_lm.main(input_args)
-
-
-def rescore_file_name(
- nbest_dir,
- prefix_len,
- scorer_name,
- lm_file=False,
- target_prefix_frac=None,
- source_prefix_frac=None,
- backwards=None,
-):
- if lm_file:
- score_file = nbest_dir + "/lm_score_translations_model_" + scorer_name + ".txt"
- else:
- score_file = nbest_dir + "/" + scorer_name + "_score_translations.txt"
- if backwards:
- if prefix_len is not None:
- score_file += "prefix_len" + str(prefix_len)
- elif target_prefix_frac is not None:
- score_file += "target_prefix_frac" + str(target_prefix_frac)
- else:
- if source_prefix_frac is not None:
- score_file += "source_prefix_frac" + str(source_prefix_frac)
- return score_file
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/data/collaters.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/data/collaters.py
deleted file mode 100644
index 6acfec876b87e5a00bc92083b1181301a2a18e3f..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/data/collaters.py
+++ /dev/null
@@ -1,131 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
- This module contains collection of classes which implement
- collate functionalities for various tasks.
-
- Collaters should know what data to expect for each sample
- and they should pack / collate them into batches
-"""
-
-
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-import numpy as np
-import torch
-from fairseq.data import data_utils as fairseq_data_utils
-
-
-class Seq2SeqCollater(object):
- """
- Implements collate function mainly for seq2seq tasks
- This expects each sample to contain feature (src_tokens) and
- targets.
- This collator is also used for aligned training task.
- """
-
- def __init__(
- self,
- feature_index=0,
- label_index=1,
- pad_index=1,
- eos_index=2,
- move_eos_to_beginning=True,
- ):
- self.feature_index = feature_index
- self.label_index = label_index
- self.pad_index = pad_index
- self.eos_index = eos_index
- self.move_eos_to_beginning = move_eos_to_beginning
-
- def _collate_frames(self, frames):
- """Convert a list of 2d frames into a padded 3d tensor
- Args:
- frames (list): list of 2d frames of size L[i]*f_dim. Where L[i] is
- length of i-th frame and f_dim is static dimension of features
- Returns:
- 3d tensor of size len(frames)*len_max*f_dim where len_max is max of L[i]
- """
- len_max = max(frame.size(0) for frame in frames)
- f_dim = frames[0].size(1)
- res = frames[0].new(len(frames), len_max, f_dim).fill_(0.0)
-
- for i, v in enumerate(frames):
- res[i, : v.size(0)] = v
-
- return res
-
- def collate(self, samples):
- """
-        Utility function to collate samples into a batch for speech recognition.
- """
- if len(samples) == 0:
- return {}
-
- # parse samples into torch tensors
- parsed_samples = []
- for s in samples:
- # skip invalid samples
- if s["data"][self.feature_index] is None:
- continue
- source = s["data"][self.feature_index]
- if isinstance(source, (np.ndarray, np.generic)):
- source = torch.from_numpy(source)
- target = s["data"][self.label_index]
- if isinstance(target, (np.ndarray, np.generic)):
- target = torch.from_numpy(target).long()
- elif isinstance(target, list):
- target = torch.LongTensor(target)
-
- parsed_sample = {"id": s["id"], "source": source, "target": target}
- parsed_samples.append(parsed_sample)
- samples = parsed_samples
-
- id = torch.LongTensor([s["id"] for s in samples])
- frames = self._collate_frames([s["source"] for s in samples])
- # sort samples by descending number of frames
- frames_lengths = torch.LongTensor([s["source"].size(0) for s in samples])
- frames_lengths, sort_order = frames_lengths.sort(descending=True)
- id = id.index_select(0, sort_order)
- frames = frames.index_select(0, sort_order)
-
- target = None
- target_lengths = None
- prev_output_tokens = None
- if samples[0].get("target", None) is not None:
- ntokens = sum(len(s["target"]) for s in samples)
- target = fairseq_data_utils.collate_tokens(
- [s["target"] for s in samples],
- self.pad_index,
- self.eos_index,
- left_pad=False,
- move_eos_to_beginning=False,
- )
- target = target.index_select(0, sort_order)
- target_lengths = torch.LongTensor(
- [s["target"].size(0) for s in samples]
- ).index_select(0, sort_order)
- prev_output_tokens = fairseq_data_utils.collate_tokens(
- [s["target"] for s in samples],
- self.pad_index,
- self.eos_index,
- left_pad=False,
- move_eos_to_beginning=self.move_eos_to_beginning,
- )
- prev_output_tokens = prev_output_tokens.index_select(0, sort_order)
- else:
- ntokens = sum(len(s["source"]) for s in samples)
-
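-        # assemble the batch in fairseq's standard format: frames become
-        # net_input["src_tokens"] and their lengths net_input["src_lengths"]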
- batch = {
- "id": id,
- "ntokens": ntokens,
- "net_input": {"src_tokens": frames, "src_lengths": frames_lengths},
- "target": target,
- "target_lengths": target_lengths,
- "nsentences": len(samples),
- }
- if prev_output_tokens is not None:
- batch["net_input"]["prev_output_tokens"] = prev_output_tokens
- return batch
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_iterators.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_iterators.py
deleted file mode 100644
index 7b3dd4848553357e5e8326ed3a31cf5d68ceea94..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_iterators.py
+++ /dev/null
@@ -1,137 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import unittest
-
-from fairseq.data import iterators
-
-
-class TestIterators(unittest.TestCase):
- def test_counting_iterator_index(self, ref=None, itr=None):
- # Test the indexing functionality of CountingIterator
- if ref is None:
- assert itr is None
- ref = list(range(10))
- itr = iterators.CountingIterator(ref)
- else:
- assert len(ref) == 10
- assert itr is not None
-
- self.assertTrue(itr.has_next())
- self.assertEqual(itr.n, 0)
- self.assertEqual(next(itr), ref[0])
- self.assertEqual(itr.n, 1)
- self.assertEqual(next(itr), ref[1])
- self.assertEqual(itr.n, 2)
- itr.skip(3)
- self.assertEqual(itr.n, 5)
- self.assertEqual(next(itr), ref[5])
- itr.skip(2)
- self.assertEqual(itr.n, 8)
- self.assertEqual(list(itr), [ref[8], ref[9]])
- self.assertFalse(itr.has_next())
-
- def test_counting_iterator_length_mismatch(self):
- ref = list(range(10))
- # When the underlying iterable is longer than the CountingIterator,
- # the remaining items in the iterable should be ignored
- itr = iterators.CountingIterator(ref, total=8)
- self.assertEqual(list(itr), ref[:8])
- # When the underlying iterable is shorter than the CountingIterator,
-        # an IndexError is raised once the iterable is exhausted
- itr = iterators.CountingIterator(ref, total=12)
- self.assertRaises(IndexError, list, itr)
-
- def test_counting_iterator_take(self):
- # Test the "take" method of CountingIterator
- ref = list(range(10))
- itr = iterators.CountingIterator(ref)
- itr.take(5)
- self.assertEqual(len(itr), len(list(iter(itr))))
- self.assertEqual(len(itr), 5)
-
- itr = iterators.CountingIterator(ref)
- itr.take(5)
- self.assertEqual(next(itr), ref[0])
- self.assertEqual(next(itr), ref[1])
- itr.skip(2)
- self.assertEqual(next(itr), ref[4])
- self.assertFalse(itr.has_next())
-
- def test_grouped_iterator(self):
- # test correctness
- x = list(range(10))
- itr = iterators.GroupedIterator(x, 1)
- self.assertEqual(list(itr), [[0], [1], [2], [3], [4], [5], [6], [7], [8], [9]])
- itr = iterators.GroupedIterator(x, 4)
- self.assertEqual(list(itr), [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]])
- itr = iterators.GroupedIterator(x, 5)
- self.assertEqual(list(itr), [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]])
-
-        # test that GroupedIterator also works correctly as a CountingIterator
- x = list(range(30))
- ref = list(iterators.GroupedIterator(x, 3))
- itr = iterators.GroupedIterator(x, 3)
- self.test_counting_iterator_index(ref, itr)
-
- def test_sharded_iterator(self):
- # test correctness
- x = list(range(10))
- itr = iterators.ShardedIterator(x, num_shards=1, shard_id=0)
- self.assertEqual(list(itr), x)
- itr = iterators.ShardedIterator(x, num_shards=2, shard_id=0)
- self.assertEqual(list(itr), [0, 2, 4, 6, 8])
- itr = iterators.ShardedIterator(x, num_shards=2, shard_id=1)
- self.assertEqual(list(itr), [1, 3, 5, 7, 9])
- itr = iterators.ShardedIterator(x, num_shards=3, shard_id=0)
- self.assertEqual(list(itr), [0, 3, 6, 9])
- itr = iterators.ShardedIterator(x, num_shards=3, shard_id=1)
- self.assertEqual(list(itr), [1, 4, 7, None])
- itr = iterators.ShardedIterator(x, num_shards=3, shard_id=2)
- self.assertEqual(list(itr), [2, 5, 8, None])
-
- # test CountingIterator functionality
- x = list(range(30))
- ref = list(iterators.ShardedIterator(x, num_shards=3, shard_id=0))
- itr = iterators.ShardedIterator(x, num_shards=3, shard_id=0)
- self.test_counting_iterator_index(ref, itr)
-
- def test_counting_iterator_buffered_iterator_take(self):
- ref = list(range(10))
- buffered_itr = iterators.BufferedIterator(2, ref)
- itr = iterators.CountingIterator(buffered_itr)
- itr.take(5)
- self.assertEqual(len(itr), len(list(iter(itr))))
- self.assertEqual(len(itr), 5)
-
- buffered_itr = iterators.BufferedIterator(2, ref)
- itr = iterators.CountingIterator(buffered_itr)
- itr.take(5)
- self.assertEqual(len(buffered_itr), 5)
- self.assertEqual(len(list(iter(buffered_itr))), 5)
-
- buffered_itr = iterators.BufferedIterator(2, ref)
- itr = iterators.CountingIterator(buffered_itr)
- itr.take(5)
- self.assertEqual(next(itr), ref[0])
- self.assertEqual(next(itr), ref[1])
- itr.skip(2)
- self.assertEqual(next(itr), ref[4])
- self.assertFalse(itr.has_next())
- self.assertRaises(StopIteration, next, buffered_itr)
-
- ref = list(range(4, 10))
- buffered_itr = iterators.BufferedIterator(2, ref)
- itr = iterators.CountingIterator(buffered_itr, start=4)
- itr.take(5)
- self.assertEqual(len(itr), 5)
- self.assertEqual(len(buffered_itr), 1)
- self.assertEqual(next(itr), ref[0])
- self.assertFalse(itr.has_next())
- self.assertRaises(StopIteration, next, buffered_itr)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/OIUGLK/bingo/postcss.config.js b/spaces/OIUGLK/bingo/postcss.config.js
deleted file mode 100644
index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000
--- a/spaces/OIUGLK/bingo/postcss.config.js
+++ /dev/null
@@ -1,6 +0,0 @@
-module.exports = {
- plugins: {
- tailwindcss: {},
- autoprefixer: {},
- },
-}
diff --git a/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/inpaint_zoom/zoom_in_app.py b/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/inpaint_zoom/zoom_in_app.py
deleted file mode 100644
index 425790870a5f6ed5c4db2de8f0a9affa371fb4be..0000000000000000000000000000000000000000
--- a/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/inpaint_zoom/zoom_in_app.py
+++ /dev/null
@@ -1,194 +0,0 @@
-import os
-
-import gradio as gr
-import numpy as np
-import torch
-from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
-from PIL import Image
-
-from video_diffusion.inpaint_zoom.utils.zoom_in_utils import dummy, image_grid, shrink_and_paste_on_blank, write_video
-
-if torch.cuda.is_available():
- device = torch.device("cuda")
- dtype = torch.float16
-else:
- device = torch.device("cpu")
-    dtype = None
-os.environ["CUDA_VISIBLE_DEVICES"] = "0"
-
-
-stable_paint_model_list = ["stabilityai/stable-diffusion-2-inpainting", "runwayml/stable-diffusion-inpainting"]
-
-stable_paint_prompt_list = [
- "children running in the forest , sunny, bright, by studio ghibli painting, superior quality, masterpiece, traditional Japanese colors, by Grzegorz Rutkowski, concept art",
- "A beautiful landscape of a mountain range with a lake in the foreground",
-]
-
-stable_paint_negative_prompt_list = [
- "lurry, bad art, blurred, text, watermark",
-]
-
-
-class StableDiffusionZoomIn:
- def __init__(self):
- self.pipe = None
-
- def load_model(self, model_id):
- if self.pipe is None:
- #self.pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=dtype, revision="fp16")
- self.pipe = DiffusionPipeline.from_pretrained(model_id)
-
- self.pipe.scheduler = DPMSolverMultistepScheduler.from_config(self.pipe.scheduler.config)
- self.pipe = self.pipe.to(device)
- self.pipe.safety_checker = dummy
- self.pipe.enable_attention_slicing()
- #self.pipe.enable_xformers_memory_efficient_attention()
- self.g_cpu = torch.Generator(device=device)
-
- return self.pipe
-
- def generate_video(
- self,
- model_id,
- prompt,
- negative_prompt,
- guidance_scale,
- num_inference_steps,
- ):
- pipe = self.load_model(model_id)
-
- num_init_images = 2
- seed = 42
- height = 512
- width = height
-
- current_image = Image.new(mode="RGBA", size=(height, width))
- mask_image = np.array(current_image)[:, :, 3]
- mask_image = Image.fromarray(255 - mask_image).convert("RGB")
- current_image = current_image.convert("RGB")
-
- init_images = pipe(
- prompt=[prompt] * num_init_images,
- negative_prompt=[negative_prompt] * num_init_images,
- image=current_image,
- guidance_scale=guidance_scale,
- height=height,
- width=width,
- generator=self.g_cpu.manual_seed(seed),
- mask_image=mask_image,
- num_inference_steps=num_inference_steps,
- )[0]
-
- image_grid(init_images, rows=1, cols=num_init_images)
-
- init_image_selected = 1 # @param
- if num_init_images == 1:
- init_image_selected = 0
- else:
- init_image_selected = init_image_selected - 1
-
- num_outpainting_steps = 20 # @param
- mask_width = 128 # @param
- num_interpol_frames = 30 # @param
-
- current_image = init_images[init_image_selected]
- all_frames = []
- all_frames.append(current_image)
-
- for i in range(num_outpainting_steps):
- print("Generating image: " + str(i + 1) + " / " + str(num_outpainting_steps))
-
- prev_image_fix = current_image
-
- prev_image = shrink_and_paste_on_blank(current_image, mask_width)
-
- current_image = prev_image
-
- # create mask (black image with white mask_width width edges)
- mask_image = np.array(current_image)[:, :, 3]
- mask_image = Image.fromarray(255 - mask_image).convert("RGB")
-
- # inpainting step
- current_image = current_image.convert("RGB")
- images = pipe(
- prompt=prompt,
- negative_prompt=negative_prompt,
- image=current_image,
- guidance_scale=guidance_scale,
- height=height,
- width=width,
- # this can make the whole thing deterministic but the output less exciting
- # generator = g_cuda.manual_seed(seed),
- mask_image=mask_image,
- num_inference_steps=num_inference_steps,
- )[0]
- current_image = images[0]
- current_image.paste(prev_image, mask=prev_image)
-
-            # interpolation steps between 2 inpainted images (= sequential zoom and crop)
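-            # the crop width follows a geometric schedule in j, so the
-            # perceived zoom rate stays roughly constant across frames
-            # (a reading of the formula below, not documented upstream)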
- for j in range(num_interpol_frames - 1):
- interpol_image = current_image
- interpol_width = round(
- (1 - (1 - 2 * mask_width / height) ** (1 - (j + 1) / num_interpol_frames)) * height / 2
- )
- interpol_image = interpol_image.crop(
- (interpol_width, interpol_width, width - interpol_width, height - interpol_width)
- )
-
- interpol_image = interpol_image.resize((height, width))
-
- # paste the higher resolution previous image in the middle to avoid drop in quality caused by zooming
- interpol_width2 = round((1 - (height - 2 * mask_width) / (height - 2 * interpol_width)) / 2 * height)
- prev_image_fix_crop = shrink_and_paste_on_blank(prev_image_fix, interpol_width2)
- interpol_image.paste(prev_image_fix_crop, mask=prev_image_fix_crop)
-
- all_frames.append(interpol_image)
-
- all_frames.append(current_image)
-
- video_file_name = "infinite_zoom_out"
- fps = 30
- save_path = video_file_name + ".mp4"
- write_video(save_path, all_frames, fps)
- return save_path
-
- def app():
- with gr.Blocks():
- with gr.Row():
- with gr.Column():
- text2image_in_model_path = gr.Dropdown(
- choices=stable_paint_model_list, value=stable_paint_model_list[0], label="Text-Image Model Id"
- )
-
- text2image_in_prompt = gr.Textbox(lines=2, value=stable_paint_prompt_list[0], label="Prompt")
-
- text2image_in_negative_prompt = gr.Textbox(
- lines=1, value=stable_paint_negative_prompt_list[0], label="Negative Prompt"
- )
-
- with gr.Row():
- with gr.Column():
- text2image_in_guidance_scale = gr.Slider(
- minimum=0.1, maximum=15, step=0.1, value=7.5, label="Guidance Scale"
- )
-
- text2image_in_num_inference_step = gr.Slider(
- minimum=1, maximum=100, step=1, value=50, label="Num Inference Step"
- )
-
- text2image_in_predict = gr.Button(value="Generator")
-
- with gr.Column():
- output_image = gr.Video(label="Output")
-
- text2image_in_predict.click(
- fn=StableDiffusionZoomIn().generate_video,
- inputs=[
- text2image_in_model_path,
- text2image_in_prompt,
- text2image_in_negative_prompt,
- text2image_in_guidance_scale,
- text2image_in_num_inference_step,
- ],
- outputs=output_image,
- )
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_export.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_export.py
deleted file mode 100644
index 74ac123a7aed6cd77d6d833446a831d9048745b2..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_export.py
+++ /dev/null
@@ -1,207 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import copy
-import io
-import logging
-import numpy as np
-from typing import List
-import onnx
-import torch
-from caffe2.proto import caffe2_pb2
-from caffe2.python import core
-from caffe2.python.onnx.backend import Caffe2Backend
-from tabulate import tabulate
-from termcolor import colored
-from torch.onnx import OperatorExportTypes
-
-from .shared import (
- ScopedWS,
- construct_init_net_from_params,
- fuse_alias_placeholder,
- fuse_copy_between_cpu_and_gpu,
- get_params_from_init_net,
- group_norm_replace_aten_with_caffe2,
- infer_device_type,
- remove_dead_end_ops,
- remove_reshape_for_fc,
- save_graph,
-)
-
-logger = logging.getLogger(__name__)
-
-
-def export_onnx_model(model, inputs):
- """
- Trace and export a model to onnx format.
-
- Args:
- model (nn.Module):
- inputs (tuple[args]): the model will be called by `model(*inputs)`
-
- Returns:
- an onnx model
- """
- assert isinstance(model, torch.nn.Module)
-
- # make sure all modules are in eval mode, onnx may change the training state
- # of the module if the states are not consistent
- def _check_eval(module):
- assert not module.training
-
- model.apply(_check_eval)
-
- # Export the model to ONNX
- with torch.no_grad():
- with io.BytesIO() as f:
- torch.onnx.export(
- model,
- inputs,
- f,
- operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK,
- # verbose=True, # NOTE: uncomment this for debugging
- # export_params=True,
- )
- onnx_model = onnx.load_from_string(f.getvalue())
-
- # Apply ONNX's Optimization
- all_passes = onnx.optimizer.get_available_passes()
- passes = ["fuse_bn_into_conv"]
- assert all(p in all_passes for p in passes)
- onnx_model = onnx.optimizer.optimize(onnx_model, passes)
- return onnx_model
-
-
-def _op_stats(net_def):
- type_count = {}
- for t in [op.type for op in net_def.op]:
- type_count[t] = type_count.get(t, 0) + 1
- type_count_list = sorted(type_count.items(), key=lambda kv: kv[0]) # alphabet
- type_count_list = sorted(type_count_list, key=lambda kv: -kv[1]) # count
- return "\n".join("{:>4}x {}".format(count, name) for name, count in type_count_list)
-
-
-def _assign_device_option(
- predict_net: caffe2_pb2.NetDef, init_net: caffe2_pb2.NetDef, tensor_inputs: List[torch.Tensor]
-):
- """
-    An ONNX-exported network has no concept of device; assign the necessary
-    device option to each op so the net is runnable on a GPU runtime.
- """
-
- def _get_device_type(torch_tensor):
- assert torch_tensor.device.type in ["cpu", "cuda"]
- assert torch_tensor.device.index == 0
- return torch_tensor.device.type
-
- def _assign_op_device_option(net_proto, net_ssa, blob_device_types):
- for op, ssa_i in zip(net_proto.op, net_ssa):
- if op.type in ["CopyCPUToGPU", "CopyGPUToCPU"]:
- op.device_option.CopyFrom(core.DeviceOption(caffe2_pb2.CUDA, 0))
- else:
- devices = [blob_device_types[b] for b in ssa_i[0] + ssa_i[1]]
- assert all(d == devices[0] for d in devices)
- if devices[0] == "cuda":
- op.device_option.CopyFrom(core.DeviceOption(caffe2_pb2.CUDA, 0))
-
- # update ops in predict_net
- predict_net_input_device_types = {
- (name, 0): _get_device_type(tensor)
- for name, tensor in zip(predict_net.external_input, tensor_inputs)
- }
- predict_net_device_types = infer_device_type(
- predict_net, known_status=predict_net_input_device_types, device_name_style="pytorch"
- )
- predict_net_ssa, _ = core.get_ssa(predict_net)
- _assign_op_device_option(predict_net, predict_net_ssa, predict_net_device_types)
-
- # update ops in init_net
- init_net_ssa, versions = core.get_ssa(init_net)
- init_net_output_device_types = {
- (name, versions[name]): predict_net_device_types[(name, 0)]
- for name in init_net.external_output
- }
- init_net_device_types = infer_device_type(
- init_net, known_status=init_net_output_device_types, device_name_style="pytorch"
- )
- _assign_op_device_option(init_net, init_net_ssa, init_net_device_types)
-
-
-def export_caffe2_detection_model(model: torch.nn.Module, tensor_inputs: List[torch.Tensor]):
- """
- Export a caffe2-compatible Detectron2 model to caffe2 format via ONNX.
-
-    Args:
- model: a caffe2-compatible version of detectron2 model, defined in caffe2_modeling.py
- tensor_inputs: a list of tensors that caffe2 model takes as input.
- """
- model = copy.deepcopy(model)
- assert isinstance(model, torch.nn.Module)
- assert hasattr(model, "encode_additional_info")
-
- # Export via ONNX
- logger.info(
- "Exporting a {} model via ONNX ...".format(type(model).__name__)
- + " Some warnings from ONNX are expected and are usually not to worry about."
- )
- onnx_model = export_onnx_model(model, (tensor_inputs,))
- # Convert ONNX model to Caffe2 protobuf
- init_net, predict_net = Caffe2Backend.onnx_graph_to_caffe2_net(onnx_model)
- ops_table = [[op.type, op.input, op.output] for op in predict_net.op]
- table = tabulate(ops_table, headers=["type", "input", "output"], tablefmt="pipe")
- logger.info(
- "ONNX export Done. Exported predict_net (before optimizations):\n" + colored(table, "cyan")
- )
-
- # Apply protobuf optimization
- fuse_alias_placeholder(predict_net, init_net)
- if any(t.device.type != "cpu" for t in tensor_inputs):
- fuse_copy_between_cpu_and_gpu(predict_net)
- remove_dead_end_ops(init_net)
- _assign_device_option(predict_net, init_net, tensor_inputs)
- params, device_options = get_params_from_init_net(init_net)
- predict_net, params = remove_reshape_for_fc(predict_net, params)
- init_net = construct_init_net_from_params(params, device_options)
- group_norm_replace_aten_with_caffe2(predict_net)
-
- # Record necessary information for running the pb model in Detectron2 system.
- model.encode_additional_info(predict_net, init_net)
-
- logger.info("Operators used in predict_net: \n{}".format(_op_stats(predict_net)))
- logger.info("Operators used in init_net: \n{}".format(_op_stats(init_net)))
-
- return predict_net, init_net
-
-
-def run_and_save_graph(predict_net, init_net, tensor_inputs, graph_save_path):
- """
- Run the caffe2 model on given inputs, recording the shape and draw the graph.
-
- predict_net/init_net: caffe2 model.
- tensor_inputs: a list of tensors that caffe2 model takes as input.
- graph_save_path: path for saving graph of exported model.
- """
-
- logger.info("Saving graph of ONNX exported model to {} ...".format(graph_save_path))
- save_graph(predict_net, graph_save_path, op_only=False)
-
- # Run the exported Caffe2 net
- logger.info("Running ONNX exported model ...")
- with ScopedWS("__ws_tmp__", True) as ws:
- ws.RunNetOnce(init_net)
- initialized_blobs = set(ws.Blobs())
- uninitialized = [inp for inp in predict_net.external_input if inp not in initialized_blobs]
- for name, blob in zip(uninitialized, tensor_inputs):
- ws.FeedBlob(name, blob)
-
- try:
- ws.RunNetOnce(predict_net)
- except RuntimeError as e:
- logger.warning("Encountered RuntimeError: \n{}".format(str(e)))
-
- ws_blobs = {b: ws.FetchBlob(b) for b in ws.Blobs()}
- blob_sizes = {b: ws_blobs[b].shape for b in ws_blobs if isinstance(ws_blobs[b], np.ndarray)}
-
- logger.info("Saving graph with blob shapes to {} ...".format(graph_save_path))
- save_graph(predict_net, graph_save_path, op_only=False, blob_sizes=blob_sizes)
-
- return ws_blobs
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/layers/test_losses.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/layers/test_losses.py
deleted file mode 100644
index d74920246cbd4a188b3c81cf0c78e982af6da1ac..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/layers/test_losses.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import numpy as np
-import unittest
-import torch
-
-from detectron2.layers import ciou_loss, diou_loss
-
-
-class TestLosses(unittest.TestCase):
- def test_diou_loss(self):
- """
- loss = 1 - iou + d/c
- where,
- d = (distance between centers of the 2 boxes)^2
- c = (diagonal length of the smallest enclosing box covering the 2 boxes)^2
- """
- # Identical boxes should have loss of 0
- box = torch.tensor([-1, -1, 1, 1], dtype=torch.float32)
- loss = diou_loss(box, box)
- self.assertTrue(np.allclose(loss, [0.0]))
-
- # Half size box inside other box
- # iou = 0.5, d = 0.25, c = 8
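-        # => loss = 1 - 0.5 + 0.25 / 8 = 0.53125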
- box2 = torch.tensor([0, -1, 1, 1], dtype=torch.float32)
- loss = diou_loss(box, box2)
- self.assertTrue(np.allclose(loss, [0.53125]))
-
- # Two diagonally adjacent boxes
- # iou = 0, d = 2, c = 8
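-        # => loss = 1 - 0 + 2 / 8 = 1.25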
- box3 = torch.tensor([0, 0, 1, 1], dtype=torch.float32)
- box4 = torch.tensor([1, 1, 2, 2], dtype=torch.float32)
- loss = diou_loss(box3, box4)
- self.assertTrue(np.allclose(loss, [1.25]))
-
- # Test batched loss and reductions
- box1s = torch.stack([box, box3], dim=0)
- box2s = torch.stack([box2, box4], dim=0)
-
- loss = diou_loss(box1s, box2s, reduction="sum")
- self.assertTrue(np.allclose(loss, [1.78125]))
-
- loss = diou_loss(box1s, box2s, reduction="mean")
- self.assertTrue(np.allclose(loss, [0.890625]))
-
- def test_ciou_loss(self):
- """
- loss = 1 - iou + d/c + alpha*v
- where,
- d = (distance between centers of the 2 boxes)^2
- c = (diagonal length of the smallest enclosing box covering the 2 boxes)^2
- v = (4/pi^2) * (arctan(box1_w/box1_h) - arctan(box2_w/box2_h))^2
- alpha = v/(1 - iou + v)
- """
- # Identical boxes should have loss of 0
- box = torch.tensor([-1, -1, 1, 1], dtype=torch.float32)
- loss = ciou_loss(box, box)
- self.assertTrue(np.allclose(loss, [0.0]))
-
- # Half size box inside other box
- # iou = 0.5, d = 0.25, c = 8
- # v = (4/pi^2) * (arctan(1) - arctan(0.5))^2 = 0.042
- # alpha = 0.0775
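-        # => loss ~= 0.53125 + 0.0775 * 0.042 ~= 0.5345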
- box2 = torch.tensor([0, -1, 1, 1], dtype=torch.float32)
- loss = ciou_loss(box, box2)
- self.assertTrue(np.allclose(loss, [0.5345]))
-
- # Two diagonally adjacent boxes
- # iou = 0, d = 2, c = 8, v = 0, alpha = 0
- box3 = torch.tensor([0, 0, 1, 1], dtype=torch.float32)
- box4 = torch.tensor([1, 1, 2, 2], dtype=torch.float32)
- loss = ciou_loss(box3, box4)
- self.assertTrue(np.allclose(loss, [1.25]))
-
- # Test batched loss and reductions
- box1s = torch.stack([box, box3], dim=0)
- box2s = torch.stack([box2, box4], dim=0)
-
- loss = ciou_loss(box1s, box2s, reduction="sum")
- self.assertTrue(np.allclose(loss, [1.7845]))
-
- loss = ciou_loss(box1s, box2s, reduction="mean")
- self.assertTrue(np.allclose(loss, [0.89225]))
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/masks/__init__.py b/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/masks/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/OpenMotionLab/MotionGPT/pyrender/pyrender/camera.py b/spaces/OpenMotionLab/MotionGPT/pyrender/pyrender/camera.py
deleted file mode 100644
index e019358039033c3a372c990ebad3151258c3651d..0000000000000000000000000000000000000000
--- a/spaces/OpenMotionLab/MotionGPT/pyrender/pyrender/camera.py
+++ /dev/null
@@ -1,437 +0,0 @@
-"""Virtual cameras compliant with the glTF 2.0 specification as described at
-https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#reference-camera
-
-Author: Matthew Matl
-"""
-import abc
-import numpy as np
-import six
-import sys
-
-from .constants import DEFAULT_Z_NEAR, DEFAULT_Z_FAR
-
-
-@six.add_metaclass(abc.ABCMeta)
-class Camera(object):
- """Abstract base class for all cameras.
-
- Note
- ----
- Camera poses are specified in the OpenGL format,
- where the z axis points away from the view direction and the
- x and y axes point to the right and up in the image plane, respectively.
-
- Parameters
- ----------
- znear : float
- The floating-point distance to the near clipping plane.
- zfar : float
- The floating-point distance to the far clipping plane.
- ``zfar`` must be greater than ``znear``.
- name : str, optional
- The user-defined name of this object.
- """
-
- def __init__(self,
- znear=DEFAULT_Z_NEAR,
- zfar=DEFAULT_Z_FAR,
- name=None):
- self.name = name
- self.znear = znear
- self.zfar = zfar
-
- @property
- def name(self):
- """str : The user-defined name of this object.
- """
- return self._name
-
- @name.setter
- def name(self, value):
- if value is not None:
- value = str(value)
- self._name = value
-
- @property
- def znear(self):
- """float : The distance to the near clipping plane.
- """
- return self._znear
-
- @znear.setter
- def znear(self, value):
- value = float(value)
- if value < 0:
- raise ValueError('z-near must be >= 0.0')
- self._znear = value
-
- @property
- def zfar(self):
- """float : The distance to the far clipping plane.
- """
- return self._zfar
-
- @zfar.setter
- def zfar(self, value):
- value = float(value)
- if value <= 0 or value <= self.znear:
- raise ValueError('zfar must be >0 and >znear')
- self._zfar = value
-
- @abc.abstractmethod
- def get_projection_matrix(self, width=None, height=None):
- """Return the OpenGL projection matrix for this camera.
-
- Parameters
- ----------
- width : int
- Width of the current viewport, in pixels.
- height : int
- Height of the current viewport, in pixels.
- """
- pass
-
-
-class PerspectiveCamera(Camera):
-
- """A perspective camera for perspective projection.
-
- Parameters
- ----------
- yfov : float
- The floating-point vertical field of view in radians.
- znear : float
- The floating-point distance to the near clipping plane.
- If not specified, defaults to 0.05.
- zfar : float, optional
- The floating-point distance to the far clipping plane.
- ``zfar`` must be greater than ``znear``.
- If None, the camera uses an infinite projection matrix.
- aspectRatio : float, optional
- The floating-point aspect ratio of the field of view.
- If not specified, the camera uses the viewport's aspect ratio.
- name : str, optional
- The user-defined name of this object.
- """
-
- def __init__(self,
- yfov,
- znear=DEFAULT_Z_NEAR,
- zfar=None,
- aspectRatio=None,
- name=None):
- super(PerspectiveCamera, self).__init__(
- znear=znear,
- zfar=zfar,
- name=name,
- )
-
- self.yfov = yfov
- self.aspectRatio = aspectRatio
-
- @property
- def yfov(self):
- """float : The vertical field of view in radians.
- """
- return self._yfov
-
- @yfov.setter
- def yfov(self, value):
- value = float(value)
- if value <= 0.0:
- raise ValueError('Field of view must be positive')
- self._yfov = value
-
- @property
- def zfar(self):
- """float : The distance to the far clipping plane.
- """
- return self._zfar
-
- @zfar.setter
- def zfar(self, value):
- if value is not None:
- value = float(value)
- if value <= 0 or value <= self.znear:
- raise ValueError('zfar must be >0 and >znear')
- self._zfar = value
-
- @property
- def aspectRatio(self):
- """float : The ratio of the width to the height of the field of view.
- """
- return self._aspectRatio
-
- @aspectRatio.setter
- def aspectRatio(self, value):
- if value is not None:
- value = float(value)
- if value <= 0.0:
- raise ValueError('Aspect ratio must be positive')
- self._aspectRatio = value
-
- def get_projection_matrix(self, width=None, height=None):
- """Return the OpenGL projection matrix for this camera.
-
- Parameters
- ----------
- width : int
- Width of the current viewport, in pixels.
- height : int
- Height of the current viewport, in pixels.
- """
- aspect_ratio = self.aspectRatio
- if aspect_ratio is None:
- if width is None or height is None:
- raise ValueError('Aspect ratio of camera must be defined')
- aspect_ratio = float(width) / float(height)
-
- a = aspect_ratio
- t = np.tan(self.yfov / 2.0)
- n = self.znear
- f = self.zfar
-
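-        # standard OpenGL perspective projection matrix; when zfar is None it
-        # takes the infinite far-plane form (the limits of (f + n) / (n - f)
-        # and 2fn / (n - f) as f -> inf are -1 and -2n, as below)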
- P = np.zeros((4,4))
- P[0][0] = 1.0 / (a * t)
- P[1][1] = 1.0 / t
- P[3][2] = -1.0
-
- if f is None:
- P[2][2] = -1.0
- P[2][3] = -2.0 * n
- else:
- P[2][2] = (f + n) / (n - f)
- P[2][3] = (2 * f * n) / (n - f)
-
- return P
-
-
-class OrthographicCamera(Camera):
- """An orthographic camera for orthographic projection.
-
- Parameters
- ----------
- xmag : float
- The floating-point horizontal magnification of the view.
- ymag : float
- The floating-point vertical magnification of the view.
- znear : float
- The floating-point distance to the near clipping plane.
- If not specified, defaults to 0.05.
- zfar : float
- The floating-point distance to the far clipping plane.
- ``zfar`` must be greater than ``znear``.
- If not specified, defaults to 100.0.
- name : str, optional
- The user-defined name of this object.
- """
-
- def __init__(self,
- xmag,
- ymag,
- znear=DEFAULT_Z_NEAR,
- zfar=DEFAULT_Z_FAR,
- name=None):
- super(OrthographicCamera, self).__init__(
- znear=znear,
- zfar=zfar,
- name=name,
- )
-
- self.xmag = xmag
- self.ymag = ymag
-
- @property
- def xmag(self):
- """float : The horizontal magnification of the view.
- """
- return self._xmag
-
- @xmag.setter
- def xmag(self, value):
- value = float(value)
- if value <= 0.0:
- raise ValueError('X magnification must be positive')
- self._xmag = value
-
- @property
- def ymag(self):
- """float : The vertical magnification of the view.
- """
- return self._ymag
-
- @ymag.setter
- def ymag(self, value):
- value = float(value)
- if value <= 0.0:
- raise ValueError('Y magnification must be positive')
- self._ymag = value
-
- @property
- def znear(self):
- """float : The distance to the near clipping plane.
- """
- return self._znear
-
- @znear.setter
- def znear(self, value):
- value = float(value)
- if value <= 0:
- raise ValueError('z-near must be > 0.0')
- self._znear = value
-
- def get_projection_matrix(self, width=None, height=None):
- """Return the OpenGL projection matrix for this camera.
-
- Parameters
- ----------
- width : int
- Width of the current viewport, in pixels.
- Unused in this function.
- height : int
- Height of the current viewport, in pixels.
- Unused in this function.
- """
- xmag = self.xmag
- ymag = self.ymag
-
- # If screen width/height defined, rescale xmag
- if width is not None and height is not None:
- xmag = width / height * ymag
-
- n = self.znear
- f = self.zfar
- P = np.zeros((4,4))
- P[0][0] = 1.0 / xmag
- P[1][1] = 1.0 / ymag
- P[2][2] = 2.0 / (n - f)
- P[2][3] = (f + n) / (n - f)
- P[3][3] = 1.0
- return P
-
-
-class IntrinsicsCamera(Camera):
- """A perspective camera with custom intrinsics.
-
- Parameters
- ----------
- fx : float
- X-axis focal length in pixels.
- fy : float
- Y-axis focal length in pixels.
- cx : float
- X-axis optical center in pixels.
- cy : float
- Y-axis optical center in pixels.
- znear : float
- The floating-point distance to the near clipping plane.
- If not specified, defaults to 0.05.
- zfar : float
- The floating-point distance to the far clipping plane.
- ``zfar`` must be greater than ``znear``.
- If not specified, defaults to 100.0.
- name : str, optional
- The user-defined name of this object.
- """
-
- def __init__(self,
- fx,
- fy,
- cx,
- cy,
- znear=DEFAULT_Z_NEAR,
- zfar=DEFAULT_Z_FAR,
- name=None):
- super(IntrinsicsCamera, self).__init__(
- znear=znear,
- zfar=zfar,
- name=name,
- )
-
- self.fx = fx
- self.fy = fy
- self.cx = cx
- self.cy = cy
-
- @property
- def fx(self):
- """float : X-axis focal length in meters.
- """
- return self._fx
-
- @fx.setter
- def fx(self, value):
- self._fx = float(value)
-
- @property
- def fy(self):
- """float : Y-axis focal length in meters.
- """
- return self._fy
-
- @fy.setter
- def fy(self, value):
- self._fy = float(value)
-
- @property
- def cx(self):
- """float : X-axis optical center in pixels.
- """
- return self._cx
-
- @cx.setter
- def cx(self, value):
- self._cx = float(value)
-
- @property
- def cy(self):
- """float : Y-axis optical center in pixels.
- """
- return self._cy
-
- @cy.setter
- def cy(self, value):
- self._cy = float(value)
-
- def get_projection_matrix(self, width, height):
- """Return the OpenGL projection matrix for this camera.
-
- Parameters
- ----------
- width : int
- Width of the current viewport, in pixels.
- height : int
- Height of the current viewport, in pixels.
- """
- width = float(width)
- height = float(height)
-
- cx, cy = self.cx, self.cy
- fx, fy = self.fx, self.fy
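-        # on macOS the intrinsics are doubled below, presumably to compensate
-        # for the 2x framebuffer scaling of retina displays (an assumption;
-        # the original code leaves this unexplained)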
- if sys.platform == 'darwin':
- cx = self.cx * 2.0
- cy = self.cy * 2.0
- fx = self.fx * 2.0
- fy = self.fy * 2.0
-
- P = np.zeros((4,4))
- P[0][0] = 2.0 * fx / width
- P[1][1] = 2.0 * fy / height
- P[0][2] = 1.0 - 2.0 * cx / width
- P[1][2] = 2.0 * cy / height - 1.0
- P[3][2] = -1.0
-
- n = self.znear
- f = self.zfar
- if f is None:
- P[2][2] = -1.0
- P[2][3] = -2.0 * n
- else:
- P[2][2] = (f + n) / (n - f)
- P[2][3] = (2 * f * n) / (n - f)
-
- return P
-
-
-__all__ = ['Camera', 'PerspectiveCamera', 'OrthographicCamera',
- 'IntrinsicsCamera']
diff --git a/spaces/OrangeBusiness/OrangeBranding/README.md b/spaces/OrangeBusiness/OrangeBranding/README.md
deleted file mode 100644
index fc0695f2962a8c1bd3db66ec333fb6bc96cdb3c3..0000000000000000000000000000000000000000
--- a/spaces/OrangeBusiness/OrangeBranding/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
----
-tags: [gradio-theme]
-title: OrangeBranding
-colorFrom: orange
-colorTo: purple
-sdk: gradio
-sdk_version: 3.47.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-# OrangeBranding
-## Description
-Add a description of this theme here!
-## Contributions
-Thanks to [@OrangeBusiness](https://huggingface.co/OrangeBusiness) for adding this gradio theme!
diff --git a/spaces/PHZane/emrwa/app.py b/spaces/PHZane/emrwa/app.py
deleted file mode 100644
index 59b339e60c0ae7c1e12f2d330d3686a689689a12..0000000000000000000000000000000000000000
--- a/spaces/PHZane/emrwa/app.py
+++ /dev/null
@@ -1,128 +0,0 @@
-# import time
-
-# import gradio as gr
-# # from generate import main,generate
-# show = open("show.txt",'r',encoding='utf-8')
-# a_show = str(show.read())
-# e_show = []
-# e_show.append(a_show)
-# # print(e_show)
-
-# trainingModels = {
-# 'ssd-Asthma': '入院初诊:哮喘',
-# 'ssd-COPD': '入院初诊:慢性阻塞性肺病',
-# 'ssd-Diabetes': '入院初诊:糖尿病',
-# 'ssd-Gastritis': '入院初诊:胃炎',
-# 'ssd-Gout': '入院初诊:痛风',
-# 'ssd-Heart': '入院初诊:心律失常',
-# 'ssd-HTN': '入院初诊:高血压',
-# 'ssd-Polyps': '入院初诊:胃息肉',
-# }
-# trainingModels2 = {
-# 'mrd-DiaHeart': '入院初诊:糖尿病 入院初诊:心律失常',
-# 'mrd-DiaHtn': '入院初诊:糖尿病 入院初诊:高血压',
-# 'mrd-HtnHeart': '入院初诊:高血压 入院初诊:心律失常',
-# 'mrd-DiaHtnHeart': '入院初诊:糖尿病 入院初诊:高血压 入院初诊:心律失常',
-# 'mrd-GastritisPolyps': '入院初诊:胃炎 入院初诊:胃息肉',
-# }
-
-# trainingModels3 = {
-# 'mud-CopdDiabetes': '入院初诊:慢性阻塞性肺病 入院初诊:糖尿病',
-# 'mud-CopdGastritis': '入院初诊:慢性阻塞性肺病 入院初诊:胃炎',
-# 'mud-CopdPolyps': '入院初诊:慢性阻塞性肺病 入院初诊:胃息肉',
-# 'mud-GastritisHtn': '入院初诊:胃炎 入院初诊:高血压',
-# 'mud-HeartPolyps': '入院初诊:心律失常 入院初诊:胃息肉',
-# }
-# models = []
-# # models2 = []
-# # models3 = []
-# for model, prompt in trainingModels.items():
-# models.append(model)
-# for model, prompt in trainingModels2.items():
-# models.append(model)
-# for model, prompt in trainingModels3.items():
-# models.append(model)
-# def out1 (a):
-# import random
-# random.randint(1,3)
-# s = str(random.randint(1,3))
-# print(s)
-# time.sleep(3)
-# shengcheng = open("1/"+a+"/"+s+".txt", 'r', encoding='utf-8')
-# out_show = str(shengcheng.read())
-# print("正在生成",a)
-# return out_show
-
-# def out():
-# print("正在运行")
-
-
-
-# a = gr.inputs.Radio(choices=models, type="value", default=None, label="Please select the case to be generated", optional=False)
-# # b = gr.inputs.Radio(choices=models2, type="value", default=None, label="Please select the case to be generated", optional=False)
-# # c = gr.inputs.Radio(choices=models3, type="value", default=None, label="Please select the case to be generated", optional=False)
-# # if a!=None:
-# interface = gr.Interface(fn=out1,inputs=a,outputs="text")
-# # elif b!=None:
-# # interface = gr.Interface(fn=out1,inputs=b,outputs="text")
-# # else:
-# # interface = gr.Interface(fn=out1,inputs=c,outputs="text")
-# interface.launch()
-
-
-# out()
-
-trainingModels = {
- 'ssd-Asthma': '入院初诊:哮喘',
- 'ssd-COPD': '入院初诊:慢性阻塞性肺病',
- 'ssd-Diabetes': '入院初诊:糖尿病',
- 'ssd-Gastritis': '入院初诊:胃炎',
- 'ssd-Gout': '入院初诊:痛风',
- 'ssd-Heart': '入院初诊:心律失常',
- 'ssd-HTN': '入院初诊:高血压',
- 'ssd-Polyps': '入院初诊:胃息肉',
-
-}
-trainingModels2 = {
- 'mrd-DiaHeart': '入院初诊:糖尿病 入院初诊:心律失常',
- 'mrd-DiaHtn': '入院初诊:糖尿病 入院初诊:高血压',
- 'mrd-HtnHeart': '入院初诊:高血压 入院初诊:心律失常',
- 'mrd-DiaHtnHeart': '入院初诊:糖尿病 入院初诊:高血压 入院初诊:心律失常',
- 'mrd-GastritisPolyps': '入院初诊:胃炎 入院初诊:胃息肉',
-}
-
-trainingModels3 = {
- 'mud-CopdDiabetes': '入院初诊:慢性阻塞性肺病 入院初诊:糖尿病',
- 'mud-CopdGastritis': '入院初诊:慢性阻塞性肺病 入院初诊:胃炎',
- 'mud-CopdPolyps': '入院初诊:慢性阻塞性肺病 入院初诊:胃息肉',
- 'mud-GastritisHtn': '入院初诊:胃炎 入院初诊:高血压',
- 'mud-HeartPolyps': '入院初诊:心律失常 入院初诊:胃息肉',
-}
-models = []
-
-for model, prompt in trainingModels.items():
- models.append(model)
-for model, prompt in trainingModels2.items():
- models.append(model)
-for model, prompt in trainingModels3.items():
- models.append(model)
-import gradio as gr
-from generate1 import generate, main
-
-
-
-a = gr.inputs.Radio(choices=models, type="value", default=None, label="Please select the case to be generated", optional=False)
-
-interface = gr.Interface(fn=main,inputs=a,outputs="text",allow_flagging="manual")
-interface.launch()
diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/dsd/op/fused_act.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/dsd/op/fused_act.py
deleted file mode 100644
index d5642f912ee7b488981dba83fba4876b3a27a954..0000000000000000000000000000000000000000
--- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/dsd/op/fused_act.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import os
-
-import torch
-from torch import nn
-from torch.autograd import Function
-from torch.nn import functional as F
-from torch.utils.cpp_extension import load
-
-
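-# JIT-compile the fused bias+activation CUDA extension on first import; this
-# needs a working CUDA toolchain and can take a while on the first run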
-module_path = os.path.dirname(__file__)
-fused = load(
- "fused",
- sources=[
- os.path.join(module_path, "fused_bias_act.cpp"),
- os.path.join(module_path, "fused_bias_act_kernel.cu"),
- ],
-)
-
-
-class FusedLeakyReLUFunctionBackward(Function):
- @staticmethod
- def forward(ctx, grad_output, out, bias, negative_slope, scale):
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- empty = grad_output.new_empty(0)
-
- grad_input = fused.fused_bias_act(grad_output, empty, out, 3, 1, negative_slope, scale)
-
- dim = [0]
-
- if grad_input.ndim > 2:
- dim += list(range(2, grad_input.ndim))
-
- if bias:
- grad_bias = grad_input.sum(dim).detach()
-
- else:
- grad_bias = None
-
- return grad_input, grad_bias
-
- @staticmethod
- def backward(ctx, gradgrad_input, gradgrad_bias):
- (out,) = ctx.saved_tensors
- gradgrad_out = fused.fused_bias_act(gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale)
-
- return gradgrad_out, None, None, None, None
-
-
-class FusedLeakyReLUFunction(Function):
- @staticmethod
- def forward(ctx, input, bias, negative_slope, scale):
- empty = input.new_empty(0)
-
-        # record whether a real bias was supplied *before* substituting the
-        # empty placeholder; checking after the substitution would always
-        # report True and make backward() return a spurious grad_bias
-        ctx.bias = bias is not None
-
-        if bias is None:
-            bias = empty
-
- out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale)
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- (out,) = ctx.saved_tensors
-
- grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(
- grad_output, out, ctx.bias, ctx.negative_slope, ctx.scale
- )
-
- return grad_input, grad_bias, None, None
-
-
-class FusedLeakyReLU(nn.Module):
- def __init__(self, channel, bias=True, negative_slope=0.2, scale=2 ** 0.5):
- super().__init__()
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(channel))
-
- else:
- self.bias = None
-
- self.negative_slope = negative_slope
- self.scale = scale
-
- def forward(self, input):
- return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale)
-
-
-def fused_leaky_relu(input, bias=None, negative_slope=0.2, scale=2 ** 0.5):
- if input.device.type == "cpu":
-        # CPU fallback: use PyTorch's built-in leaky_relu, honoring the
-        # negative_slope argument rather than a hardcoded 0.2
-        if bias is not None:
-            rest_dim = [1] * (input.ndim - bias.ndim - 1)
-            return F.leaky_relu(input + bias.view(1, bias.shape[0], *rest_dim), negative_slope=negative_slope) * scale
-
-        else:
-            return F.leaky_relu(input, negative_slope=negative_slope) * scale
-
- else:
- return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale)
diff --git a/spaces/Pippoz/Hugging_Space/app.py b/spaces/Pippoz/Hugging_Space/app.py
deleted file mode 100644
index 023742cb3dc0a854472b3ea0a3224fd1400fdebc..0000000000000000000000000000000000000000
--- a/spaces/Pippoz/Hugging_Space/app.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import streamlit as st
-import time
-from transformers import pipeline
-import torch
-
-st.markdown('## Text-generation OPT from Facebook')
-
-@st.cache(allow_output_mutation=True, suppress_st_warning=True, show_spinner=False)
-def get_model(model_name):
-    # take the model name as an argument so st.cache keys the cached pipeline
-    # on it; reading the global would keep serving a stale pipeline after the
-    # user switches models
-    return pipeline('text-generation', model=model_name, do_sample=True)
-
-col1, col2 = st.columns([2,1])
-
-with st.sidebar:
- st.markdown('## Model Parameters')
-
- max_length = st.slider('Max text length', 0, 150, 80)
-
- num_beams = st.slider('N° tree beams search', 2, 15, 5)
-
-    # selectbox returns the selected string, so convert it to a real bool
-    # (a dict is not a valid `key` argument, so it is dropped)
-    early_stopping = st.selectbox(
-        'Early stopping text generation',
-        ('True', 'False'), index=0) == 'True'
-
- no_ngram_repeat = st.slider('Max repetition limit', 1, 5, 2)
-
-with col1:
- prompt= st.text_area('Your prompt here',
- '''Who is Elon Musk?''')
-
-with col2:
- select_model = st.radio(
- "Select the model to use:",
- ('OPT-125m', 'OPT-350m', 'OPT-1.3b'), index = 1)
-
- if select_model == 'OPT-1.3b':
- model = 'facebook/opt-1.3b'
- elif select_model == 'OPT-350m':
- model = 'facebook/opt-350m'
- elif select_model == 'OPT-125m':
- model = 'facebook/opt-125m'
-
- with st.spinner('Loading Model... (This may take a while)'):
-        generator = get_model(model)
- st.success('Model loaded correctly!')
-
-gen = st.info('Generating text...')
-answer = generator(prompt,
- max_length=max_length, no_repeat_ngram_size=no_ngram_repeat,
- early_stopping=early_stopping, num_beams=num_beams)
-gen.empty()
-
-lst = answer[0]['generated_text']
-
-t = st.empty()
-for i in range(len(lst)):
-    # include character i so the final character is eventually shown
-    t.markdown("#### %s" % lst[:i + 1])
-    time.sleep(0.04)
\ No newline at end of file
diff --git a/spaces/Pranjal12345/Text_to_Speech/tortoise/do_tts.py b/spaces/Pranjal12345/Text_to_Speech/tortoise/do_tts.py
deleted file mode 100644
index 5554d027c008e12f210d8204a406517077f5191c..0000000000000000000000000000000000000000
--- a/spaces/Pranjal12345/Text_to_Speech/tortoise/do_tts.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import argparse
-import os
-
-import torch
-import torchaudio
-
-from api import TextToSpeech, MODELS_DIR
-from utils.audio import load_voices
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--text', type=str, help='Text to speak.', default="The expressiveness of autoregressive transformers is literally nuts! I absolutely adore them.")
- parser.add_argument('--voice', type=str, help='Selects the voice to use for generation. See options in voices/ directory (and add your own!) '
- 'Use the & character to join two voices together. Use a comma to perform inference on multiple voices.', default='random')
- parser.add_argument('--preset', type=str, help='Which voice preset to use.', default='ultra_fast')
-    def _str2bool(v):
-        # argparse's type=bool treats any non-empty string as True, so parse explicitly
-        return str(v).lower() in ('true', '1', 'yes')
-    parser.add_argument('--use_deepspeed', type=_str2bool, help='Use deepspeed for an inference speed gain of ~2x.', default=True)
-    parser.add_argument('--kv_cache', type=_str2bool, help='Disabling this makes generation take considerably longer.', default=True)
-    parser.add_argument('--half', type=_str2bool, help='float16 (half) precision inference; faster and uses less VRAM and RAM.', default=True)
- parser.add_argument('--output_path', type=str, help='Where to store outputs.', default='results/')
-    parser.add_argument('--model_dir', type=str, help='Where to find pretrained model checkpoints. Tortoise automatically downloads these to .models, so this '
-                                                      'should only be specified if you have custom checkpoints.', default=MODELS_DIR)
- parser.add_argument('--candidates', type=int, help='How many output candidates to produce per-voice.', default=3)
- parser.add_argument('--seed', type=int, help='Random seed which can be used to reproduce results.', default=None)
-    parser.add_argument('--produce_debug_state', type=_str2bool, help='Whether or not to produce debug_state.pth, which can aid in reproducing problems. Defaults to true.', default=True)
-    parser.add_argument('--cvvp_amount', type=float, help='How much the CVVP model should influence the output. '
-                                                          'Increasing this can in some cases reduce the likelihood of multiple speakers. Defaults to 0 (disabled).', default=.0)
- args = parser.parse_args()
- if torch.backends.mps.is_available():
- args.use_deepspeed = False
- os.makedirs(args.output_path, exist_ok=True)
- tts = TextToSpeech(models_dir=args.model_dir, use_deepspeed=args.use_deepspeed, kv_cache=args.kv_cache, half=args.half)
-
- selected_voices = args.voice.split(',')
- for k, selected_voice in enumerate(selected_voices):
- if '&' in selected_voice:
- voice_sel = selected_voice.split('&')
- else:
- voice_sel = [selected_voice]
- voice_samples, conditioning_latents = load_voices(voice_sel)
-
- gen, dbg_state = tts.tts_with_preset(args.text, k=args.candidates, voice_samples=voice_samples, conditioning_latents=conditioning_latents,
- preset=args.preset, use_deterministic_seed=args.seed, return_deterministic_state=True, cvvp_amount=args.cvvp_amount)
- if isinstance(gen, list):
- for j, g in enumerate(gen):
- torchaudio.save(os.path.join(args.output_path, f'{selected_voice}_{k}_{j}.wav'), g.squeeze(0).cpu(), 24000)
- else:
- torchaudio.save(os.path.join(args.output_path, f'{selected_voice}_{k}.wav'), gen.squeeze(0).cpu(), 24000)
-
- if args.produce_debug_state:
- os.makedirs('debug_states', exist_ok=True)
- torch.save(dbg_state, f'debug_states/do_tts_debug_{selected_voice}.pth')
-
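-# Example invocation (a sketch, run from the tortoise/ directory; available
-# voices depend on the contents of voices/):
-#
-#   python do_tts.py --text "Hello world." --voice random --preset ultra_fast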
diff --git a/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/i18n.py b/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/i18n.py
deleted file mode 100644
index 8e75d2bc26ff86ab1716b8d7f239ad9f5cc1e32d..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/i18n.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import locale
-import json
-import os
-
-
-def load_language_list(language):
- with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f:
- language_list = json.load(f)
- return language_list
-
-
-class I18nAuto:
-    def __init__(self, language=None):
-        if language in ["Auto", None]:
-            language = "es_ES"
-        if not os.path.exists(f"./i18n/{language}.json"):
-            # Fall back to Spanish when the requested locale file is missing.
-            language = "es_ES"
-        self.language = language
- # print("Use Language:", language)
- self.language_map = load_language_list(language)
-
- def __call__(self, key):
- return self.language_map.get(key, key)
-
- def print(self):
- # print("Use Language:", self.language)
- print("")
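-
-
-# Typical usage (a sketch): keys simply fall back to themselves when no
-# translation is present in the loaded JSON file.
-#
-#   i18n = I18nAuto()
-#   print(i18n("Some UI label"))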
diff --git a/spaces/Ranvelx/Ai2/Dockerfile b/spaces/Ranvelx/Ai2/Dockerfile
deleted file mode 100644
index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000
--- a/spaces/Ranvelx/Ai2/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM node:18-bullseye-slim
-
-RUN apt-get update && \
-    apt-get install -y git
-
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
-WORKDIR /app
-
-RUN npm install
-
-COPY Dockerfile greeting.md* .env* ./
-
-RUN npm run build
-
-EXPOSE 7860
-
-ENV NODE_ENV=production
-
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/__pip-runner__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/__pip-runner__.py
deleted file mode 100644
index 49a148a097e9cc06c165571e0bffaf7cae17dc5b..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/__pip-runner__.py
+++ /dev/null
@@ -1,50 +0,0 @@
-"""Execute exactly this copy of pip, within a different environment.
-
-This file is named as it is, to ensure that this module can't be imported via
-an import statement.
-"""
-
-# /!\ This version compatibility check section must be Python 2 compatible. /!\
-
-import sys
-
-# Copied from setup.py
-PYTHON_REQUIRES = (3, 7)
-
-
-def version_str(version): # type: ignore
- return ".".join(str(v) for v in version)
-
-
-if sys.version_info[:2] < PYTHON_REQUIRES:
- raise SystemExit(
- "This version of pip does not support python {} (requires >={}).".format(
- version_str(sys.version_info[:2]), version_str(PYTHON_REQUIRES)
- )
- )
-
-# From here on, we can use Python 3 features, but the syntax must remain
-# Python 2 compatible.
-
-import runpy # noqa: E402
-from importlib.machinery import PathFinder # noqa: E402
-from os.path import dirname # noqa: E402
-
-PIP_SOURCES_ROOT = dirname(dirname(__file__))
-
-
-class PipImportRedirectingFinder:
- @classmethod
- def find_spec(self, fullname, path=None, target=None): # type: ignore
- if fullname != "pip":
- return None
-
- spec = PathFinder.find_spec(fullname, [PIP_SOURCES_ROOT], target)
- assert spec, (PIP_SOURCES_ROOT, fullname)
- return spec
-
-
-sys.meta_path.insert(0, PipImportRedirectingFinder())
-
-assert __name__ == "__main__", "Cannot run __pip-runner__.py as a non-main module"
-runpy.run_module("pip", run_name="__main__", alter_sys=True)
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/version.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/version.py
deleted file mode 100644
index 4552c02aff927f3c833e3a617d38c00e36b05ead..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/version.py
+++ /dev/null
@@ -1,4 +0,0 @@
-"""Version information"""
-
-__version__ = "2.5.2"
-__version_info__ = (2, 5, 2)
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/windows.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/windows.py
deleted file mode 100644
index ef972bdf29ce91b5abe3714eb92587458cf3f03c..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/windows.py
+++ /dev/null
@@ -1,182 +0,0 @@
-from __future__ import annotations
-
-import ctypes
-import os
-from functools import lru_cache
-from typing import Callable
-
-from .api import PlatformDirsABC
-
-
-class Windows(PlatformDirsABC):
- """`MSDN on where to store app data files
-    <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>`_.
-    Makes use of the
-    `appname <platformdirs.api.PlatformDirsABC.appname>`,
-    `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`,
-    `version <platformdirs.api.PlatformDirsABC.version>`,
-    `roaming <platformdirs.api.PlatformDirsABC.roaming>`,
-    `opinion <platformdirs.api.PlatformDirsABC.opinion>`."""
-
- @property
- def user_data_dir(self) -> str:
- """
- :return: data directory tied to the user, e.g.
- ``%USERPROFILE%\\AppData\\Local\\$appauthor\\$appname`` (not roaming) or
- ``%USERPROFILE%\\AppData\\Roaming\\$appauthor\\$appname`` (roaming)
- """
- const = "CSIDL_APPDATA" if self.roaming else "CSIDL_LOCAL_APPDATA"
- path = os.path.normpath(get_win_folder(const))
- return self._append_parts(path)
-
- def _append_parts(self, path: str, *, opinion_value: str | None = None) -> str:
- params = []
- if self.appname:
- if self.appauthor is not False:
- author = self.appauthor or self.appname
- params.append(author)
- params.append(self.appname)
- if opinion_value is not None and self.opinion:
- params.append(opinion_value)
- if self.version:
- params.append(self.version)
- return os.path.join(path, *params)
-
- @property
- def site_data_dir(self) -> str:
- """:return: data directory shared by users, e.g. ``C:\\ProgramData\\$appauthor\\$appname``"""
- path = os.path.normpath(get_win_folder("CSIDL_COMMON_APPDATA"))
- return self._append_parts(path)
-
- @property
- def user_config_dir(self) -> str:
- """:return: config directory tied to the user, same as `user_data_dir`"""
- return self.user_data_dir
-
- @property
- def site_config_dir(self) -> str:
- """:return: config directory shared by the users, same as `site_data_dir`"""
- return self.site_data_dir
-
- @property
- def user_cache_dir(self) -> str:
- """
- :return: cache directory tied to the user (if opinionated with ``Cache`` folder within ``$appname``) e.g.
- ``%USERPROFILE%\\AppData\\Local\\$appauthor\\$appname\\Cache\\$version``
- """
- path = os.path.normpath(get_win_folder("CSIDL_LOCAL_APPDATA"))
- return self._append_parts(path, opinion_value="Cache")
-
- @property
- def user_state_dir(self) -> str:
- """:return: state directory tied to the user, same as `user_data_dir`"""
- return self.user_data_dir
-
- @property
- def user_log_dir(self) -> str:
- """
- :return: log directory tied to the user, same as `user_data_dir` if not opinionated else ``Logs`` in it
- """
- path = self.user_data_dir
- if self.opinion:
- path = os.path.join(path, "Logs")
- return path
-
- @property
- def user_documents_dir(self) -> str:
- """
- :return: documents directory tied to the user e.g. ``%USERPROFILE%\\Documents``
- """
- return os.path.normpath(get_win_folder("CSIDL_PERSONAL"))
-
- @property
- def user_runtime_dir(self) -> str:
- """
- :return: runtime directory tied to the user, e.g.
- ``%USERPROFILE%\\AppData\\Local\\Temp\\$appauthor\\$appname``
- """
- path = os.path.normpath(os.path.join(get_win_folder("CSIDL_LOCAL_APPDATA"), "Temp"))
- return self._append_parts(path)
-
-
-def get_win_folder_from_env_vars(csidl_name: str) -> str:
- """Get folder from environment variables."""
- if csidl_name == "CSIDL_PERSONAL": # does not have an environment name
- return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Documents")
-
- env_var_name = {
- "CSIDL_APPDATA": "APPDATA",
- "CSIDL_COMMON_APPDATA": "ALLUSERSPROFILE",
- "CSIDL_LOCAL_APPDATA": "LOCALAPPDATA",
- }.get(csidl_name)
- if env_var_name is None:
- raise ValueError(f"Unknown CSIDL name: {csidl_name}")
- result = os.environ.get(env_var_name)
- if result is None:
- raise ValueError(f"Unset environment variable: {env_var_name}")
- return result
-
-
-def get_win_folder_from_registry(csidl_name: str) -> str:
- """Get folder from the registry.
-
- This is a fallback technique at best. I'm not sure if using the
- registry for this guarantees us the correct answer for all CSIDL_*
- names.
- """
- shell_folder_name = {
- "CSIDL_APPDATA": "AppData",
- "CSIDL_COMMON_APPDATA": "Common AppData",
- "CSIDL_LOCAL_APPDATA": "Local AppData",
- "CSIDL_PERSONAL": "Personal",
- }.get(csidl_name)
- if shell_folder_name is None:
- raise ValueError(f"Unknown CSIDL name: {csidl_name}")
-
- import winreg
-
- key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders")
- directory, _ = winreg.QueryValueEx(key, shell_folder_name)
- return str(directory)
-
-
-def get_win_folder_via_ctypes(csidl_name: str) -> str:
- """Get folder with ctypes."""
- csidl_const = {
- "CSIDL_APPDATA": 26,
- "CSIDL_COMMON_APPDATA": 35,
- "CSIDL_LOCAL_APPDATA": 28,
- "CSIDL_PERSONAL": 5,
- }.get(csidl_name)
- if csidl_const is None:
- raise ValueError(f"Unknown CSIDL name: {csidl_name}")
-
- buf = ctypes.create_unicode_buffer(1024)
- windll = getattr(ctypes, "windll") # noqa: B009 # using getattr to avoid false positive with mypy type checker
- windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf)
-
- # Downgrade to short path name if it has highbit chars.
- if any(ord(c) > 255 for c in buf):
- buf2 = ctypes.create_unicode_buffer(1024)
- if windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024):
- buf = buf2
-
- return buf.value
-
-
-def _pick_get_win_folder() -> Callable[[str], str]:
- if hasattr(ctypes, "windll"):
- return get_win_folder_via_ctypes
- try:
- import winreg # noqa: F401
- except ImportError:
- return get_win_folder_from_env_vars
- else:
- return get_win_folder_from_registry
-
-
-get_win_folder = lru_cache(maxsize=None)(_pick_get_win_folder())
-
-__all__ = [
- "Windows",
-]
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/ghm_loss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/ghm_loss.py
deleted file mode 100644
index 8969a23fd98bb746415f96ac5e4ad9e37ba3af52..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/ghm_loss.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..builder import LOSSES
-
-
-def _expand_onehot_labels(labels, label_weights, label_channels):
- bin_labels = labels.new_full((labels.size(0), label_channels), 0)
- inds = torch.nonzero(
- (labels >= 0) & (labels < label_channels), as_tuple=False).squeeze()
- if inds.numel() > 0:
- bin_labels[inds, labels[inds]] = 1
- bin_label_weights = label_weights.view(-1, 1).expand(
- label_weights.size(0), label_channels)
- return bin_labels, bin_label_weights
-
-
-# TODO: code refactoring to make it consistent with other losses
-@LOSSES.register_module()
-class GHMC(nn.Module):
- """GHM Classification Loss.
-
- Details of the theorem can be viewed in the paper
-    `Gradient Harmonized Single-stage Detector
-    <https://arxiv.org/abs/1811.05181>`_.
-
- Args:
- bins (int): Number of the unit regions for distribution calculation.
- momentum (float): The parameter for moving average.
- use_sigmoid (bool): Can only be true for BCE based loss now.
- loss_weight (float): The weight of the total GHM-C loss.
- """
-
- def __init__(self, bins=10, momentum=0, use_sigmoid=True, loss_weight=1.0):
- super(GHMC, self).__init__()
- self.bins = bins
- self.momentum = momentum
- edges = torch.arange(bins + 1).float() / bins
- self.register_buffer('edges', edges)
- self.edges[-1] += 1e-6
- if momentum > 0:
- acc_sum = torch.zeros(bins)
- self.register_buffer('acc_sum', acc_sum)
- self.use_sigmoid = use_sigmoid
- if not self.use_sigmoid:
- raise NotImplementedError
- self.loss_weight = loss_weight
-
- def forward(self, pred, target, label_weight, *args, **kwargs):
- """Calculate the GHM-C loss.
-
- Args:
- pred (float tensor of size [batch_num, class_num]):
- The direct prediction of classification fc layer.
- target (float tensor of size [batch_num, class_num]):
- Binary class target for each sample.
- label_weight (float tensor of size [batch_num, class_num]):
- the value is 1 if the sample is valid and 0 if ignored.
- Returns:
- The gradient harmonized loss.
- """
- # the target should be binary class label
- if pred.dim() != target.dim():
- target, label_weight = _expand_onehot_labels(
- target, label_weight, pred.size(-1))
- target, label_weight = target.float(), label_weight.float()
- edges = self.edges
- mmt = self.momentum
- weights = torch.zeros_like(pred)
-
- # gradient length
- g = torch.abs(pred.sigmoid().detach() - target)
-
- valid = label_weight > 0
- tot = max(valid.float().sum().item(), 1.0)
- n = 0 # n valid bins
- for i in range(self.bins):
- inds = (g >= edges[i]) & (g < edges[i + 1]) & valid
- num_in_bin = inds.sum().item()
- if num_in_bin > 0:
- if mmt > 0:
- self.acc_sum[i] = mmt * self.acc_sum[i] \
- + (1 - mmt) * num_in_bin
- weights[inds] = tot / self.acc_sum[i]
- else:
- weights[inds] = tot / num_in_bin
- n += 1
- if n > 0:
- weights = weights / n
-
- loss = F.binary_cross_entropy_with_logits(
- pred, target, weights, reduction='sum') / tot
- return loss * self.loss_weight
-
-
-# TODO: code refactoring to make it consistent with other losses
-@LOSSES.register_module()
-class GHMR(nn.Module):
- """GHM Regression Loss.
-
- Details of the theorem can be viewed in the paper
-    `Gradient Harmonized Single-stage Detector
-    <https://arxiv.org/abs/1811.05181>`_.
-
- Args:
- mu (float): The parameter for the Authentic Smooth L1 loss.
- bins (int): Number of the unit regions for distribution calculation.
- momentum (float): The parameter for moving average.
- loss_weight (float): The weight of the total GHM-R loss.
- """
-
- def __init__(self, mu=0.02, bins=10, momentum=0, loss_weight=1.0):
- super(GHMR, self).__init__()
- self.mu = mu
- self.bins = bins
- edges = torch.arange(bins + 1).float() / bins
- self.register_buffer('edges', edges)
- self.edges[-1] = 1e3
- self.momentum = momentum
- if momentum > 0:
- acc_sum = torch.zeros(bins)
- self.register_buffer('acc_sum', acc_sum)
- self.loss_weight = loss_weight
-
- # TODO: support reduction parameter
- def forward(self, pred, target, label_weight, avg_factor=None):
- """Calculate the GHM-R loss.
-
- Args:
- pred (float tensor of size [batch_num, 4 (* class_num)]):
- The prediction of box regression layer. Channel number can be 4
- or 4 * class_num depending on whether it is class-agnostic.
- target (float tensor of size [batch_num, 4 (* class_num)]):
- The target regression values with the same size of pred.
- label_weight (float tensor of size [batch_num, 4 (* class_num)]):
- The weight of each sample, 0 if ignored.
- Returns:
- The gradient harmonized loss.
- """
- mu = self.mu
- edges = self.edges
- mmt = self.momentum
-
- # ASL1 loss
- diff = pred - target
- loss = torch.sqrt(diff * diff + mu * mu) - mu
-
- # gradient length
- g = torch.abs(diff / torch.sqrt(mu * mu + diff * diff)).detach()
- weights = torch.zeros_like(g)
-
- valid = label_weight > 0
- tot = max(label_weight.float().sum().item(), 1.0)
- n = 0 # n: valid bins
- for i in range(self.bins):
- inds = (g >= edges[i]) & (g < edges[i + 1]) & valid
- num_in_bin = inds.sum().item()
- if num_in_bin > 0:
- n += 1
- if mmt > 0:
- self.acc_sum[i] = mmt * self.acc_sum[i] \
- + (1 - mmt) * num_in_bin
- weights[inds] = tot / self.acc_sum[i]
- else:
- weights[inds] = tot / num_in_bin
- if n > 0:
- weights /= n
-
- loss = loss * weights
- loss = loss.sum() / tot
- return loss * self.loss_weight
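-
-
-# Shape sketch for both losses (hypothetical tensors; 8 samples, 4 binary targets):
-#
-#   pred = torch.randn(8, 4)
-#   target = torch.randint(0, 2, (8, 4)).float()
-#   weight = torch.ones(8, 4)
-#   GHMC()(pred, target, weight)  # -> scalar classification loss
-#   GHMR()(pred, target, weight)  # -> scalar regression loss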
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/match_costs/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/match_costs/__init__.py
deleted file mode 100644
index add5e0d394034d89b2d47c314ff1938294deb6ea..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/match_costs/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from .builder import build_match_cost
-from .match_cost import BBoxL1Cost, ClassificationCost, FocalLossCost, IoUCost
-
-__all__ = [
- 'build_match_cost', 'ClassificationCost', 'BBoxL1Cost', 'IoUCost',
- 'FocalLossCost'
-]
diff --git a/spaces/Rongjiehuang/GenerSpeech/modules/GenerSpeech/model/prosody_util.py b/spaces/Rongjiehuang/GenerSpeech/modules/GenerSpeech/model/prosody_util.py
deleted file mode 100644
index 113c39df9d1b0144aa5a5f00505c7e08bfc6ea11..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/GenerSpeech/modules/GenerSpeech/model/prosody_util.py
+++ /dev/null
@@ -1,385 +0,0 @@
-from torch import nn
-import copy
-import torch
-from utils.hparams import hparams
-from modules.GenerSpeech.model.wavenet import WN
-import math
-
-from modules.fastspeech.tts_modules import LayerNorm
-import torch.nn.functional as F
-from utils.tts_utils import group_hidden_by_segs, sequence_mask
-
-from scipy.cluster.vq import kmeans2
-
-
-class VQEmbeddingEMA(nn.Module):
- def __init__(self, n_embeddings, embedding_dim, commitment_cost=0.25, decay=0.999, epsilon=1e-5,
- print_vq_prob=False):
- super(VQEmbeddingEMA, self).__init__()
- self.commitment_cost = commitment_cost
- self.n_embeddings = n_embeddings
- self.decay = decay
- self.epsilon = epsilon
- self.print_vq_prob = print_vq_prob
- self.register_buffer('data_initialized', torch.zeros(1))
- init_bound = 1 / 512
- embedding = torch.Tensor(n_embeddings, embedding_dim)
- embedding.uniform_(-init_bound, init_bound)
- self.register_buffer("embedding", embedding)
- self.register_buffer("ema_count", torch.zeros(n_embeddings))
- self.register_buffer("ema_weight", self.embedding.clone())
-
- def encode(self, x):
- B, T, _ = x.shape
- M, D = self.embedding.size()
- x_flat = x.detach().reshape(-1, D)
-
- distances = torch.addmm(torch.sum(self.embedding ** 2, dim=1) +
- torch.sum(x_flat ** 2, dim=1, keepdim=True),
- x_flat, self.embedding.t(),
- alpha=-2.0, beta=1.0) # [B*T_mel, N_vq]
- indices = torch.argmin(distances.float(), dim=-1) # [B*T_mel]
- quantized = F.embedding(indices, self.embedding)
- quantized = quantized.view_as(x)
- return x_flat, quantized, indices
-
-    def forward(self, x):
-        """Quantize a batch of feature sequences.
-
-        :param x: [B, T, D]
-        :return: quantized [B, T, D], commitment loss, code indices [B, T], perplexity
-        """
- B, T, _ = x.shape
- M, D = self.embedding.size()
- if self.training and self.data_initialized.item() == 0:
- print('| running kmeans in VQVAE') # data driven initialization for the embeddings
- x_flat = x.detach().reshape(-1, D)
- rp = torch.randperm(x_flat.size(0))
- kd = kmeans2(x_flat[rp].data.cpu().numpy(), self.n_embeddings, minit='points')
- self.embedding.copy_(torch.from_numpy(kd[0]))
- x_flat, quantized, indices = self.encode(x)
- encodings = F.one_hot(indices, M).float()
- self.ema_weight.copy_(torch.matmul(encodings.t(), x_flat))
- self.ema_count.copy_(torch.sum(encodings, dim=0))
-
- x_flat, quantized, indices = self.encode(x)
- encodings = F.one_hot(indices, M).float()
- indices = indices.reshape(B, T)
-
- if self.training and self.data_initialized.item() != 0:
- self.ema_count = self.decay * self.ema_count + (1 - self.decay) * torch.sum(encodings, dim=0)
-
- n = torch.sum(self.ema_count)
- self.ema_count = (self.ema_count + self.epsilon) / (n + M * self.epsilon) * n
-
- dw = torch.matmul(encodings.t(), x_flat)
- self.ema_weight = self.decay * self.ema_weight + (1 - self.decay) * dw
-
- self.embedding = self.ema_weight / self.ema_count.unsqueeze(-1)
- self.data_initialized.fill_(1)
-
- e_latent_loss = F.mse_loss(x, quantized.detach(), reduction='none')
- nonpadding = (x.abs().sum(-1) > 0).float()
- e_latent_loss = (e_latent_loss.mean(-1) * nonpadding).sum() / nonpadding.sum()
- loss = self.commitment_cost * e_latent_loss
-
- quantized = x + (quantized - x).detach()
-
- avg_probs = torch.mean(encodings, dim=0)
- perplexity = torch.exp(-torch.sum(avg_probs * torch.log(avg_probs + 1e-10)))
- if self.print_vq_prob:
- print("| VQ code avg_probs: ", avg_probs)
- return quantized, loss, indices, perplexity
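-
-# Shape sketch for VQEmbeddingEMA (hypothetical sizes): with x of shape
-# [B, T, D] and n_embeddings = M, encode() returns x_flat [B*T, D], a
-# quantized tensor [B, T, D] and flat code indices [B*T]; forward() returns
-# the straight-through quantized tensor, the commitment loss, [B, T] code
-# indices and the codebook perplexity.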
-
-class CrossAttenLayer(nn.Module):
- def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1):
- super(CrossAttenLayer, self).__init__()
- self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
- self.linear1 = nn.Linear(d_model, dim_feedforward)
- self.dropout1 = nn.Dropout(dropout)
- self.norm1 = nn.LayerNorm(d_model)
- self.linear2 = nn.Linear(dim_feedforward, d_model)
- self.dropout2 = nn.Dropout(dropout)
- self.norm2 = nn.LayerNorm(d_model)
- self.activation = nn.ReLU()
-
- def forward(self, src, local_emotion, emotion_key_padding_mask=None, forcing=False):
- # src: (Tph, B, 256) local_emotion: (Temo, B, 256) emotion_key_padding_mask: (B, Temo)
- if forcing:
- maxlength = src.shape[0]
- k = local_emotion.shape[0] / src.shape[0]
- lengths1 = torch.ceil(torch.tensor([i for i in range(maxlength)]).to(src.device) * k) + 1
- lengths2 = torch.floor(torch.tensor([i for i in range(maxlength)]).to(src.device) * k) - 1
- mask1 = sequence_mask(lengths1, local_emotion.shape[0])
- mask2 = sequence_mask(lengths2, local_emotion.shape[0])
- mask = mask1.float() - mask2.float()
- attn_emo = mask.repeat(src.shape[1], 1, 1) # (B, Tph, Temo)
- src2 = torch.matmul(local_emotion.permute(1, 2, 0), attn_emo.float().transpose(1, 2)).permute(2, 0, 1)
- else:
- src2, attn_emo = self.multihead_attn(src, local_emotion, local_emotion, key_padding_mask=emotion_key_padding_mask)
- src = src + self.dropout1(src2)
- src = self.norm1(src)
- src2 = self.linear2(self.activation(self.linear1(src)))
- src = src + self.dropout2(src2)
- src = self.norm2(src)
- return src, attn_emo
-
-
-class ProsodyAligner(nn.Module):
- def __init__(self, num_layers, guided_sigma=0.3, guided_layers=None, norm=None):
- super(ProsodyAligner, self).__init__()
- self.layers = nn.ModuleList([CrossAttenLayer(d_model=hparams['hidden_size'], nhead=2) for _ in range(num_layers)])
- self.num_layers = num_layers
- self.norm = norm
- self.guided_sigma = guided_sigma
- self.guided_layers = guided_layers if guided_layers is not None else num_layers
-
- def forward(self, src, local_emotion, src_key_padding_mask=None, emotion_key_padding_mask=None, forcing=False):
- output = src
- guided_loss = 0
- attn_emo_list = []
- for i, mod in enumerate(self.layers):
- # output: (Tph, B, 256), global_emotion: (1, B, 256), local_emotion: (Temo, B, 256) mask: None, src_key_padding_mask: (B, Tph),
- # emotion_key_padding_mask: (B, Temo)
- output, attn_emo = mod(output, local_emotion, emotion_key_padding_mask=emotion_key_padding_mask, forcing=forcing)
- attn_emo_list.append(attn_emo.unsqueeze(1))
- # attn_emo: (B, Tph, Temo) attn: (B, Tph, Tph)
- if i < self.guided_layers and src_key_padding_mask is not None:
- s_length = (~src_key_padding_mask).float().sum(-1) # B
- emo_length = (~emotion_key_padding_mask).float().sum(-1)
- attn_w_emo = _make_guided_attention_mask(src_key_padding_mask.size(-1), s_length, emotion_key_padding_mask.size(-1), emo_length, self.guided_sigma)
-
- g_loss_emo = attn_emo * attn_w_emo # N, L, S
- non_padding_mask = (~src_key_padding_mask).unsqueeze(-1) & (~emotion_key_padding_mask).unsqueeze(1)
- guided_loss = g_loss_emo[non_padding_mask].mean() + guided_loss
-
- if self.norm is not None:
- output = self.norm(output)
-
- return output, guided_loss, attn_emo_list
-
-def _make_guided_attention_mask(ilen, rilen, olen, rolen, sigma):
- grid_x, grid_y = torch.meshgrid(torch.arange(ilen, device=rilen.device), torch.arange(olen, device=rolen.device))
- grid_x = grid_x.unsqueeze(0).expand(rilen.size(0), -1, -1)
- grid_y = grid_y.unsqueeze(0).expand(rolen.size(0), -1, -1)
- rilen = rilen.unsqueeze(1).unsqueeze(1)
- rolen = rolen.unsqueeze(1).unsqueeze(1)
- return 1.0 - torch.exp(
- -((grid_y.float() / rolen - grid_x.float() / rilen) ** 2) / (2 * (sigma ** 2))
- )
-
-class LocalStyleAdaptor(nn.Module):
- def __init__(self, hidden_size, num_vq_codes=64, padding_idx=0):
- super(LocalStyleAdaptor, self).__init__()
- self.encoder = ConvBlocks(80, hidden_size, [1] * 5, 5, dropout=hparams['vae_dropout'])
- self.n_embed = num_vq_codes
- self.vqvae = VQEmbeddingEMA(self.n_embed, hidden_size, commitment_cost=hparams['lambda_commit'])
- self.wavenet = WN(hidden_channels=80, gin_channels=80, kernel_size=3, dilation_rate=1, n_layers=4)
- self.padding_idx = padding_idx
- self.hidden_size = hidden_size
-
-    def forward(self, ref_mels, mel2ph=None, no_vq=False):
-        """Extract a prosody representation, optionally vector-quantized.
-
-        :param ref_mels: [B, T, 80]
-        :return: prosody [B, T', H] if no_vq, else (quantized [B, T', H], vq loss, perplexity)
-        """
- padding_mask = ref_mels[:, :, 0].eq(self.padding_idx).data
- ref_mels = self.wavenet(ref_mels.transpose(1, 2), x_mask=(~padding_mask).unsqueeze(1).repeat([1, 80, 1])).transpose(1, 2)
- if mel2ph is not None:
- ref_ph, _ = group_hidden_by_segs(ref_mels, mel2ph, torch.max(mel2ph))
- else:
- ref_ph = ref_mels
- prosody = self.encoder(ref_ph)
- if no_vq:
- return prosody
- z, vq_loss, vq_tokens, ppl = self.vqvae(prosody)
- vq_loss = vq_loss.mean()
- return z, vq_loss, ppl
-
-
-
-
-class LambdaLayer(nn.Module):
- def __init__(self, lambd):
- super(LambdaLayer, self).__init__()
- self.lambd = lambd
-
- def forward(self, x):
- return self.lambd(x)
-
-
-class Conv1d(nn.Conv1d):
- """A wrapper around nn.Conv1d, that works on (batch, time, channels)"""
-
- def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, dilation=1, groups=1, bias=True, padding=0):
- super(Conv1d, self).__init__(in_channels=in_channels, out_channels=out_channels,
- kernel_size=kernel_size, stride=stride, dilation=dilation,
- groups=groups, bias=bias, padding=padding)
-
- def forward(self, x):
- return super().forward(x.transpose(2, 1)).transpose(2, 1)
-
-
-def init_weights_func(m):
- classname = m.__class__.__name__
- if classname.find("Conv1d") != -1:
- torch.nn.init.xavier_uniform_(m.weight)
-
-
-class ResidualBlock(nn.Module):
- """Implements conv->PReLU->norm n-times"""
-
- def __init__(self, channels, kernel_size, dilation, n=2, norm_type='bn', dropout=0.0,
- c_multiple=2, ln_eps=1e-12):
- super(ResidualBlock, self).__init__()
-
- if norm_type == 'bn':
- norm_builder = lambda: nn.BatchNorm1d(channels)
- elif norm_type == 'in':
- norm_builder = lambda: nn.InstanceNorm1d(channels, affine=True)
- elif norm_type == 'gn':
- norm_builder = lambda: nn.GroupNorm(8, channels)
- elif norm_type == 'ln':
- norm_builder = lambda: LayerNorm(channels, dim=1, eps=ln_eps)
- else:
- norm_builder = lambda: nn.Identity()
-
- self.blocks = [
- nn.Sequential(
- norm_builder(),
- nn.Conv1d(channels, c_multiple * channels, kernel_size, dilation=dilation,
- padding=(dilation * (kernel_size - 1)) // 2),
- LambdaLayer(lambda x: x * kernel_size ** -0.5),
- nn.GELU(),
- nn.Conv1d(c_multiple * channels, channels, 1, dilation=dilation),
- )
- for i in range(n)
- ]
-
- self.blocks = nn.ModuleList(self.blocks)
- self.dropout = dropout
-
- def forward(self, x):
- nonpadding = (x.abs().sum(1) > 0).float()[:, None, :]
- for b in self.blocks:
- x_ = b(x)
- if self.dropout > 0 and self.training:
- x_ = F.dropout(x_, self.dropout, training=self.training)
- x = x + x_
- x = x * nonpadding
- return x
-
-
-class Pad(nn.ZeroPad2d):
- def __init__(self, kernel_size, dilation):
- pad_total = dilation * (kernel_size - 1)
- begin = pad_total // 2
- end = pad_total - begin
-
- super(Pad, self).__init__((begin, end, begin, end))
-
-
-class ZeroTemporalPad(nn.ZeroPad2d):
-    """Pad sequences to equal length in the temporal dimension"""
-
- def __init__(self, kernel_size, dilation, causal=False):
- total_pad = (dilation * (kernel_size - 1))
-
- if causal:
- super(ZeroTemporalPad, self).__init__((total_pad, 0))
- else:
- begin = total_pad // 2
- end = total_pad - begin
- super(ZeroTemporalPad, self).__init__((begin, end))
-
-
-class ConvBlocks(nn.Module):
- """Decodes the expanded phoneme encoding into spectrograms"""
-
- def __init__(self, channels, out_dims, dilations, kernel_size,
- norm_type='ln', layers_in_block=2, c_multiple=2,
- dropout=0.0, ln_eps=1e-5, init_weights=True):
- super(ConvBlocks, self).__init__()
- self.res_blocks = nn.Sequential(
- *[ResidualBlock(channels, kernel_size, d,
- n=layers_in_block, norm_type=norm_type, c_multiple=c_multiple,
- dropout=dropout, ln_eps=ln_eps)
- for d in dilations],
- )
- if norm_type == 'bn':
- norm = nn.BatchNorm1d(channels)
- elif norm_type == 'in':
- norm = nn.InstanceNorm1d(channels, affine=True)
- elif norm_type == 'gn':
- norm = nn.GroupNorm(8, channels)
- elif norm_type == 'ln':
- norm = LayerNorm(channels, dim=1, eps=ln_eps)
- self.last_norm = norm
- self.post_net1 = nn.Conv1d(channels, out_dims, kernel_size=3, padding=1)
- if init_weights:
- self.apply(init_weights_func)
-
- def forward(self, x):
- """
-
- :param x: [B, T, H]
- :return: [B, T, H]
- """
- x = x.transpose(1, 2)
- nonpadding = (x.abs().sum(1) > 0).float()[:, None, :]
- x = self.res_blocks(x) * nonpadding
- x = self.last_norm(x) * nonpadding
- x = self.post_net1(x) * nonpadding
- return x.transpose(1, 2)
-
-
-class TextConvEncoder(ConvBlocks):
- def __init__(self, embed_tokens, channels, out_dims, dilations, kernel_size,
- norm_type='ln', layers_in_block=2, c_multiple=2,
- dropout=0.0, ln_eps=1e-5, init_weights=True):
- super().__init__(channels, out_dims, dilations, kernel_size,
- norm_type, layers_in_block, c_multiple,
- dropout, ln_eps, init_weights)
- self.embed_tokens = embed_tokens
- self.embed_scale = math.sqrt(channels)
-
- def forward(self, txt_tokens):
- """
-
- :param txt_tokens: [B, T]
- :return: {
- 'encoder_out': [B x T x C]
- }
- """
- x = self.embed_scale * self.embed_tokens(txt_tokens)
- return super().forward(x)
-
-
-class ConditionalConvBlocks(ConvBlocks):
- def __init__(self, channels, g_channels, out_dims, dilations, kernel_size,
- norm_type='ln', layers_in_block=2, c_multiple=2,
- dropout=0.0, ln_eps=1e-5, init_weights=True, is_BTC=True):
- super().__init__(channels, out_dims, dilations, kernel_size,
- norm_type, layers_in_block, c_multiple,
- dropout, ln_eps, init_weights)
- self.g_prenet = nn.Conv1d(g_channels, channels, 3, padding=1)
- self.is_BTC = is_BTC
- if init_weights:
- self.g_prenet.apply(init_weights_func)
-
- def forward(self, x, g, x_mask):
- if self.is_BTC:
- x = x.transpose(1, 2)
- g = g.transpose(1, 2)
- x_mask = x_mask.transpose(1, 2)
- x = x + self.g_prenet(g)
- x = x * x_mask
-
- if not self.is_BTC:
- x = x.transpose(1, 2)
- x = super(ConditionalConvBlocks, self).forward(x) # input needs to be BTC
- if not self.is_BTC:
- x = x.transpose(1, 2)
- return x
diff --git a/spaces/ServerX/PorcoDiaz/demucs/__main__.py b/spaces/ServerX/PorcoDiaz/demucs/__main__.py
deleted file mode 100644
index 5148f20623bdaa827777558844796ded1876d7d0..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/demucs/__main__.py
+++ /dev/null
@@ -1,317 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import json
-import math
-import os
-import sys
-import time
-from dataclasses import dataclass, field
-
-import torch as th
-from torch import distributed, nn
-from torch.nn.parallel.distributed import DistributedDataParallel
-
-from .augment import FlipChannels, FlipSign, Remix, Scale, Shift
-from .compressed import get_compressed_datasets
-from .model import Demucs
-from .parser import get_name, get_parser
-from .raw import Rawset
-from .repitch import RepitchedWrapper
-from .pretrained import load_pretrained, SOURCES
-from .tasnet import ConvTasNet
-from .test import evaluate
-from .train import train_model, validate_model
-from .utils import (human_seconds, load_model, save_model, get_state,
- save_state, sizeof_fmt, get_quantizer)
-from .wav import get_wav_datasets, get_musdb_wav_datasets
-
-
-@dataclass
-class SavedState:
- metrics: list = field(default_factory=list)
- last_state: dict = None
- best_state: dict = None
- optimizer: dict = None
-
-
-def main():
- parser = get_parser()
- args = parser.parse_args()
- name = get_name(parser, args)
- print(f"Experiment {name}")
-
- if args.musdb is None and args.rank == 0:
- print(
- "You must provide the path to the MusDB dataset with the --musdb flag. "
- "To download the MusDB dataset, see https://sigsep.github.io/datasets/musdb.html.",
- file=sys.stderr)
- sys.exit(1)
-
- eval_folder = args.evals / name
- eval_folder.mkdir(exist_ok=True, parents=True)
- args.logs.mkdir(exist_ok=True)
- metrics_path = args.logs / f"{name}.json"
- args.checkpoints.mkdir(exist_ok=True, parents=True)
- args.models.mkdir(exist_ok=True, parents=True)
-
- if args.device is None:
- device = "cpu"
- if th.cuda.is_available():
- device = "cuda"
- else:
- device = args.device
-
- th.manual_seed(args.seed)
-    # Prevent too many threads from being started when running `museval`, as it
-    # can be quite inefficient on NUMA architectures.
- os.environ["OMP_NUM_THREADS"] = "1"
- os.environ["MKL_NUM_THREADS"] = "1"
-
- if args.world_size > 1:
- if device != "cuda" and args.rank == 0:
- print("Error: distributed training is only available with cuda device", file=sys.stderr)
- sys.exit(1)
- th.cuda.set_device(args.rank % th.cuda.device_count())
- distributed.init_process_group(backend="nccl",
- init_method="tcp://" + args.master,
- rank=args.rank,
- world_size=args.world_size)
-
- checkpoint = args.checkpoints / f"{name}.th"
- checkpoint_tmp = args.checkpoints / f"{name}.th.tmp"
- if args.restart and checkpoint.exists() and args.rank == 0:
- checkpoint.unlink()
-
- if args.test or args.test_pretrained:
- args.epochs = 1
- args.repeat = 0
- if args.test:
- model = load_model(args.models / args.test)
- else:
- model = load_pretrained(args.test_pretrained)
- elif args.tasnet:
- model = ConvTasNet(audio_channels=args.audio_channels,
- samplerate=args.samplerate, X=args.X,
- segment_length=4 * args.samples,
- sources=SOURCES)
- else:
- model = Demucs(
- audio_channels=args.audio_channels,
- channels=args.channels,
- context=args.context,
- depth=args.depth,
- glu=args.glu,
- growth=args.growth,
- kernel_size=args.kernel_size,
- lstm_layers=args.lstm_layers,
- rescale=args.rescale,
- rewrite=args.rewrite,
- stride=args.conv_stride,
- resample=args.resample,
- normalize=args.normalize,
- samplerate=args.samplerate,
- segment_length=4 * args.samples,
- sources=SOURCES,
- )
- model.to(device)
- if args.init:
- model.load_state_dict(load_pretrained(args.init).state_dict())
-
- if args.show:
- print(model)
- size = sizeof_fmt(4 * sum(p.numel() for p in model.parameters()))
- print(f"Model size {size}")
- return
-
- try:
- saved = th.load(checkpoint, map_location='cpu')
- except IOError:
- saved = SavedState()
-
- optimizer = th.optim.Adam(model.parameters(), lr=args.lr)
-
-    quantizer = get_quantizer(model, args, optimizer)
-
- if saved.last_state is not None:
- model.load_state_dict(saved.last_state, strict=False)
- if saved.optimizer is not None:
- optimizer.load_state_dict(saved.optimizer)
-
- model_name = f"{name}.th"
- if args.save_model:
- if args.rank == 0:
- model.to("cpu")
- model.load_state_dict(saved.best_state)
- save_model(model, quantizer, args, args.models / model_name)
- return
- elif args.save_state:
- model_name = f"{args.save_state}.th"
- if args.rank == 0:
- model.to("cpu")
- model.load_state_dict(saved.best_state)
- state = get_state(model, quantizer)
- save_state(state, args.models / model_name)
- return
-
- if args.rank == 0:
- done = args.logs / f"{name}.done"
- if done.exists():
- done.unlink()
-
- augment = [Shift(args.data_stride)]
- if args.augment:
- augment += [FlipSign(), FlipChannels(), Scale(),
- Remix(group_size=args.remix_group_size)]
- augment = nn.Sequential(*augment).to(device)
-    print("Augmentation pipeline:", augment)
-
- if args.mse:
- criterion = nn.MSELoss()
- else:
- criterion = nn.L1Loss()
-
-    # Set the number of samples so that all convolution windows are full.
-    # This prevents a hard-to-debug mistake where the prediction is shifted
-    # relative to the input mixture.
- samples = model.valid_length(args.samples)
- print(f"Number of training samples adjusted to {samples}")
- samples = samples + args.data_stride
- if args.repitch:
- # We need a bit more audio samples, to account for potential
- # tempo change.
- samples = math.ceil(samples / (1 - 0.01 * args.max_tempo))
-
- args.metadata.mkdir(exist_ok=True, parents=True)
- if args.raw:
- train_set = Rawset(args.raw / "train",
- samples=samples,
- channels=args.audio_channels,
- streams=range(1, len(model.sources) + 1),
- stride=args.data_stride)
-
- valid_set = Rawset(args.raw / "valid", channels=args.audio_channels)
- elif args.wav:
- train_set, valid_set = get_wav_datasets(args, samples, model.sources)
- elif args.is_wav:
- train_set, valid_set = get_musdb_wav_datasets(args, samples, model.sources)
- else:
- train_set, valid_set = get_compressed_datasets(args, samples)
-
- if args.repitch:
- train_set = RepitchedWrapper(
- train_set,
- proba=args.repitch,
- max_tempo=args.max_tempo)
-
- best_loss = float("inf")
- for epoch, metrics in enumerate(saved.metrics):
- print(f"Epoch {epoch:03d}: "
- f"train={metrics['train']:.8f} "
- f"valid={metrics['valid']:.8f} "
- f"best={metrics['best']:.4f} "
- f"ms={metrics.get('true_model_size', 0):.2f}MB "
- f"cms={metrics.get('compressed_model_size', 0):.2f}MB "
- f"duration={human_seconds(metrics['duration'])}")
- best_loss = metrics['best']
-
- if args.world_size > 1:
- dmodel = DistributedDataParallel(model,
- device_ids=[th.cuda.current_device()],
- output_device=th.cuda.current_device())
- else:
- dmodel = model
-
- for epoch in range(len(saved.metrics), args.epochs):
- begin = time.time()
- model.train()
- train_loss, model_size = train_model(
- epoch, train_set, dmodel, criterion, optimizer, augment,
- quantizer=quantizer,
- batch_size=args.batch_size,
- device=device,
- repeat=args.repeat,
- seed=args.seed,
- diffq=args.diffq,
- workers=args.workers,
- world_size=args.world_size)
- model.eval()
- valid_loss = validate_model(
- epoch, valid_set, model, criterion,
- device=device,
- rank=args.rank,
- split=args.split_valid,
- overlap=args.overlap,
- world_size=args.world_size)
-
- ms = 0
- cms = 0
- if quantizer and args.rank == 0:
- ms = quantizer.true_model_size()
- cms = quantizer.compressed_model_size(num_workers=min(40, args.world_size * 10))
-
- duration = time.time() - begin
- if valid_loss < best_loss and ms <= args.ms_target:
- best_loss = valid_loss
- saved.best_state = {
- key: value.to("cpu").clone()
- for key, value in model.state_dict().items()
- }
-
- saved.metrics.append({
- "train": train_loss,
- "valid": valid_loss,
- "best": best_loss,
- "duration": duration,
- "model_size": model_size,
- "true_model_size": ms,
- "compressed_model_size": cms,
- })
- if args.rank == 0:
- json.dump(saved.metrics, open(metrics_path, "w"))
-
- saved.last_state = model.state_dict()
- saved.optimizer = optimizer.state_dict()
- if args.rank == 0 and not args.test:
- th.save(saved, checkpoint_tmp)
- checkpoint_tmp.rename(checkpoint)
-
- print(f"Epoch {epoch:03d}: "
- f"train={train_loss:.8f} valid={valid_loss:.8f} best={best_loss:.4f} ms={ms:.2f}MB "
- f"cms={cms:.2f}MB "
- f"duration={human_seconds(duration)}")
-
- if args.world_size > 1:
- distributed.barrier()
-
- del dmodel
- model.load_state_dict(saved.best_state)
- if args.eval_cpu:
- device = "cpu"
- model.to(device)
- model.eval()
- evaluate(model, args.musdb, eval_folder,
- is_wav=args.is_wav,
- rank=args.rank,
- world_size=args.world_size,
- device=device,
- save=args.save,
- split=args.split_valid,
- shifts=args.shifts,
- overlap=args.overlap,
- workers=args.eval_workers)
- model.to("cpu")
- if args.rank == 0:
- if not (args.test or args.test_pretrained):
- save_model(model, quantizer, args, args.models / model_name)
- print("done")
- done.write_text("done")
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/ServerX/PorcoDiaz/demucs/utils.py b/spaces/ServerX/PorcoDiaz/demucs/utils.py
deleted file mode 100644
index 4364184059b1afe3c8379c77793a8e76dccf9699..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/demucs/utils.py
+++ /dev/null
@@ -1,323 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import errno
-import functools
-import hashlib
-import inspect
-import io
-import os
-import random
-import socket
-import tempfile
-import warnings
-import zlib
-from contextlib import contextmanager
-
-from diffq import UniformQuantizer, DiffQuantizer
-import torch as th
-import tqdm
-from torch import distributed
-from torch.nn import functional as F
-
-
-def center_trim(tensor, reference):
- """
- Center trim `tensor` with respect to `reference`, along the last dimension.
- `reference` can also be a number, representing the length to trim to.
- If the size difference != 0 mod 2, the extra sample is removed on the right side.
- """
- if hasattr(reference, "size"):
- reference = reference.size(-1)
- delta = tensor.size(-1) - reference
- if delta < 0:
- raise ValueError("tensor must be larger than reference. " f"Delta is {delta}.")
- if delta:
- tensor = tensor[..., delta // 2:-(delta - delta // 2)]
- return tensor
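-
-
-# e.g. (a sketch): center_trim(th.zeros(1, 2, 10), 6) keeps samples 2..7 and
-# returns shape (1, 2, 6); for an odd delta the extra sample is removed on the
-# right, matching the docstring above.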
-
-
-def average_metric(metric, count=1.):
- """
- Average `metric` which should be a float across all hosts. `count` should be
- the weight for this particular host (i.e. number of examples).
- """
- metric = th.tensor([count, count * metric], dtype=th.float32, device='cuda')
- distributed.all_reduce(metric, op=distributed.ReduceOp.SUM)
- return metric[1].item() / metric[0].item()
-
-
-def free_port(host='', low=20000, high=40000):
- """
- Return a port number that is most likely free.
- This could suffer from a race condition although
- it should be quite rare.
- """
- sock = socket.socket()
- while True:
- port = random.randint(low, high)
- try:
- sock.bind((host, port))
- except OSError as error:
- if error.errno == errno.EADDRINUSE:
- continue
- raise
- return port
-
-
-def sizeof_fmt(num, suffix='B'):
- """
- Given `num` bytes, return human readable size.
- Taken from https://stackoverflow.com/a/1094933
- """
- for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']:
- if abs(num) < 1024.0:
- return "%3.1f%s%s" % (num, unit, suffix)
- num /= 1024.0
- return "%.1f%s%s" % (num, 'Yi', suffix)
-
-
-def human_seconds(seconds, display='.2f'):
- """
- Given `seconds` seconds, return human readable duration.
- """
- value = seconds * 1e6
- ratios = [1e3, 1e3, 60, 60, 24]
- names = ['us', 'ms', 's', 'min', 'hrs', 'days']
- last = names.pop(0)
- for name, ratio in zip(names, ratios):
- if value / ratio < 0.3:
- break
- value /= ratio
- last = name
- return f"{format(value, display)} {last}"
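-
-
-# e.g. (a sketch): sizeof_fmt(3_600_000) == '3.4MiB' and
-# human_seconds(4000) == '1.11 hrs'; both walk up the unit ladder until the
-# value becomes readable.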
-
-
-class TensorChunk:
- def __init__(self, tensor, offset=0, length=None):
- total_length = tensor.shape[-1]
- assert offset >= 0
- assert offset < total_length
-
- if length is None:
- length = total_length - offset
- else:
- length = min(total_length - offset, length)
-
- self.tensor = tensor
- self.offset = offset
- self.length = length
- self.device = tensor.device
-
- @property
- def shape(self):
- shape = list(self.tensor.shape)
- shape[-1] = self.length
- return shape
-
- def padded(self, target_length):
- delta = target_length - self.length
- total_length = self.tensor.shape[-1]
- assert delta >= 0
-
- start = self.offset - delta // 2
- end = start + target_length
-
- correct_start = max(0, start)
- correct_end = min(total_length, end)
-
- pad_left = correct_start - start
- pad_right = end - correct_end
-
- out = F.pad(self.tensor[..., correct_start:correct_end], (pad_left, pad_right))
- assert out.shape[-1] == target_length
- return out
-
-
-def tensor_chunk(tensor_or_chunk):
- if isinstance(tensor_or_chunk, TensorChunk):
- return tensor_or_chunk
- else:
- assert isinstance(tensor_or_chunk, th.Tensor)
- return TensorChunk(tensor_or_chunk)
-
-
-def apply_model(model, mix, shifts=None, split=False,
- overlap=0.25, transition_power=1., progress=False):
- """
- Apply model to a given mixture.
-
- Args:
- shifts (int): if > 0, will shift in time `mix` by a random amount between 0 and 0.5 sec
-            and apply the opposite shift to the output. This is repeated `shifts` times and
- all predictions are averaged. This effectively makes the model time equivariant
- and improves SDR by up to 0.2 points.
- split (bool): if True, the input will be broken down in 8 seconds extracts
- and predictions will be performed individually on each and concatenated.
- Useful for model with large memory footprint like Tasnet.
- progress (bool): if True, show a progress bar (requires split=True)
- """
- assert transition_power >= 1, "transition_power < 1 leads to weird behavior."
- device = mix.device
- channels, length = mix.shape
- if split:
- out = th.zeros(len(model.sources), channels, length, device=device)
- sum_weight = th.zeros(length, device=device)
- segment = model.segment_length
- stride = int((1 - overlap) * segment)
- offsets = range(0, length, stride)
- scale = stride / model.samplerate
- if progress:
- offsets = tqdm.tqdm(offsets, unit_scale=scale, ncols=120, unit='seconds')
- # We start from a triangle shaped weight, with maximal weight in the middle
- # of the segment. Then we normalize and take to the power `transition_power`.
- # Large values of transition power will lead to sharper transitions.
- weight = th.cat([th.arange(1, segment // 2 + 1),
- th.arange(segment - segment // 2, 0, -1)]).to(device)
- assert len(weight) == segment
- # If the overlap < 50%, this will translate to linear transition when
- # transition_power is 1.
- weight = (weight / weight.max())**transition_power
- for offset in offsets:
- chunk = TensorChunk(mix, offset, segment)
- chunk_out = apply_model(model, chunk, shifts=shifts)
- chunk_length = chunk_out.shape[-1]
- out[..., offset:offset + segment] += weight[:chunk_length] * chunk_out
- sum_weight[offset:offset + segment] += weight[:chunk_length]
- assert sum_weight.min() > 0
- out /= sum_weight
- return out
- elif shifts:
- max_shift = int(0.5 * model.samplerate)
- mix = tensor_chunk(mix)
- padded_mix = mix.padded(length + 2 * max_shift)
- out = 0
- for _ in range(shifts):
- offset = random.randint(0, max_shift)
- shifted = TensorChunk(padded_mix, offset, length + max_shift - offset)
- shifted_out = apply_model(model, shifted)
- out += shifted_out[..., max_shift - offset:]
- out /= shifts
- return out
- else:
- valid_length = model.valid_length(length)
- mix = tensor_chunk(mix)
- padded_mix = mix.padded(valid_length)
- with th.no_grad():
- out = model(padded_mix.unsqueeze(0))[0]
- return center_trim(out, length)
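-
-
-# Typical use (a sketch, assuming a loaded Demucs model and a mixture tensor
-# of shape [channels, length]):
-#
-#   sources = apply_model(model, mix, shifts=1, split=True, overlap=0.25)
-#   # -> tensor of shape [len(model.sources), channels, length]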
-
-
-@contextmanager
-def temp_filenames(count, delete=True):
- names = []
- try:
- for _ in range(count):
- names.append(tempfile.NamedTemporaryFile(delete=False).name)
- yield names
- finally:
- if delete:
- for name in names:
- os.unlink(name)
-
-
-def get_quantizer(model, args, optimizer=None):
- quantizer = None
- if args.diffq:
- quantizer = DiffQuantizer(
- model, min_size=args.q_min_size, group_size=8)
- if optimizer is not None:
- quantizer.setup_optimizer(optimizer)
- elif args.qat:
- quantizer = UniformQuantizer(
- model, bits=args.qat, min_size=args.q_min_size)
- return quantizer
-
-
-def load_model(path, strict=False):
- with warnings.catch_warnings():
- warnings.simplefilter("ignore")
- load_from = path
- package = th.load(load_from, 'cpu')
-
- klass = package["klass"]
- args = package["args"]
- kwargs = package["kwargs"]
-
- if strict:
- model = klass(*args, **kwargs)
- else:
- sig = inspect.signature(klass)
- for key in list(kwargs):
- if key not in sig.parameters:
-                warnings.warn("Dropping nonexistent parameter " + key)
- del kwargs[key]
- model = klass(*args, **kwargs)
-
- state = package["state"]
- training_args = package["training_args"]
- quantizer = get_quantizer(model, training_args)
-
- set_state(model, quantizer, state)
- return model
-
-
-def get_state(model, quantizer):
- if quantizer is None:
- state = {k: p.data.to('cpu') for k, p in model.state_dict().items()}
- else:
- state = quantizer.get_quantized_state()
- buf = io.BytesIO()
- th.save(state, buf)
- state = {'compressed': zlib.compress(buf.getvalue())}
- return state
-
-
-def set_state(model, quantizer, state):
- if quantizer is None:
- model.load_state_dict(state)
- else:
- buf = io.BytesIO(zlib.decompress(state["compressed"]))
- state = th.load(buf, "cpu")
- quantizer.restore_quantized_state(state)
-
- return state
-
-
-def save_state(state, path):
- buf = io.BytesIO()
- th.save(state, buf)
- sig = hashlib.sha256(buf.getvalue()).hexdigest()[:8]
-
- path = path.parent / (path.stem + "-" + sig + path.suffix)
- path.write_bytes(buf.getvalue())
-
-
-def save_model(model, quantizer, training_args, path):
- args, kwargs = model._init_args_kwargs
- klass = model.__class__
-
- state = get_state(model, quantizer)
-
- save_to = path
- package = {
- 'klass': klass,
- 'args': args,
- 'kwargs': kwargs,
- 'state': state,
- 'training_args': training_args,
- }
- th.save(package, save_to)
-
-
-def capture_init(init):
- @functools.wraps(init)
- def __init__(self, *args, **kwargs):
- self._init_args_kwargs = (args, kwargs)
- init(self, *args, **kwargs)
-
- return __init__
diff --git a/spaces/SmilingWolf/danbooru2022_image_similarity/Utils/dbimutils.py b/spaces/SmilingWolf/danbooru2022_image_similarity/Utils/dbimutils.py
deleted file mode 100644
index e01496710f8905e542dbe7e89c91fd2c8d1bc14a..0000000000000000000000000000000000000000
--- a/spaces/SmilingWolf/danbooru2022_image_similarity/Utils/dbimutils.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# DanBooru IMage Utility functions
-
-import cv2
-import numpy as np
-from PIL import Image
-
-
-def smart_imread(img, flag=cv2.IMREAD_UNCHANGED):
- if img.endswith(".gif"):
- img = Image.open(img)
- img = img.convert("RGB")
- img = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
- else:
- img = cv2.imread(img, flag)
- return img
-
-
-def smart_24bit(img):
- if img.dtype is np.dtype(np.uint16):
- img = (img / 257).astype(np.uint8)
-
- if len(img.shape) == 2:
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
- elif img.shape[2] == 4:
- trans_mask = img[:, :, 3] == 0
- img[trans_mask] = [255, 255, 255, 255]
- img = cv2.cvtColor(img, cv2.COLOR_BGRA2BGR)
- return img
-
-
-def make_square(img, target_size):
- old_size = img.shape[:2]
- desired_size = max(old_size)
- desired_size = max(desired_size, target_size)
-
- delta_w = desired_size - old_size[1]
- delta_h = desired_size - old_size[0]
- top, bottom = delta_h // 2, delta_h - (delta_h // 2)
- left, right = delta_w // 2, delta_w - (delta_w // 2)
-
- color = [255, 255, 255]
- new_im = cv2.copyMakeBorder(
- img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color
- )
- return new_im
-
-
-def smart_resize(img, size):
- # Assumes the image has already gone through make_square
- if img.shape[0] > size:
- img = cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA)
- elif img.shape[0] < size:
- img = cv2.resize(img, (size, size), interpolation=cv2.INTER_CUBIC)
- return img
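-
-
-# Typical preprocessing chain (a sketch; 448 is an illustrative target size):
-#
-#   img = smart_imread("example.jpg")
-#   img = smart_24bit(img)
-#   img = make_square(img, 448)
-#   img = smart_resize(img, 448)  # -> (448, 448, 3) uint8 BGR array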
diff --git a/spaces/Soybean01/White-box-Cartoonization/app.py b/spaces/Soybean01/White-box-Cartoonization/app.py
deleted file mode 100644
index c55ced56bd87a85f59d1c8ef84b7eca87422720f..0000000000000000000000000000000000000000
--- a/spaces/Soybean01/White-box-Cartoonization/app.py
+++ /dev/null
@@ -1,108 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-import argparse
-import functools
-import os
-import pathlib
-import sys
-from typing import Callable
-import uuid
-
-import gradio as gr
-import huggingface_hub
-import numpy as np
-import PIL.Image
-
-from io import BytesIO
-from wbc.cartoonize import Cartoonize
-
-ORIGINAL_REPO_URL = 'https://github.com/SystemErrorWang/White-box-Cartoonization'
-TITLE = 'SystemErrorWang/White-box-Cartoonization'
-DESCRIPTION = f"""This is a demo for {ORIGINAL_REPO_URL}.
-
-"""
-ARTICLE = """
-
-"""
-
-SAFEHASH = [x for x in "0123456789-abcdefghijklmnopqrstuvwxyz_ABCDEFGHIJKLMNOPQRSTUVWXYZ"]
-def compress_UUID():
-    '''
-    Per http://www.ietf.org/rfc/rfc1738.txt, re-encode a UUID over a larger
-    character set to produce a short id.
-    Character set: [0-9a-zA-Z\-_], 64 characters in total.
-    Length: (32 - 2) / 3 * 2 = 20 characters.
-    Note: with 2^120 possible values, collisions are effectively impossible.
-    :return: String
-    '''
- row = str(uuid.uuid4()).replace('-', '')
- safe_code = ''
- for i in range(10):
- enbin = "%012d" % int(bin(int(row[i * 3] + row[i * 3 + 1] + row[i * 3 + 2], 16))[2:], 10)
- safe_code += (SAFEHASH[int(enbin[0:6], 2)] + SAFEHASH[int(enbin[6:12], 2)])
- safe_code = safe_code.replace('-', '')
- return safe_code
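-
-# Example (a sketch; output is random): compress_UUID() -> 'aB3_xY9-kQ2mL5nP0rSt',
-# a 20-character filename-safe id, used below to name the output image.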
-
-
-def parse_args() -> argparse.Namespace:
- parser = argparse.ArgumentParser()
- parser.add_argument('--device', type=str, default='cpu')
- parser.add_argument('--theme', type=str)
- parser.add_argument('--live', action='store_true')
- parser.add_argument('--share', action='store_true')
- parser.add_argument('--port', type=int)
- parser.add_argument('--disable-queue',
- dest='enable_queue',
- action='store_false')
- parser.add_argument('--allow-flagging', type=str, default='never')
- parser.add_argument('--allow-screenshot', action='store_true')
- return parser.parse_args()
-
-def run(
- image,
- cartoonize : Cartoonize
-) -> tuple[PIL.Image.Image]:
-
- out_path = compress_UUID()+'.png'
- cartoonize.run_sigle(image.name, out_path)
-
- return PIL.Image.open(out_path)
-
-
-def main():
- gr.close_all()
-
- args = parse_args()
-
- cartoonize = Cartoonize(os.path.join(os.path.dirname(os.path.abspath(__file__)),'wbc/saved_models/'))
-
- func = functools.partial(run, cartoonize=cartoonize)
- func = functools.update_wrapper(func, run)
-
- gr.Interface(
- func,
- [
- gr.inputs.Image(type='file', label='Input Image'),
- ],
- [
- gr.outputs.Image(
- type='pil',
- label='Result'),
- ],
- # examples=examples,
- theme=args.theme,
- title=TITLE,
- description=DESCRIPTION,
- article=ARTICLE,
- allow_screenshot=args.allow_screenshot,
- allow_flagging=args.allow_flagging,
- live=args.live,
- ).launch(
- enable_queue=args.enable_queue,
- server_port=args.port,
- share=args.share,
- )
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/SriniJalasuthram/SJ-05-GR-NLP-Image2Text-Multilingual-OCR/README.md b/spaces/SriniJalasuthram/SJ-05-GR-NLP-Image2Text-Multilingual-OCR/README.md
deleted file mode 100644
index 9ce9b1e99de3f6b9c7d8eeb5e212a49049bbab32..0000000000000000000000000000000000000000
--- a/spaces/SriniJalasuthram/SJ-05-GR-NLP-Image2Text-Multilingual-OCR/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: SJ 05 GR NLP Image2Text Multilingual OCR
-emoji: 🦀
-colorFrom: yellow
-colorTo: green
-sdk: gradio
-sdk_version: 3.4
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Stanlito/openvino_QandA/README.md b/spaces/Stanlito/openvino_QandA/README.md
deleted file mode 100644
index a2bd912f1716e34ecbce7fd2fadd7c4aec110796..0000000000000000000000000000000000000000
--- a/spaces/Stanlito/openvino_QandA/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Openvino QandA
-emoji: 📈
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SuYuanS/AudioCraft_Plus/tests/models/test_encodec_model.py b/spaces/SuYuanS/AudioCraft_Plus/tests/models/test_encodec_model.py
deleted file mode 100644
index 2f9c1db3f69a45f02451b71da95f44356811acbb..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/tests/models/test_encodec_model.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import random
-
-import numpy as np
-import torch
-
-from audiocraft.models import EncodecModel
-from audiocraft.modules import SEANetEncoder, SEANetDecoder
-from audiocraft.quantization import DummyQuantizer
-
-
-class TestEncodecModel:
-
- def _create_encodec_model(self,
- sample_rate: int,
- channels: int,
- dim: int = 5,
- n_filters: int = 3,
- n_residual_layers: int = 1,
- ratios: list = [5, 4, 3, 2],
- **kwargs):
- frame_rate = np.prod(ratios)
- encoder = SEANetEncoder(channels=channels, dimension=dim, n_filters=n_filters,
- n_residual_layers=n_residual_layers, ratios=ratios)
- decoder = SEANetDecoder(channels=channels, dimension=dim, n_filters=n_filters,
- n_residual_layers=n_residual_layers, ratios=ratios)
- quantizer = DummyQuantizer()
- model = EncodecModel(encoder, decoder, quantizer, frame_rate=frame_rate,
- sample_rate=sample_rate, channels=channels, **kwargs)
- return model
-
- def test_model(self):
- random.seed(1234)
- sample_rate = 24_000
- channels = 1
- model = self._create_encodec_model(sample_rate, channels)
- for _ in range(10):
- length = random.randrange(1, 10_000)
- x = torch.randn(2, channels, length)
- res = model(x)
- assert res.x.shape == x.shape
-
- def test_model_renorm(self):
- random.seed(1234)
- sample_rate = 24_000
- channels = 1
- model_nonorm = self._create_encodec_model(sample_rate, channels, renormalize=False)
- model_renorm = self._create_encodec_model(sample_rate, channels, renormalize=True)
-
- for _ in range(10):
- length = random.randrange(1, 10_000)
- x = torch.randn(2, channels, length)
-            codes, scales = model_nonorm.encode(x)
-            assert scales is None
-            codes, scales = model_renorm.encode(x)
-            assert scales is not None
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/http_parser.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/http_parser.py
deleted file mode 100644
index 5a66ce4b9eec19777800ddc3c0f5e66b2270f9d3..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/http_parser.py
+++ /dev/null
@@ -1,969 +0,0 @@
-import abc
-import asyncio
-import collections
-import re
-import string
-import zlib
-from contextlib import suppress
-from enum import IntEnum
-from typing import (
- Any,
- Generic,
- List,
- NamedTuple,
- Optional,
- Pattern,
- Set,
- Tuple,
- Type,
- TypeVar,
- Union,
- cast,
-)
-
-from multidict import CIMultiDict, CIMultiDictProxy, istr
-from yarl import URL
-
-from . import hdrs
-from .base_protocol import BaseProtocol
-from .helpers import NO_EXTENSIONS, BaseTimerContext
-from .http_exceptions import (
- BadHttpMessage,
- BadStatusLine,
- ContentEncodingError,
- ContentLengthError,
- InvalidHeader,
- LineTooLong,
- TransferEncodingError,
-)
-from .http_writer import HttpVersion, HttpVersion10
-from .log import internal_logger
-from .streams import EMPTY_PAYLOAD, StreamReader
-from .typedefs import Final, RawHeaders
-
-try:
- import brotli
-
- HAS_BROTLI = True
-except ImportError: # pragma: no cover
- HAS_BROTLI = False
-
-
-__all__ = (
- "HeadersParser",
- "HttpParser",
- "HttpRequestParser",
- "HttpResponseParser",
- "RawRequestMessage",
- "RawResponseMessage",
-)
-
-ASCIISET: Final[Set[str]] = set(string.printable)
-
-# See https://tools.ietf.org/html/rfc7230#section-3.1.1
-# and https://tools.ietf.org/html/rfc7230#appendix-B
-#
-# method = token
-# tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." /
-# "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA
-# token = 1*tchar
-METHRE: Final[Pattern[str]] = re.compile(r"[!#$%&'*+\-.^_`|~0-9A-Za-z]+")
-VERSRE: Final[Pattern[str]] = re.compile(r"HTTP/(\d+).(\d+)")
-HDRRE: Final[Pattern[bytes]] = re.compile(rb"[\x00-\x1F\x7F()<>@,;:\[\]={} \t\\\\\"]")
-
-
-class RawRequestMessage(NamedTuple):
- method: str
- path: str
- version: HttpVersion
- headers: "CIMultiDictProxy[str]"
- raw_headers: RawHeaders
- should_close: bool
- compression: Optional[str]
- upgrade: bool
- chunked: bool
- url: URL
-
-
-RawResponseMessage = collections.namedtuple(
- "RawResponseMessage",
- [
- "version",
- "code",
- "reason",
- "headers",
- "raw_headers",
- "should_close",
- "compression",
- "upgrade",
- "chunked",
- ],
-)
-
-
-_MsgT = TypeVar("_MsgT", RawRequestMessage, RawResponseMessage)
-
-
-class ParseState(IntEnum):
-
- PARSE_NONE = 0
- PARSE_LENGTH = 1
- PARSE_CHUNKED = 2
- PARSE_UNTIL_EOF = 3
-
-
-class ChunkState(IntEnum):
- PARSE_CHUNKED_SIZE = 0
- PARSE_CHUNKED_CHUNK = 1
- PARSE_CHUNKED_CHUNK_EOF = 2
- PARSE_MAYBE_TRAILERS = 3
- PARSE_TRAILERS = 4
-
-
-class HeadersParser:
- def __init__(
- self,
- max_line_size: int = 8190,
- max_headers: int = 32768,
- max_field_size: int = 8190,
- ) -> None:
- self.max_line_size = max_line_size
- self.max_headers = max_headers
- self.max_field_size = max_field_size
-
- def parse_headers(
- self, lines: List[bytes]
- ) -> Tuple["CIMultiDictProxy[str]", RawHeaders]:
- headers: CIMultiDict[str] = CIMultiDict()
- raw_headers = []
-
- lines_idx = 1
- line = lines[1]
- line_count = len(lines)
-
- while line:
- # Parse initial header name : value pair.
- try:
- bname, bvalue = line.split(b":", 1)
- except ValueError:
- raise InvalidHeader(line) from None
-
- bname = bname.strip(b" \t")
- bvalue = bvalue.lstrip()
- if HDRRE.search(bname):
- raise InvalidHeader(bname)
- if len(bname) > self.max_field_size:
- raise LineTooLong(
- "request header name {}".format(
- bname.decode("utf8", "xmlcharrefreplace")
- ),
- str(self.max_field_size),
- str(len(bname)),
- )
-
- header_length = len(bvalue)
-
- # next line
- lines_idx += 1
- line = lines[lines_idx]
-
- # consume continuation lines
- continuation = line and line[0] in (32, 9) # (' ', '\t')
-
- if continuation:
- bvalue_lst = [bvalue]
- while continuation:
- header_length += len(line)
- if header_length > self.max_field_size:
- raise LineTooLong(
- "request header field {}".format(
- bname.decode("utf8", "xmlcharrefreplace")
- ),
- str(self.max_field_size),
- str(header_length),
- )
- bvalue_lst.append(line)
-
- # next line
- lines_idx += 1
- if lines_idx < line_count:
- line = lines[lines_idx]
- if line:
- continuation = line[0] in (32, 9) # (' ', '\t')
- else:
- line = b""
- break
- bvalue = b"".join(bvalue_lst)
- else:
- if header_length > self.max_field_size:
- raise LineTooLong(
- "request header field {}".format(
- bname.decode("utf8", "xmlcharrefreplace")
- ),
- str(self.max_field_size),
- str(header_length),
- )
-
- bvalue = bvalue.strip()
- name = bname.decode("utf-8", "surrogateescape")
- value = bvalue.decode("utf-8", "surrogateescape")
-
- headers.add(name, value)
- raw_headers.append((bname, bvalue))
-
- return (CIMultiDictProxy(headers), tuple(raw_headers))
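-
-# A minimal sketch of driving HeadersParser directly (lines[0] is the
-# request line and is skipped; the trailing b"" terminates the headers):
-#
-#   parser = HeadersParser()
-#   headers, raw = parser.parse_headers(
-#       [b"GET / HTTP/1.1", b"Host: example.com", b"Accept: */*", b""]
-#   )
-#   assert headers["Host"] == "example.com"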
-
-
-class HttpParser(abc.ABC, Generic[_MsgT]):
- def __init__(
- self,
- protocol: Optional[BaseProtocol] = None,
- loop: Optional[asyncio.AbstractEventLoop] = None,
- limit: int = 2**16,
- max_line_size: int = 8190,
- max_headers: int = 32768,
- max_field_size: int = 8190,
- timer: Optional[BaseTimerContext] = None,
- code: Optional[int] = None,
- method: Optional[str] = None,
- readall: bool = False,
- payload_exception: Optional[Type[BaseException]] = None,
- response_with_body: bool = True,
- read_until_eof: bool = False,
- auto_decompress: bool = True,
- ) -> None:
- self.protocol = protocol
- self.loop = loop
- self.max_line_size = max_line_size
- self.max_headers = max_headers
- self.max_field_size = max_field_size
- self.timer = timer
- self.code = code
- self.method = method
- self.readall = readall
- self.payload_exception = payload_exception
- self.response_with_body = response_with_body
- self.read_until_eof = read_until_eof
-
- self._lines: List[bytes] = []
- self._tail = b""
- self._upgraded = False
- self._payload = None
- self._payload_parser: Optional[HttpPayloadParser] = None
- self._auto_decompress = auto_decompress
- self._limit = limit
- self._headers_parser = HeadersParser(max_line_size, max_headers, max_field_size)
-
- @abc.abstractmethod
- def parse_message(self, lines: List[bytes]) -> _MsgT:
- pass
-
- def feed_eof(self) -> Optional[_MsgT]:
- if self._payload_parser is not None:
- self._payload_parser.feed_eof()
- self._payload_parser = None
- else:
- # try to extract partial message
- if self._tail:
- self._lines.append(self._tail)
-
- if self._lines:
-                if self._lines[-1] != b"\r\n":  # lines are bytes, not str
- self._lines.append(b"")
- with suppress(Exception):
- return self.parse_message(self._lines)
- return None
-
- def feed_data(
- self,
- data: bytes,
- SEP: bytes = b"\r\n",
- EMPTY: bytes = b"",
- CONTENT_LENGTH: istr = hdrs.CONTENT_LENGTH,
- METH_CONNECT: str = hdrs.METH_CONNECT,
- SEC_WEBSOCKET_KEY1: istr = hdrs.SEC_WEBSOCKET_KEY1,
- ) -> Tuple[List[Tuple[_MsgT, StreamReader]], bool, bytes]:
-
- messages = []
-
- if self._tail:
- data, self._tail = self._tail + data, b""
-
- data_len = len(data)
- start_pos = 0
- loop = self.loop
-
- while start_pos < data_len:
-
- # read HTTP message (request/response line + headers), \r\n\r\n
- # and split by lines
- if self._payload_parser is None and not self._upgraded:
- pos = data.find(SEP, start_pos)
- # consume \r\n
- if pos == start_pos and not self._lines:
- start_pos = pos + 2
- continue
-
- if pos >= start_pos:
- # line found
- self._lines.append(data[start_pos:pos])
- start_pos = pos + 2
-
- # \r\n\r\n found
- if self._lines[-1] == EMPTY:
- try:
- msg: _MsgT = self.parse_message(self._lines)
- finally:
- self._lines.clear()
-
- def get_content_length() -> Optional[int]:
- # payload length
- length_hdr = msg.headers.get(CONTENT_LENGTH)
- if length_hdr is None:
- return None
-
- try:
- length = int(length_hdr)
- except ValueError:
- raise InvalidHeader(CONTENT_LENGTH)
-
- if length < 0:
- raise InvalidHeader(CONTENT_LENGTH)
-
- return length
-
- length = get_content_length()
- # do not support old websocket spec
- if SEC_WEBSOCKET_KEY1 in msg.headers:
- raise InvalidHeader(SEC_WEBSOCKET_KEY1)
-
- self._upgraded = msg.upgrade
-
- method = getattr(msg, "method", self.method)
-
- assert self.protocol is not None
- # calculate payload
- if (
- (length is not None and length > 0)
- or msg.chunked
- and not msg.upgrade
- ):
- payload = StreamReader(
- self.protocol,
- timer=self.timer,
- loop=loop,
- limit=self._limit,
- )
- payload_parser = HttpPayloadParser(
- payload,
- length=length,
- chunked=msg.chunked,
- method=method,
- compression=msg.compression,
- code=self.code,
- readall=self.readall,
- response_with_body=self.response_with_body,
- auto_decompress=self._auto_decompress,
- )
- if not payload_parser.done:
- self._payload_parser = payload_parser
- elif method == METH_CONNECT:
- assert isinstance(msg, RawRequestMessage)
- payload = StreamReader(
- self.protocol,
- timer=self.timer,
- loop=loop,
- limit=self._limit,
- )
- self._upgraded = True
- self._payload_parser = HttpPayloadParser(
- payload,
- method=msg.method,
- compression=msg.compression,
- readall=True,
- auto_decompress=self._auto_decompress,
- )
- else:
- if (
- getattr(msg, "code", 100) >= 199
- and length is None
- and self.read_until_eof
- ):
- payload = StreamReader(
- self.protocol,
- timer=self.timer,
- loop=loop,
- limit=self._limit,
- )
- payload_parser = HttpPayloadParser(
- payload,
- length=length,
- chunked=msg.chunked,
- method=method,
- compression=msg.compression,
- code=self.code,
- readall=True,
- response_with_body=self.response_with_body,
- auto_decompress=self._auto_decompress,
- )
- if not payload_parser.done:
- self._payload_parser = payload_parser
- else:
- payload = EMPTY_PAYLOAD
-
- messages.append((msg, payload))
- else:
- self._tail = data[start_pos:]
- data = EMPTY
- break
-
- # no parser, just store
- elif self._payload_parser is None and self._upgraded:
- assert not self._lines
- break
-
- # feed payload
- elif data and start_pos < data_len:
- assert not self._lines
- assert self._payload_parser is not None
- try:
- eof, data = self._payload_parser.feed_data(data[start_pos:])
- except BaseException as exc:
- if self.payload_exception is not None:
- self._payload_parser.payload.set_exception(
- self.payload_exception(str(exc))
- )
- else:
- self._payload_parser.payload.set_exception(exc)
-
- eof = True
- data = b""
-
- if eof:
- start_pos = 0
- data_len = len(data)
- self._payload_parser = None
- continue
- else:
- break
-
- if data and start_pos < data_len:
- data = data[start_pos:]
- else:
- data = EMPTY
-
- return messages, self._upgraded, data
-
- def parse_headers(
- self, lines: List[bytes]
- ) -> Tuple[
- "CIMultiDictProxy[str]", RawHeaders, Optional[bool], Optional[str], bool, bool
- ]:
- """Parses RFC 5322 headers from a stream.
-
- Line continuations are supported. Returns list of header name
- and value pairs. Header name is in upper case.
- """
- headers, raw_headers = self._headers_parser.parse_headers(lines)
- close_conn = None
- encoding = None
- upgrade = False
- chunked = False
-
- # keep-alive
- conn = headers.get(hdrs.CONNECTION)
- if conn:
- v = conn.lower()
- if v == "close":
- close_conn = True
- elif v == "keep-alive":
- close_conn = False
- elif v == "upgrade":
- upgrade = True
-
- # encoding
- enc = headers.get(hdrs.CONTENT_ENCODING)
- if enc:
- enc = enc.lower()
- if enc in ("gzip", "deflate", "br"):
- encoding = enc
-
- # chunking
- te = headers.get(hdrs.TRANSFER_ENCODING)
- if te is not None:
- if "chunked" == te.lower():
- chunked = True
- else:
- raise BadHttpMessage("Request has invalid `Transfer-Encoding`")
-
- if hdrs.CONTENT_LENGTH in headers:
- raise BadHttpMessage(
- "Content-Length can't be present with Transfer-Encoding",
- )
-
- return (headers, raw_headers, close_conn, encoding, upgrade, chunked)
-
- def set_upgraded(self, val: bool) -> None:
- """Set connection upgraded (to websocket) mode.
-
- :param bool val: new state.
- """
- self._upgraded = val
-
-
-class HttpRequestParser(HttpParser[RawRequestMessage]):
- """Read request status line.
-
- Exception .http_exceptions.BadStatusLine
- could be raised in case of any errors in status line.
- Returns RawRequestMessage.
- """
-
- def parse_message(self, lines: List[bytes]) -> RawRequestMessage:
- # request line
- line = lines[0].decode("utf-8", "surrogateescape")
- try:
- method, path, version = line.split(None, 2)
- except ValueError:
- raise BadStatusLine(line) from None
-
- if len(path) > self.max_line_size:
- raise LineTooLong(
- "Status line is too long", str(self.max_line_size), str(len(path))
- )
-
- # method
- if not METHRE.match(method):
- raise BadStatusLine(method)
-
- # version
- try:
- if version.startswith("HTTP/"):
- n1, n2 = version[5:].split(".", 1)
- version_o = HttpVersion(int(n1), int(n2))
- else:
- raise BadStatusLine(version)
- except Exception:
- raise BadStatusLine(version)
-
- if method == "CONNECT":
- # authority-form,
- # https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.3
- url = URL.build(authority=path, encoded=True)
- elif path.startswith("/"):
- # origin-form,
- # https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.1
- path_part, _hash_separator, url_fragment = path.partition("#")
- path_part, _question_mark_separator, qs_part = path_part.partition("?")
-
- # NOTE: `yarl.URL.build()` is used to mimic what the Cython-based
- # NOTE: parser does, otherwise it results into the same
- # NOTE: HTTP Request-Line input producing different
- # NOTE: `yarl.URL()` objects
- url = URL.build(
- path=path_part,
- query_string=qs_part,
- fragment=url_fragment,
- encoded=True,
- )
- else:
- # absolute-form for proxy maybe,
- # https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.2
- url = URL(path, encoded=True)
-
- # read headers
- (
- headers,
- raw_headers,
- close,
- compression,
- upgrade,
- chunked,
- ) = self.parse_headers(lines)
-
-        if close is None:  # then the headers weren't set in the request
-            if version_o <= HttpVersion10:
-                # HTTP/1.0 closes by default; keep-alive must be requested.
-                close = True
-            else:
-                # HTTP/1.1 stays open by default; "close" must be requested.
-                close = False
-
- return RawRequestMessage(
- method,
- path,
- version_o,
- headers,
- raw_headers,
- close,
- compression,
- upgrade,
- chunked,
- url,
- )
-
-
-class HttpResponseParser(HttpParser[RawResponseMessage]):
- """Read response status line and headers.
-
- BadStatusLine could be raised in case of any errors in status line.
- Returns RawResponseMessage.
- """
-
- def parse_message(self, lines: List[bytes]) -> RawResponseMessage:
- line = lines[0].decode("utf-8", "surrogateescape")
- try:
- version, status = line.split(None, 1)
- except ValueError:
- raise BadStatusLine(line) from None
-
- try:
- status, reason = status.split(None, 1)
- except ValueError:
- reason = ""
-
- if len(reason) > self.max_line_size:
- raise LineTooLong(
- "Status line is too long", str(self.max_line_size), str(len(reason))
- )
-
- # version
- match = VERSRE.match(version)
- if match is None:
- raise BadStatusLine(line)
- version_o = HttpVersion(int(match.group(1)), int(match.group(2)))
-
- # The status code is a three-digit number
- try:
- status_i = int(status)
- except ValueError:
- raise BadStatusLine(line) from None
-
- if status_i > 999:
- raise BadStatusLine(line)
-
- # read headers
- (
- headers,
- raw_headers,
- close,
- compression,
- upgrade,
- chunked,
- ) = self.parse_headers(lines)
-
- if close is None:
- close = version_o <= HttpVersion10
-
- return RawResponseMessage(
- version_o,
- status_i,
- reason.strip(),
- headers,
- raw_headers,
- close,
- compression,
- upgrade,
- chunked,
- )
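-
-# A minimal sketch of parse_message on a pre-split status/header block
-# (normally feed_data drives this; no protocol or loop is needed here):
-#
-#   parser = HttpResponseParser()
-#   msg = parser.parse_message([b"HTTP/1.1 200 OK", b"Content-Type: text/plain", b""])
-#   assert msg.code == 200 and msg.version == HttpVersion(1, 1)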
-
-
-class HttpPayloadParser:
- def __init__(
- self,
- payload: StreamReader,
- length: Optional[int] = None,
- chunked: bool = False,
- compression: Optional[str] = None,
- code: Optional[int] = None,
- method: Optional[str] = None,
- readall: bool = False,
- response_with_body: bool = True,
- auto_decompress: bool = True,
- ) -> None:
- self._length = 0
- self._type = ParseState.PARSE_NONE
- self._chunk = ChunkState.PARSE_CHUNKED_SIZE
- self._chunk_size = 0
- self._chunk_tail = b""
- self._auto_decompress = auto_decompress
- self.done = False
-
- # payload decompression wrapper
- if response_with_body and compression and self._auto_decompress:
- real_payload: Union[StreamReader, DeflateBuffer] = DeflateBuffer(
- payload, compression
- )
- else:
- real_payload = payload
-
- # payload parser
- if not response_with_body:
- # don't parse payload if it's not expected to be received
- self._type = ParseState.PARSE_NONE
- real_payload.feed_eof()
- self.done = True
-
- elif chunked:
- self._type = ParseState.PARSE_CHUNKED
- elif length is not None:
- self._type = ParseState.PARSE_LENGTH
- self._length = length
- if self._length == 0:
- real_payload.feed_eof()
- self.done = True
- else:
- if readall and code != 204:
- self._type = ParseState.PARSE_UNTIL_EOF
- elif method in ("PUT", "POST"):
- internal_logger.warning( # pragma: no cover
- "Content-Length or Transfer-Encoding header is required"
- )
- self._type = ParseState.PARSE_NONE
- real_payload.feed_eof()
- self.done = True
-
- self.payload = real_payload
-
- def feed_eof(self) -> None:
- if self._type == ParseState.PARSE_UNTIL_EOF:
- self.payload.feed_eof()
- elif self._type == ParseState.PARSE_LENGTH:
-            raise ContentLengthError(
-                "Not enough data to satisfy the Content-Length header."
-            )
-        elif self._type == ParseState.PARSE_CHUNKED:
-            raise TransferEncodingError(
-                "Not enough data to satisfy the Transfer-Encoding header."
-            )
-
- def feed_data(
- self, chunk: bytes, SEP: bytes = b"\r\n", CHUNK_EXT: bytes = b";"
- ) -> Tuple[bool, bytes]:
- # Read specified amount of bytes
- if self._type == ParseState.PARSE_LENGTH:
- required = self._length
- chunk_len = len(chunk)
-
- if required >= chunk_len:
- self._length = required - chunk_len
- self.payload.feed_data(chunk, chunk_len)
- if self._length == 0:
- self.payload.feed_eof()
- return True, b""
- else:
- self._length = 0
- self.payload.feed_data(chunk[:required], required)
- self.payload.feed_eof()
- return True, chunk[required:]
-
- # Chunked transfer encoding parser
- elif self._type == ParseState.PARSE_CHUNKED:
- if self._chunk_tail:
- chunk = self._chunk_tail + chunk
- self._chunk_tail = b""
-
- while chunk:
-
- # read next chunk size
- if self._chunk == ChunkState.PARSE_CHUNKED_SIZE:
- pos = chunk.find(SEP)
- if pos >= 0:
- i = chunk.find(CHUNK_EXT, 0, pos)
- if i >= 0:
- size_b = chunk[:i] # strip chunk-extensions
- else:
- size_b = chunk[:pos]
-
- try:
- size = int(bytes(size_b), 16)
- except ValueError:
- exc = TransferEncodingError(
- chunk[:pos].decode("ascii", "surrogateescape")
- )
- self.payload.set_exception(exc)
- raise exc from None
-
- chunk = chunk[pos + 2 :]
- if size == 0: # eof marker
- self._chunk = ChunkState.PARSE_MAYBE_TRAILERS
- else:
- self._chunk = ChunkState.PARSE_CHUNKED_CHUNK
- self._chunk_size = size
- self.payload.begin_http_chunk_receiving()
- else:
- self._chunk_tail = chunk
- return False, b""
-
- # read chunk and feed buffer
- if self._chunk == ChunkState.PARSE_CHUNKED_CHUNK:
- required = self._chunk_size
- chunk_len = len(chunk)
-
- if required > chunk_len:
- self._chunk_size = required - chunk_len
- self.payload.feed_data(chunk, chunk_len)
- return False, b""
- else:
- self._chunk_size = 0
- self.payload.feed_data(chunk[:required], required)
- chunk = chunk[required:]
- self._chunk = ChunkState.PARSE_CHUNKED_CHUNK_EOF
- self.payload.end_http_chunk_receiving()
-
- # toss the CRLF at the end of the chunk
- if self._chunk == ChunkState.PARSE_CHUNKED_CHUNK_EOF:
- if chunk[:2] == SEP:
- chunk = chunk[2:]
- self._chunk = ChunkState.PARSE_CHUNKED_SIZE
- else:
- self._chunk_tail = chunk
- return False, b""
-
-                # If the stream does not contain trailers, after 0\r\n
-                # we should get another \r\n; otherwise the trailers
-                # need to be skipped until \r\n\r\n.
- if self._chunk == ChunkState.PARSE_MAYBE_TRAILERS:
- head = chunk[:2]
- if head == SEP:
- # end of stream
- self.payload.feed_eof()
- return True, chunk[2:]
- # Both CR and LF, or only LF may not be received yet. It is
- # expected that CRLF or LF will be shown at the very first
- # byte next time, otherwise trailers should come. The last
- # CRLF which marks the end of response might not be
- # contained in the same TCP segment which delivered the
- # size indicator.
- if not head:
- return False, b""
- if head == SEP[:1]:
- self._chunk_tail = head
- return False, b""
- self._chunk = ChunkState.PARSE_TRAILERS
-
- # read and discard trailer up to the CRLF terminator
- if self._chunk == ChunkState.PARSE_TRAILERS:
- pos = chunk.find(SEP)
- if pos >= 0:
- chunk = chunk[pos + 2 :]
- self._chunk = ChunkState.PARSE_MAYBE_TRAILERS
- else:
- self._chunk_tail = chunk
- return False, b""
-
- # Read all bytes until eof
- elif self._type == ParseState.PARSE_UNTIL_EOF:
- self.payload.feed_data(chunk, len(chunk))
-
- return False, b""
-
-
-class DeflateBuffer:
- """DeflateStream decompress stream and feed data into specified stream."""
-
- decompressor: Any
-
- def __init__(self, out: StreamReader, encoding: Optional[str]) -> None:
- self.out = out
- self.size = 0
- self.encoding = encoding
- self._started_decoding = False
-
- if encoding == "br":
- if not HAS_BROTLI: # pragma: no cover
- raise ContentEncodingError(
- "Can not decode content-encoding: brotli (br). "
- "Please install `Brotli`"
- )
-
- class BrotliDecoder:
- # Supports both 'brotlipy' and 'Brotli' packages
- # since they share an import name. The top branches
- # are for 'brotlipy' and bottom branches for 'Brotli'
- def __init__(self) -> None:
- self._obj = brotli.Decompressor()
-
- def decompress(self, data: bytes) -> bytes:
- if hasattr(self._obj, "decompress"):
- return cast(bytes, self._obj.decompress(data))
- return cast(bytes, self._obj.process(data))
-
- def flush(self) -> bytes:
- if hasattr(self._obj, "flush"):
- return cast(bytes, self._obj.flush())
- return b""
-
- self.decompressor = BrotliDecoder()
- else:
- zlib_mode = 16 + zlib.MAX_WBITS if encoding == "gzip" else zlib.MAX_WBITS
- self.decompressor = zlib.decompressobj(wbits=zlib_mode)
-
- def set_exception(self, exc: BaseException) -> None:
- self.out.set_exception(exc)
-
- def feed_data(self, chunk: bytes, size: int) -> None:
- if not size:
- return
-
- self.size += size
-
- # RFC1950
- # bits 0..3 = CM = 0b1000 = 8 = "deflate"
- # bits 4..7 = CINFO = 1..7 = windows size.
- if (
- not self._started_decoding
- and self.encoding == "deflate"
- and chunk[0] & 0xF != 8
- ):
- # Change the decoder to decompress incorrectly compressed data
- # Actually we should issue a warning about non-RFC-compliant data.
- self.decompressor = zlib.decompressobj(wbits=-zlib.MAX_WBITS)
-
- try:
- chunk = self.decompressor.decompress(chunk)
- except Exception:
- raise ContentEncodingError(
- "Can not decode content-encoding: %s" % self.encoding
- )
-
- self._started_decoding = True
-
- if chunk:
- self.out.feed_data(chunk, len(chunk))
-
- def feed_eof(self) -> None:
- chunk = self.decompressor.flush()
-
- if chunk or self.size > 0:
- self.out.feed_data(chunk, len(chunk))
- if self.encoding == "deflate" and not self.decompressor.eof:
- raise ContentEncodingError("deflate")
-
- self.out.feed_eof()
-
- def begin_http_chunk_receiving(self) -> None:
- self.out.begin_http_chunk_receiving()
-
- def end_http_chunk_receiving(self) -> None:
- self.out.end_http_chunk_receiving()
-
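-# The wbits convention used above, in isolation (a runnable sketch):
-#   16 + zlib.MAX_WBITS decodes gzip, zlib.MAX_WBITS decodes zlib/deflate
-#   with the RFC 1950 header, and -zlib.MAX_WBITS decodes a raw stream.
-#
-#   gz = zlib.compressobj(wbits=16 + zlib.MAX_WBITS)
-#   blob = gz.compress(b"hello") + gz.flush()
-#   assert zlib.decompressobj(wbits=16 + zlib.MAX_WBITS).decompress(blob) == b"hello"
-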
-
-HttpRequestParserPy = HttpRequestParser
-HttpResponseParserPy = HttpResponseParser
-RawRequestMessagePy = RawRequestMessage
-RawResponseMessagePy = RawResponseMessage
-
-try:
- if not NO_EXTENSIONS:
- from ._http_parser import ( # type: ignore[import,no-redef]
- HttpRequestParser,
- HttpResponseParser,
- RawRequestMessage,
- RawResponseMessage,
- )
-
- HttpRequestParserC = HttpRequestParser
- HttpResponseParserC = HttpResponseParser
- RawRequestMessageC = RawRequestMessage
- RawResponseMessageC = RawResponseMessage
-except ImportError: # pragma: no cover
- pass
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/api.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/api.py
deleted file mode 100644
index 6602986fe9c617eb5f4e375c94985260a2773aaa..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/api.py
+++ /dev/null
@@ -1,2 +0,0 @@
-# ruff: noqa
-from .v5.api import *
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/entry_points.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/entry_points.py
deleted file mode 100644
index 8d895ef07d5727dc8a415a398c62ca3ff80e74e6..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/entry_points.py
+++ /dev/null
@@ -1,39 +0,0 @@
-#!/usr/bin/env python3
-
-import sys
-import pkg_resources
-
-EXPECTED_EPS = {'sqlalchemy.dialects:clickhousedb',
- 'sqlalchemy.dialects:clickhousedb.connect'}
-
-
-def validate_entrypoints():
-    expected_eps = EXPECTED_EPS.copy()
-    try:
-        dist = pkg_resources.get_distribution('clickhouse-connect')
-    except pkg_resources.DistributionNotFound:
-        print('\nClickHouse Connect package not found in this Python installation')
-        return -1
-    entry_map = dist.get_entry_map()
-    print()
-    for ep_group, entry_points in entry_map.items():
-        print(ep_group)
-        for entry_point in entry_points.values():
-            print(f' {entry_point.name}={entry_point.module_name}.{", ".join(entry_point.attrs)}')
-            name = f'{ep_group}:{entry_point.name}'
-            try:
-                expected_eps.remove(name)
-            except KeyError:
-                print(f'\nUnexpected entry point {name} found')
-                return -1
-    if expected_eps:
-        print()
-        for name in expected_eps:
-            print(f'Did not find expected ep {name}')
-        return -1
-    print('\nEntrypoints correctly installed')
-    return 0
-
-
-if __name__ == '__main__':
- sys.exit(validate_entrypoints())
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/completion.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/completion.py
deleted file mode 100644
index 30233fc7ad2c07c42e7c2d384312f1f4373155f6..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/completion.py
+++ /dev/null
@@ -1,121 +0,0 @@
-import sys
-import textwrap
-from optparse import Values
-from typing import List
-
-from pip._internal.cli.base_command import Command
-from pip._internal.cli.status_codes import SUCCESS
-from pip._internal.utils.misc import get_prog
-
-BASE_COMPLETION = """
-# pip {shell} completion start{script}# pip {shell} completion end
-"""
-
-COMPLETION_SCRIPTS = {
- "bash": """
- _pip_completion()
- {{
- COMPREPLY=( $( COMP_WORDS="${{COMP_WORDS[*]}}" \\
- COMP_CWORD=$COMP_CWORD \\
- PIP_AUTO_COMPLETE=1 $1 2>/dev/null ) )
- }}
- complete -o default -F _pip_completion {prog}
- """,
- "zsh": """
- #compdef -P pip[0-9.]#
- compadd $( COMP_WORDS="$words[*]" \\
- COMP_CWORD=$((CURRENT-1)) \\
- PIP_AUTO_COMPLETE=1 $words[1] 2>/dev/null )
- """,
- "fish": """
- function __fish_complete_pip
- set -lx COMP_WORDS (commandline -o) ""
- set -lx COMP_CWORD ( \\
- math (contains -i -- (commandline -t) $COMP_WORDS)-1 \\
- )
- set -lx PIP_AUTO_COMPLETE 1
- string split \\ -- (eval $COMP_WORDS[1])
- end
- complete -fa "(__fish_complete_pip)" -c {prog}
- """,
- "powershell": """
- if ((Test-Path Function:\\TabExpansion) -and -not `
- (Test-Path Function:\\_pip_completeBackup)) {{
- Rename-Item Function:\\TabExpansion _pip_completeBackup
- }}
- function TabExpansion($line, $lastWord) {{
- $lastBlock = [regex]::Split($line, '[|;]')[-1].TrimStart()
- if ($lastBlock.StartsWith("{prog} ")) {{
- $Env:COMP_WORDS=$lastBlock
- $Env:COMP_CWORD=$lastBlock.Split().Length - 1
- $Env:PIP_AUTO_COMPLETE=1
- (& {prog}).Split()
- Remove-Item Env:COMP_WORDS
- Remove-Item Env:COMP_CWORD
- Remove-Item Env:PIP_AUTO_COMPLETE
- }}
- elseif (Test-Path Function:\\_pip_completeBackup) {{
- # Fall back on existing tab expansion
- _pip_completeBackup $line $lastWord
- }}
- }}
- """,
-}
-
-
-class CompletionCommand(Command):
- """A helper command to be used for command completion."""
-
- ignore_require_venv = True
-
- def add_options(self) -> None:
- self.cmd_opts.add_option(
- "--bash",
- "-b",
- action="store_const",
- const="bash",
- dest="shell",
- help="Emit completion code for bash",
- )
- self.cmd_opts.add_option(
- "--zsh",
- "-z",
- action="store_const",
- const="zsh",
- dest="shell",
- help="Emit completion code for zsh",
- )
- self.cmd_opts.add_option(
- "--fish",
- "-f",
- action="store_const",
- const="fish",
- dest="shell",
- help="Emit completion code for fish",
- )
- self.cmd_opts.add_option(
- "--powershell",
- "-p",
- action="store_const",
- const="powershell",
- dest="shell",
- help="Emit completion code for powershell",
- )
-
- self.parser.insert_option_group(0, self.cmd_opts)
-
- def run(self, options: Values, args: List[str]) -> int:
- """Prints the completion code of the given shell"""
- shells = COMPLETION_SCRIPTS.keys()
- shell_options = ["--" + shell for shell in sorted(shells)]
- if options.shell in shells:
- script = textwrap.dedent(
- COMPLETION_SCRIPTS.get(options.shell, "").format(prog=get_prog())
- )
- print(BASE_COMPLETION.format(script=script, shell=options.shell))
- return SUCCESS
- else:
- sys.stderr.write(
- "ERROR: You must pass {}\n".format(" or ".join(shell_options))
- )
- return SUCCESS
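-
-# Typical usage (append the emitted script to your shell init file), e.g.:
-#   python -m pip completion --bash >> ~/.bashrc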
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py
deleted file mode 100644
index 2cba4b0708032d62b4c1278f99e5db87ed8d90fe..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# SPDX-FileCopyrightText: 2015 Eric Larson
-#
-# SPDX-License-Identifier: Apache-2.0
-
-from __future__ import division
-
-from datetime import datetime
-from pip._vendor.cachecontrol.cache import BaseCache
-
-
-class RedisCache(BaseCache):
-
- def __init__(self, conn):
- self.conn = conn
-
- def get(self, key):
- return self.conn.get(key)
-
- def set(self, key, value, expires=None):
- if not expires:
- self.conn.set(key, value)
- elif isinstance(expires, datetime):
- expires = expires - datetime.utcnow()
- self.conn.setex(key, int(expires.total_seconds()), value)
- else:
- self.conn.setex(key, expires, value)
-
- def delete(self, key):
- self.conn.delete(key)
-
- def clear(self):
- """Helper for clearing all the keys in a database. Use with
- caution!"""
- for key in self.conn.keys():
- self.conn.delete(key)
-
- def close(self):
- """Redis uses connection pooling, no need to close the connection."""
- pass
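-
-# A sketch of plugging this cache into CacheControl (assumes a local Redis
-# server and the external `redis` package; not used by pip itself):
-#
-#   import redis
-#   from pip._vendor import requests
-#   from pip._vendor.cachecontrol import CacheControl
-#
-#   sess = CacheControl(requests.Session(), cache=RedisCache(redis.Redis()))
-#   sess.get("https://example.com")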
diff --git a/spaces/TechnoByte/soft-improved/README.md b/spaces/TechnoByte/soft-improved/README.md
deleted file mode 100644
index 20a19f3e5e110a7a352136073a0b6bbfc43c9cca..0000000000000000000000000000000000000000
--- a/spaces/TechnoByte/soft-improved/README.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-tags:
-- gradio-theme
-title: 'Gradio Theme: Soft Improved'
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.42.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-emoji: 👁
----
-# soft
-## Description
-Add a description of this theme here!
-## Contributions
-Thanks to [@aliabid94](https://huggingface.co/aliabid94) for adding this gradio theme!
\ No newline at end of file
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/collect_env.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/collect_env.py
deleted file mode 100644
index 807b6c7e6245d0a21221b1b8d29b841ec8251761..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/collect_env.py
+++ /dev/null
@@ -1,242 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import importlib
-import numpy as np
-import os
-import re
-import subprocess
-import sys
-from collections import defaultdict
-import PIL
-import torch
-import torchvision
-from tabulate import tabulate
-
-__all__ = ["collect_env_info"]
-
-
-def collect_torch_env():
- try:
- import torch.__config__
-
- return torch.__config__.show()
- except ImportError:
- # compatible with older versions of pytorch
- from torch.utils.collect_env import get_pretty_env_info
-
- return get_pretty_env_info()
-
-
-def get_env_module():
- var_name = "DETECTRON2_ENV_MODULE"
- return var_name, os.environ.get(var_name, "")
-
-
-def detect_compute_compatibility(CUDA_HOME, so_file):
- try:
- cuobjdump = os.path.join(CUDA_HOME, "bin", "cuobjdump")
- if os.path.isfile(cuobjdump):
- output = subprocess.check_output(
- "'{}' --list-elf '{}'".format(cuobjdump, so_file), shell=True
- )
- output = output.decode("utf-8").strip().split("\n")
- arch = []
- for line in output:
- line = re.findall(r"\.sm_([0-9]*)\.", line)[0]
- arch.append(".".join(line))
- arch = sorted(set(arch))
- return ", ".join(arch)
- else:
- return so_file + "; cannot find cuobjdump"
- except Exception:
- # unhandled failure
- return so_file
-
-
-def collect_env_info():
- has_gpu = torch.cuda.is_available() # true for both CUDA & ROCM
- torch_version = torch.__version__
-
- # NOTE that CUDA_HOME/ROCM_HOME could be None even when CUDA runtime libs are functional
- from torch.utils.cpp_extension import CUDA_HOME, ROCM_HOME
-
- has_rocm = False
- if (getattr(torch.version, "hip", None) is not None) and (ROCM_HOME is not None):
- has_rocm = True
- has_cuda = has_gpu and (not has_rocm)
-
- data = []
- data.append(("sys.platform", sys.platform)) # check-template.yml depends on it
- data.append(("Python", sys.version.replace("\n", "")))
- data.append(("numpy", np.__version__))
-
- try:
- import detectron2 # noqa
-
- data.append(
- ("detectron2", detectron2.__version__ + " @" + os.path.dirname(detectron2.__file__))
- )
- except ImportError:
- data.append(("detectron2", "failed to import"))
- except AttributeError:
- data.append(("detectron2", "imported a wrong installation"))
-
- try:
- import detectron2._C as _C
- except ImportError as e:
- data.append(("detectron2._C", f"not built correctly: {e}"))
-
- # print system compilers when extension fails to build
- if sys.platform != "win32": # don't know what to do for windows
- try:
- # this is how torch/utils/cpp_extensions.py choose compiler
- cxx = os.environ.get("CXX", "c++")
- cxx = subprocess.check_output("'{}' --version".format(cxx), shell=True)
- cxx = cxx.decode("utf-8").strip().split("\n")[0]
- except subprocess.SubprocessError:
- cxx = "Not found"
- data.append(("Compiler ($CXX)", cxx))
-
- if has_cuda and CUDA_HOME is not None:
- try:
- nvcc = os.path.join(CUDA_HOME, "bin", "nvcc")
- nvcc = subprocess.check_output("'{}' -V".format(nvcc), shell=True)
- nvcc = nvcc.decode("utf-8").strip().split("\n")[-1]
- except subprocess.SubprocessError:
- nvcc = "Not found"
- data.append(("CUDA compiler", nvcc))
- if has_cuda and sys.platform != "win32":
- try:
- so_file = importlib.util.find_spec("detectron2._C").origin
- except (ImportError, AttributeError):
- pass
- else:
- data.append(
- ("detectron2 arch flags", detect_compute_compatibility(CUDA_HOME, so_file))
- )
- else:
- # print compilers that are used to build extension
- data.append(("Compiler", _C.get_compiler_version()))
- data.append(("CUDA compiler", _C.get_cuda_version())) # cuda or hip
- if has_cuda and getattr(_C, "has_cuda", lambda: True)():
- data.append(
- ("detectron2 arch flags", detect_compute_compatibility(CUDA_HOME, _C.__file__))
- )
-
- data.append(get_env_module())
- data.append(("PyTorch", torch_version + " @" + os.path.dirname(torch.__file__)))
- data.append(("PyTorch debug build", torch.version.debug))
-
- if not has_gpu:
- has_gpu_text = "No: torch.cuda.is_available() == False"
- else:
- has_gpu_text = "Yes"
- data.append(("GPU available", has_gpu_text))
- if has_gpu:
- devices = defaultdict(list)
- for k in range(torch.cuda.device_count()):
- cap = ".".join((str(x) for x in torch.cuda.get_device_capability(k)))
- name = torch.cuda.get_device_name(k) + f" (arch={cap})"
- devices[name].append(str(k))
- for name, devids in devices.items():
- data.append(("GPU " + ",".join(devids), name))
-
- if has_rocm:
- msg = " - invalid!" if not (ROCM_HOME and os.path.isdir(ROCM_HOME)) else ""
- data.append(("ROCM_HOME", str(ROCM_HOME) + msg))
- else:
- try:
- from torch.utils.collect_env import get_nvidia_driver_version, run as _run
-
- data.append(("Driver version", get_nvidia_driver_version(_run)))
- except Exception:
- pass
- msg = " - invalid!" if not (CUDA_HOME and os.path.isdir(CUDA_HOME)) else ""
- data.append(("CUDA_HOME", str(CUDA_HOME) + msg))
-
- cuda_arch_list = os.environ.get("TORCH_CUDA_ARCH_LIST", None)
- if cuda_arch_list:
- data.append(("TORCH_CUDA_ARCH_LIST", cuda_arch_list))
- data.append(("Pillow", PIL.__version__))
-
- try:
- data.append(
- (
- "torchvision",
- str(torchvision.__version__) + " @" + os.path.dirname(torchvision.__file__),
- )
- )
- if has_cuda:
- try:
- torchvision_C = importlib.util.find_spec("torchvision._C").origin
- msg = detect_compute_compatibility(CUDA_HOME, torchvision_C)
- data.append(("torchvision arch flags", msg))
- except (ImportError, AttributeError):
- data.append(("torchvision._C", "Not found"))
- except AttributeError:
- data.append(("torchvision", "unknown"))
-
- try:
- import fvcore
-
- data.append(("fvcore", fvcore.__version__))
- except (ImportError, AttributeError):
- pass
-
- try:
- import iopath
-
- data.append(("iopath", iopath.__version__))
- except (ImportError, AttributeError):
- pass
-
- try:
- import cv2
-
- data.append(("cv2", cv2.__version__))
- except (ImportError, AttributeError):
- data.append(("cv2", "Not found"))
- env_str = tabulate(data) + "\n"
- env_str += collect_torch_env()
- return env_str
-
-
-def test_nccl_ops():
- num_gpu = torch.cuda.device_count()
- if os.access("/tmp", os.W_OK):
- import torch.multiprocessing as mp
-
- dist_url = "file:///tmp/nccl_tmp_file"
- print("Testing NCCL connectivity ... this should not hang.")
- mp.spawn(_test_nccl_worker, nprocs=num_gpu, args=(num_gpu, dist_url), daemon=False)
- print("NCCL succeeded.")
-
-
-def _test_nccl_worker(rank, num_gpu, dist_url):
- import torch.distributed as dist
-
- dist.init_process_group(backend="NCCL", init_method=dist_url, rank=rank, world_size=num_gpu)
- dist.barrier(device_ids=[rank])
-
-
-if __name__ == "__main__":
- try:
- from detectron2.utils.collect_env import collect_env_info as f
-
- print(f())
- except ImportError:
- print(collect_env_info())
-
- if torch.cuda.is_available():
- num_gpu = torch.cuda.device_count()
- for k in range(num_gpu):
- device = f"cuda:{k}"
- try:
- x = torch.tensor([1, 2.0], dtype=torch.float32)
- x = x.to(device)
- except Exception as e:
- print(
- f"Unable to copy tensor to device={device}: {e}. "
- "Your CUDA environment is broken."
- )
- if num_gpu > 1:
- test_nccl_ops()
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/dev/packaging/gen_install_table.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/dev/packaging/gen_install_table.py
deleted file mode 100644
index b4c852dc53de613707b9668f748184c2b63b9dea..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/dev/packaging/gen_install_table.py
+++ /dev/null
@@ -1,63 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates.
-# -*- coding: utf-8 -*-
-
-import argparse
-
-template = """ install \
-python -m pip install detectron2{d2_version} -f \\
- https://dl.fbaipublicfiles.com/detectron2/wheels/{cuda}/torch{torch}/index.html
-
"""
-CUDA_SUFFIX = {
- "11.3": "cu113",
- "11.1": "cu111",
- "11.0": "cu110",
- "10.2": "cu102",
- "10.1": "cu101",
- "10.0": "cu100",
- "9.2": "cu92",
- "cpu": "cpu",
-}
-
-
-def gen_header(torch_versions):
-    return '<tr><td align="left">CUDA </td>' + "".join(
-        [
-            '<td align="left">torch {}</td>'.format(t)
-            for t in torch_versions
-        ]
-    )
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--d2-version", help="detectron2 version number, default to empty")
- args = parser.parse_args()
- d2_version = f"=={args.d2_version}" if args.d2_version else ""
-
- all_versions = (
- [("1.8", k) for k in ["11.1", "10.2", "10.1", "cpu"]]
- + [("1.9", k) for k in ["11.1", "10.2", "cpu"]]
- + [("1.10", k) for k in ["11.3", "11.1", "10.2", "cpu"]]
- )
-
- torch_versions = sorted(
- {k[0] for k in all_versions}, key=lambda x: int(x.split(".")[1]), reverse=True
- )
- cuda_versions = sorted(
- {k[1] for k in all_versions}, key=lambda x: float(x) if x != "cpu" else 0, reverse=True
- )
-
- table = gen_header(torch_versions)
- for cu in cuda_versions:
- table += f""" {cu} """
- cu_suffix = CUDA_SUFFIX[cu]
- for torch in torch_versions:
- if (torch, cu) in all_versions:
- cell = template.format(d2_version=d2_version, cuda=cu_suffix, torch=torch)
- else:
- cell = ""
- table += f"""{cell} """
- table += " "
- table += "
"
- print(table)
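-
-# Example invocation (emits the HTML table on stdout):
-#   python gen_install_table.py --d2-version 0.6 > install_table.html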
diff --git a/spaces/Terminus0501/vits-uma-genshin-honkai/Docker/vits.sh b/spaces/Terminus0501/vits-uma-genshin-honkai/Docker/vits.sh
deleted file mode 100644
index 2b87f26eda96d3800b73b4a21b210c78888a2299..0000000000000000000000000000000000000000
--- a/spaces/Terminus0501/vits-uma-genshin-honkai/Docker/vits.sh
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/bin/bash
-run() {
- echo -e "\033[32m已完成初始化,启动服务...\033[0m"
- python3 /app/vits-uma-genshin-honkai/app.py
-}
-install() {
- echo -e "\033[33m正在初始化:安装依赖....\033[0m"
- pip install -r /app/vits-uma-genshin-honkai/requirements.txt -i https://mirrors.ustc.edu.cn/pypi/web/simple
- echo -e "\033[33m正在下载模型....\033[0m"
- rm -f /app/vits-uma-genshin-honkai/model/G_953000.pth
- wget -O /app/vits-uma-genshin-honkai/model/G_953000.pth https://huggingface.co/spaces/ikechan8370/vits-uma-genshin-honkai/resolve/main/model/G_953000.pth
- echo -e "\033[32m初始化完成!\033[0m"
- run
-}
-
-if [ ! -f "/app/vits-uma-genshin-honkai/model/G_953000.pth" ] || [ "$(stat -c%s "/app/vits-uma-genshin-honkai/model/G_953000.pth")" -lt 10000 ]; then
- install
-else
- run
-fi
diff --git a/spaces/Thafx/sdAnalog/app.py b/spaces/Thafx/sdAnalog/app.py
deleted file mode 100644
index 763501a6b30b5d2dff967cd6cb9e1d5e9a608950..0000000000000000000000000000000000000000
--- a/spaces/Thafx/sdAnalog/app.py
+++ /dev/null
@@ -1,181 +0,0 @@
-from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler
-import gradio as gr
-import torch
-from PIL import Image
-import argparse
-
-model_id = 'wavymulder/Analog-Diffusion'
-prefix = 'analog style, '
-
-scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler")
-
-pipe = StableDiffusionPipeline.from_pretrained(
- model_id,
- torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
- scheduler=scheduler)
-
-pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(
- model_id,
- torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
- scheduler=scheduler)
-
-if torch.cuda.is_available():
- pipe = pipe.to("cuda")
- pipe_i2i = pipe_i2i.to("cuda")
-
-def error_str(error, title="Error"):
- return f"""#### {title}
- {error}""" if error else ""
-
-
-def _parse_args(prompt, generator):
-    # Minimal, self-contained argument parsing. Earlier revisions referenced
-    # an external config loader (`Arguments`, config overrides) that does not
-    # exist in this Space, so only the known flag is kept.
-    parser = argparse.ArgumentParser(
-        description="making it work."
-    )
-    parser.add_argument(
-        "--no-half-vae", action="store_true", help="no half vae"
-    )
-
-    cmdline_args, _ = parser.parse_known_args()
-    return cmdline_args
-
-def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False):
-    device = 'cuda' if torch.cuda.is_available() else 'cpu'
-    generator = torch.Generator(device).manual_seed(seed) if seed != 0 else None
- prompt = f"{prefix} {prompt}" if auto_prefix else prompt
-
- try:
- if img is not None:
- return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None
- else:
- return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None
- except Exception as e:
- return None, error_str(e)
-
-
-
-def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator):
-
- result = pipe(
- prompt,
- negative_prompt = neg_prompt,
- num_inference_steps = int(steps),
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return result.images[0]
-
-def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator):
-
- ratio = min(height / img.height, width / img.width)
- img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS)
- result = pipe_i2i(
- prompt,
- negative_prompt = neg_prompt,
- init_image = img,
- num_inference_steps = int(steps),
- strength = strength,
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return result.images[0]
-
-def fake_safety_checker(images, **kwargs):
-    # Bypass the NSFW filter: return the images unchanged and flag none.
-    return images, [False] * len(images)
-
-pipe.safety_checker = fake_safety_checker
-
-css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem}
-"""
-with gr.Blocks(css=css) as demo:
- gr.HTML(
- f"""
-        <div class="main-div">
-          <div>
-            <h1>📸 Analog Diffusion 📸</h1>
-          </div>
-          <p>
-            Demo for <a href="https://huggingface.co/wavymulder/Analog-Diffusion">Analog Diffusion</a>
-            Stable Diffusion model by <a href="https://huggingface.co/wavymulder">Wavymulder</a>. {"" if prefix else ""}
-            Running on {"<b>GPU 🔥</b>" if torch.cuda.is_available() else f"<b>CPU ⚡</b>"}.
-          </p>
-          <p>Please use the prompt template below to achieve the desired result:</p>
-          <p>
-            <b>Prompt</b>:
-            analog style photograph of *subject*, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, realistic, photo-realistic, full length frame, High detail RAW color art, piercing, diffused soft lighting, shallow depth of field, sharp focus, hyperrealism, cinematic lighting
-          </p>
-          <p>Example: analog style photograph of Heath Ledger as Batman</p>
-          <p><b>Important note</b>: Analog Diffusion works best at a 1:1 aspect ratio; it is also successful using tall aspect ratios.</p>
-          <p>
-            <b>Negative Prompt</b>:
-            blender illustration hdr, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature
-          </p>
-          <p>Have Fun & Enjoy ⚡ //THAFX</p>
-        </div>
- """
- )
- with gr.Row():
-
- with gr.Column(scale=55):
- with gr.Group():
- with gr.Row():
- prompt = gr.Textbox(label="Prompt", show_label=False,max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False)
- generate = gr.Button(value="Generate").style(rounded=(False, True, True, False))
-
- image_out = gr.Image(height=512)
- error_output = gr.Markdown()
-
- with gr.Column(scale=45):
- with gr.Tab("Options"):
- with gr.Group():
- neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image")
- auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically (analog style,)", value=prefix, visible=prefix)
-
- with gr.Row():
- guidance = gr.Slider(label="Guidance scale", value=7, maximum=15)
- steps = gr.Slider(label="Steps", value=20, minimum=2, maximum=75, step=1)
-
- with gr.Row():
- width = gr.Slider(label="Width", value=768, minimum=64, maximum=1024, step=8)
- height = gr.Slider(label="Height", value=768, minimum=64, maximum=1024, step=8)
-
- seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1)
-
- with gr.Tab("Image to image"):
- with gr.Group():
- image = gr.Image(label="Image", height=256, tool="editor", type="pil")
- strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5)
-
- auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False)
-
- inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix]
- outputs = [image_out, error_output]
- prompt.submit(inference, inputs=inputs, outputs=outputs)
- generate.click(inference, inputs=inputs, outputs=outputs)
-
-
-
-demo.queue(concurrency_count=1)
-demo.launch()
\ No newline at end of file
diff --git a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Phind.py b/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Phind.py
deleted file mode 100644
index 9fa8ec821f701d7841432e498a11ac9dd017978c..0000000000000000000000000000000000000000
--- a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Phind.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import os
-import json
-import time
-import subprocess
-
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://phind.com'
-model = ['gpt-4']
-supports_stream = True
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
-
- path = os.path.dirname(os.path.realpath(__file__))
- config = json.dumps({
- 'model': model,
- 'messages': messages}, separators=(',', ':'))
-
- cmd = ['python', f'{path}/helpers/phind.py', config]
-
- p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-
- for line in iter(p.stdout.readline, b''):
- if b'Just a moment... ' in line:
- os.system('clear' if os.name == 'posix' else 'cls')
-            yield 'Cloudflare error, please try again...'
- os._exit(0)
-
- else:
- if b'ping - 2023-' in line:
- continue
-
- yield line.decode('cp1251') #[:-1]
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
diff --git a/spaces/Vision-CAIR/minigpt4/minigpt4/datasets/data_utils.py b/spaces/Vision-CAIR/minigpt4/minigpt4/datasets/data_utils.py
deleted file mode 100644
index cddc4d68a8fa5a4e39bea0055d131c96ee81e7b7..0000000000000000000000000000000000000000
--- a/spaces/Vision-CAIR/minigpt4/minigpt4/datasets/data_utils.py
+++ /dev/null
@@ -1,196 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-import gzip
-import logging
-import os
-import random as rnd
-import tarfile
-import zipfile
-import random
-from typing import List
-from tqdm import tqdm
-
-import decord
-from decord import VideoReader
-import webdataset as wds
-import numpy as np
-import torch
-from torch.utils.data.dataset import IterableDataset
-
-from minigpt4.common.registry import registry
-from minigpt4.datasets.datasets.base_dataset import ConcatDataset
-
-
-decord.bridge.set_bridge("torch")
-MAX_INT = registry.get("MAX_INT")
-
-
-class ChainDataset(wds.DataPipeline):
- r"""Dataset for chaining multiple :class:`DataPipeline` s.
-
- This class is useful to assemble different existing dataset streams. The
- chaining operation is done on-the-fly, so concatenating large-scale
- datasets with this class will be efficient.
-
- Args:
- datasets (iterable of IterableDataset): datasets to be chained together
- """
- def __init__(self, datasets: List[wds.DataPipeline]) -> None:
- super().__init__()
- self.datasets = datasets
- self.prob = []
- self.names = []
- for dataset in self.datasets:
- if hasattr(dataset, 'name'):
- self.names.append(dataset.name)
- else:
- self.names.append('Unknown')
- if hasattr(dataset, 'sample_ratio'):
- self.prob.append(dataset.sample_ratio)
- else:
- self.prob.append(1)
- logging.info("One of the datapipeline doesn't define ratio and set to 1 automatically.")
-
- def __iter__(self):
- datastreams = [iter(dataset) for dataset in self.datasets]
- while True:
- select_datastream = random.choices(datastreams, weights=self.prob, k=1)[0]
- yield next(select_datastream)
-
-
-def apply_to_sample(f, sample):
- if len(sample) == 0:
- return {}
-
- def _apply(x):
- if torch.is_tensor(x):
- return f(x)
- elif isinstance(x, dict):
- return {key: _apply(value) for key, value in x.items()}
- elif isinstance(x, list):
-            return [_apply(item) for item in x]
- else:
- return x
-
- return _apply(sample)
-
-
-def move_to_cuda(sample):
- def _move_to_cuda(tensor):
- return tensor.cuda()
-
- return apply_to_sample(_move_to_cuda, sample)
-
-
-def prepare_sample(samples, cuda_enabled=True):
- if cuda_enabled:
- samples = move_to_cuda(samples)
-
- # TODO fp16 support
-
- return samples
-
-
-def reorg_datasets_by_split(datasets):
- """
- Organizes datasets by split.
-
- Args:
- datasets: dict of torch.utils.data.Dataset objects by name.
-
- Returns:
- Dict of datasets by split {split_name: List[Datasets]}.
- """
- # if len(datasets) == 1:
- # return datasets[list(datasets.keys())[0]]
- # else:
- reorg_datasets = dict()
-
- # reorganize by split
- for _, dataset in datasets.items():
- for split_name, dataset_split in dataset.items():
- if split_name not in reorg_datasets:
- reorg_datasets[split_name] = [dataset_split]
- else:
- reorg_datasets[split_name].append(dataset_split)
-
- return reorg_datasets
-
-
-def concat_datasets(datasets):
- """
- Concatenates multiple datasets into a single dataset.
-
-    It supports map-style datasets and DataPipeline from WebDataset. Currently, it does not support
-    generic IterableDataset because that would require creating separate samplers.
-
-    For now it only supports concatenating training datasets, assuming validation and testing
-    each have only a single dataset. This is because metrics should not be computed on concatenated
-    datasets.
-
- Args:
- datasets: dict of torch.utils.data.Dataset objects by split.
-
- Returns:
- Dict of concatenated datasets by split, "train" is the concatenation of multiple datasets,
- "val" and "test" remain the same.
-
- If the input training datasets contain both map-style and DataPipeline datasets, returns
- a tuple, where the first element is a concatenated map-style dataset and the second
- element is a chained DataPipeline dataset.
-
- """
- # concatenate datasets in the same split
- for split_name in datasets:
- if split_name != "train":
- assert (
- len(datasets[split_name]) == 1
- ), "Do not support multiple {} datasets.".format(split_name)
- datasets[split_name] = datasets[split_name][0]
- else:
- iterable_datasets, map_datasets = [], []
- for dataset in datasets[split_name]:
- if isinstance(dataset, wds.DataPipeline):
- logging.info(
- "Dataset {} is IterableDataset, can't be concatenated.".format(
- dataset
- )
- )
- iterable_datasets.append(dataset)
- elif isinstance(dataset, IterableDataset):
- raise NotImplementedError(
- "Do not support concatenation of generic IterableDataset."
- )
- else:
- map_datasets.append(dataset)
-
- # if len(iterable_datasets) > 0:
- # concatenate map-style datasets and iterable-style datasets separately
- if len(iterable_datasets) > 1:
- chained_datasets = (
- ChainDataset(iterable_datasets)
- )
- elif len(iterable_datasets) == 1:
- chained_datasets = iterable_datasets[0]
- else:
- chained_datasets = None
-
- concat_datasets = (
- ConcatDataset(map_datasets) if len(map_datasets) > 0 else None
- )
-
- train_datasets = concat_datasets, chained_datasets
- train_datasets = tuple([x for x in train_datasets if x is not None])
- train_datasets = (
- train_datasets[0] if len(train_datasets) == 1 else train_datasets
- )
-
- datasets[split_name] = train_datasets
-
- return datasets
-
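The ChainDataset class removed above interleaves several infinite data streams, drawing from each with probability proportional to its sample_ratio. A minimal sketch of that weighted round-robin, with toy generators standing in for webdataset pipelines:

import random

def chain_streams(streams, weights):
    # Mirror of ChainDataset.__iter__: pick a stream per step, weighted by sample_ratio.
    iterators = [iter(s) for s in streams]
    while True:
        chosen = random.choices(iterators, weights=weights, k=1)[0]
        yield next(chosen)

def repeat(value):
    while True:
        yield value

mixed = chain_streams([repeat("a"), repeat("b")], weights=[3, 1])
print([next(mixed) for _ in range(8)])  # roughly three 'a's per 'b'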
diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/sixel.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/sixel.py
deleted file mode 100644
index 2d14d6434def9b867f8f5da6359e558cb024978f..0000000000000000000000000000000000000000
--- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/sixel.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from .core import *
-
-libsixel = try_import('libsixel')
-
-def _sixel_encode(data, width, height):
- s = io.BytesIO()
- output = libsixel.sixel_output_new(lambda data, s: s.write(data), s)
- dither = libsixel.sixel_dither_new(256)
- w,h = int(width),int(height)
- libsixel.sixel_dither_initialize(dither, data, w, h, libsixel.SIXEL_PIXELFORMAT_RGBA8888)
- libsixel.sixel_encode(data, w, h, 1, dither, output)
- return s.getvalue().decode('ascii')
-
-def plot_sixel(fig=None):
- if not libsixel:
- warn("You could see this plot with `libsixel`. See https://github.com/saitoha/libsixel")
- return
- if fig is None: fig = plt.gcf()
- fig.canvas.draw()
- dpi = fig.get_dpi()
- res = _sixel_encode(fig.canvas.buffer_rgba(), fig.get_figwidth()* dpi, fig.get_figheight() * dpi)
- print(res)
-
diff --git a/spaces/XzJosh/Taffy-Bert-VITS2/resample.py b/spaces/XzJosh/Taffy-Bert-VITS2/resample.py
deleted file mode 100644
index 2ed1685654a371c5722168e9987809b05b1cb224..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Taffy-Bert-VITS2/resample.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import os
-import argparse
-import librosa
-import numpy as np
-from multiprocessing import Pool, cpu_count
-
-import soundfile
-from scipy.io import wavfile
-from tqdm import tqdm
-
-
-def process(item):
- spkdir, wav_name, args = item
- speaker = spkdir.replace("\\", "/").split("/")[-1]
- wav_path = os.path.join(args.in_dir, speaker, wav_name)
- if os.path.exists(wav_path) and '.wav' in wav_path:
- os.makedirs(os.path.join(args.out_dir, speaker), exist_ok=True)
- wav, sr = librosa.load(wav_path, sr=args.sr)
- soundfile.write(
- os.path.join(args.out_dir, speaker, wav_name),
- wav,
- sr
- )
-
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--sr", type=int, default=44100, help="sampling rate")
- parser.add_argument("--in_dir", type=str, default="./raw", help="path to source dir")
- parser.add_argument("--out_dir", type=str, default="./dataset", help="path to target dir")
- args = parser.parse_args()
-    # processes = 8
-    processes = cpu_count() - 2 if cpu_count() > 4 else 1
-    pool = Pool(processes=processes)
-
- for speaker in os.listdir(args.in_dir):
- spk_dir = os.path.join(args.in_dir, speaker)
- if os.path.isdir(spk_dir):
- print(spk_dir)
- for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])):
- pass
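The deleted resample.py fans audio files out over a process pool with imap_unordered. A self-contained sketch of the same fan-out, with a dummy task standing in for the librosa.load/soundfile.write pair (file names here are illustrative):

from multiprocessing import Pool, cpu_count

def work(item):
    # Stand-in for the librosa.load / soundfile.write pair in process() above.
    name, sr = item
    return f"{name} -> {sr} Hz"

if __name__ == "__main__":
    jobs = [(f"clip_{i}.wav", 44100) for i in range(8)]
    n_proc = cpu_count() - 2 if cpu_count() > 4 else 1
    with Pool(processes=n_proc) as pool:
        for result in pool.imap_unordered(work, jobs):
            print(result)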
diff --git a/spaces/Yeshwant123/mcc/app.py b/spaces/Yeshwant123/mcc/app.py
deleted file mode 100644
index 83cc8adbd357783f3191dc0a9f63ea03c778816d..0000000000000000000000000000000000000000
--- a/spaces/Yeshwant123/mcc/app.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import evaluate
-from evaluate.utils import launch_gradio_widget
-
-
-module = evaluate.load("Yeshwant123/mcc")
-launch_gradio_widget(module)
\ No newline at end of file
diff --git a/spaces/YukiKurosawaDev/ChatGLM/README.md b/spaces/YukiKurosawaDev/ChatGLM/README.md
deleted file mode 100644
index 6c717692a9c3ff657b14592db1d92909d6fe9985..0000000000000000000000000000000000000000
--- a/spaces/YukiKurosawaDev/ChatGLM/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ChatGLM
-emoji: 👀
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Yuliang/ECON/lib/pymafx/utils/blob.py b/spaces/Yuliang/ECON/lib/pymafx/utils/blob.py
deleted file mode 100644
index 11814bbec48887f622d11a786ab25271f98d5450..0000000000000000000000000000000000000000
--- a/spaces/Yuliang/ECON/lib/pymafx/utils/blob.py
+++ /dev/null
@@ -1,175 +0,0 @@
-# Copyright (c) 2017-present, Facebook, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##############################################################################
-#
-# Based on:
-# --------------------------------------------------------
-# Fast R-CNN
-# Copyright (c) 2015 Microsoft
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Ross Girshick
-# --------------------------------------------------------
-"""blob helper functions."""
-
-from __future__ import (
- absolute_import,
- division,
- print_function,
- unicode_literals,
-)
-
-import cv2
-import numpy as np
-from models.core.config import cfg
-from six.moves import cPickle as pickle
-
-
-def get_image_blob(im, target_scale, target_max_size):
- """Convert an image into a network input.
-
- Arguments:
- im (ndarray): a color image in BGR order
-
- Returns:
- blob (ndarray): a data blob holding an image pyramid
- im_scale (float): image scale (target size) / (original size)
- im_info (ndarray)
- """
- processed_im, im_scale = prep_im_for_blob(im, cfg.PIXEL_MEANS, [target_scale], target_max_size)
- blob = im_list_to_blob(processed_im)
- # NOTE: this height and width may be larger than actual scaled input image
- # due to the FPN.COARSEST_STRIDE related padding in im_list_to_blob. We are
- # maintaining this behavior for now to make existing results exactly
- # reproducible (in practice using the true input image height and width
- # yields nearly the same results, but they are sometimes slightly different
- # because predictions near the edge of the image will be pruned more
- # aggressively).
- height, width = blob.shape[2], blob.shape[3]
- im_info = np.hstack((height, width, im_scale))[np.newaxis, :]
- return blob, im_scale, im_info.astype(np.float32)
-
-
-def im_list_to_blob(ims):
- """Convert a list of images into a network input. Assumes images were
- prepared using prep_im_for_blob or equivalent: i.e.
- - BGR channel order
- - pixel means subtracted
- - resized to the desired input size
- - float32 numpy ndarray format
-    Output is a 4D NCHW tensor of the images concatenated along axis 0.
- """
- if not isinstance(ims, list):
- ims = [ims]
- max_shape = get_max_shape([im.shape[:2] for im in ims])
-
- num_images = len(ims)
- blob = np.zeros((num_images, max_shape[0], max_shape[1], 3), dtype=np.float32)
- for i in range(num_images):
- im = ims[i]
- blob[i, 0:im.shape[0], 0:im.shape[1], :] = im
- # Move channels (axis 3) to axis 1
- # Axis order will become: (batch elem, channel, height, width)
- channel_swap = (0, 3, 1, 2)
- blob = blob.transpose(channel_swap)
- return blob
-
-
-def get_max_shape(im_shapes):
- """Calculate max spatial size (h, w) for batching given a list of image shapes
- """
- max_shape = np.array(im_shapes).max(axis=0)
- assert max_shape.size == 2
- # Pad the image so they can be divisible by a stride
- if cfg.FPN.FPN_ON:
- stride = float(cfg.FPN.COARSEST_STRIDE)
- max_shape[0] = int(np.ceil(max_shape[0] / stride) * stride)
- max_shape[1] = int(np.ceil(max_shape[1] / stride) * stride)
- return max_shape
-
-
-def prep_im_for_blob(im, pixel_means, target_sizes, max_size):
- """Prepare an image for use as a network input blob. Specially:
- - Subtract per-channel pixel mean
- - Convert to float32
- - Rescale to each of the specified target size (capped at max_size)
- Returns a list of transformed images, one for each target size. Also returns
- the scale factors that were used to compute each returned image.
- """
- im = im.astype(np.float32, copy=False)
- im -= pixel_means
- im_shape = im.shape
- im_size_min = np.min(im_shape[0:2])
- im_size_max = np.max(im_shape[0:2])
-
- ims = []
- im_scales = []
- for target_size in target_sizes:
- im_scale = get_target_scale(im_size_min, im_size_max, target_size, max_size)
- im_resized = cv2.resize(
- im, None, None, fx=im_scale, fy=im_scale, interpolation=cv2.INTER_LINEAR
- )
- ims.append(im_resized)
- im_scales.append(im_scale)
- return ims, im_scales
-
-
-def get_im_blob_sizes(im_shape, target_sizes, max_size):
- """Calculate im blob size for multiple target_sizes given original im shape
- """
- im_size_min = np.min(im_shape)
- im_size_max = np.max(im_shape)
- im_sizes = []
- for target_size in target_sizes:
- im_scale = get_target_scale(im_size_min, im_size_max, target_size, max_size)
- im_sizes.append(np.round(im_shape * im_scale))
- return np.array(im_sizes)
-
-
-def get_target_scale(im_size_min, im_size_max, target_size, max_size):
- """Calculate target resize scale
- """
- im_scale = float(target_size) / float(im_size_min)
- # Prevent the biggest axis from being more than max_size
- if np.round(im_scale * im_size_max) > max_size:
- im_scale = float(max_size) / float(im_size_max)
- return im_scale
-
-
-def zeros(shape, int32=False):
- """Return a blob of all zeros of the given shape with the correct float or
- int data type.
- """
- return np.zeros(shape, dtype=np.int32 if int32 else np.float32)
-
-
-def ones(shape, int32=False):
- """Return a blob of all ones of the given shape with the correct float or
- int data type.
- """
- return np.ones(shape, dtype=np.int32 if int32 else np.float32)
-
-
-def serialize(obj):
-    """Serialize a Python object using pickle and encode it as an array of
-    float32 values so that it can be fed into the workspace. See deserialize().
-    """
-    # np.frombuffer replaces the deprecated np.fromstring for binary data.
-    return np.frombuffer(pickle.dumps(obj), dtype=np.uint8).astype(np.float32)
-
-
-def deserialize(arr):
- """Unserialize a Python object from an array of float32 values fetched from
- a workspace. See serialize().
- """
- return pickle.loads(arr.astype(np.uint8).tobytes())
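The key batching step in the removed blob.py is im_list_to_blob: zero-pad all images to the largest (h, w) in the batch, then transpose NHWC to NCHW. A standalone sketch of just that step:

import numpy as np

def to_blob(images):
    # Zero-pad every image to the max (h, w) in the batch, as im_list_to_blob does.
    max_h = max(im.shape[0] for im in images)
    max_w = max(im.shape[1] for im in images)
    blob = np.zeros((len(images), max_h, max_w, 3), dtype=np.float32)
    for i, im in enumerate(images):
        blob[i, :im.shape[0], :im.shape[1], :] = im
    return blob.transpose(0, 3, 1, 2)  # NHWC -> NCHW

imgs = [np.ones((4, 6, 3), np.float32), np.ones((5, 3, 3), np.float32)]
print(to_blob(imgs).shape)  # (2, 3, 5, 6)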
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/two_stage.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/two_stage.py
deleted file mode 100644
index ba5bdde980dc0cd76375455c9c7ffaae4b25531e..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/two_stage.py
+++ /dev/null
@@ -1,215 +0,0 @@
-import torch
-import torch.nn as nn
-
-# from mmdet.core import bbox2result, bbox2roi, build_assigner, build_sampler
-from ..builder import DETECTORS, build_backbone, build_head, build_neck
-from .base import BaseDetector
-
-
-@DETECTORS.register_module()
-class TwoStageDetector(BaseDetector):
- """Base class for two-stage detectors.
-
-    Two-stage detectors typically consist of a region proposal network and a
-    task-specific regression head.
- """
-
- def __init__(self,
- backbone,
- neck=None,
- rpn_head=None,
- roi_head=None,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(TwoStageDetector, self).__init__()
- self.backbone = build_backbone(backbone)
-
- if neck is not None:
- self.neck = build_neck(neck)
-
- if rpn_head is not None:
- rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None
- rpn_head_ = rpn_head.copy()
- rpn_head_.update(train_cfg=rpn_train_cfg, test_cfg=test_cfg.rpn)
- self.rpn_head = build_head(rpn_head_)
-
- if roi_head is not None:
- # update train and test cfg here for now
- # TODO: refactor assigner & sampler
- rcnn_train_cfg = train_cfg.rcnn if train_cfg is not None else None
- roi_head.update(train_cfg=rcnn_train_cfg)
- roi_head.update(test_cfg=test_cfg.rcnn)
- self.roi_head = build_head(roi_head)
-
- self.train_cfg = train_cfg
- self.test_cfg = test_cfg
-
- self.init_weights(pretrained=pretrained)
-
- @property
- def with_rpn(self):
- """bool: whether the detector has RPN"""
- return hasattr(self, 'rpn_head') and self.rpn_head is not None
-
- @property
- def with_roi_head(self):
- """bool: whether the detector has a RoI head"""
- return hasattr(self, 'roi_head') and self.roi_head is not None
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in detector.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- super(TwoStageDetector, self).init_weights(pretrained)
- self.backbone.init_weights(pretrained=pretrained)
- if self.with_neck:
- if isinstance(self.neck, nn.Sequential):
- for m in self.neck:
- m.init_weights()
- else:
- self.neck.init_weights()
- if self.with_rpn:
- self.rpn_head.init_weights()
- if self.with_roi_head:
- self.roi_head.init_weights(pretrained)
-
- def extract_feat(self, img):
- """Directly extract features from the backbone+neck."""
- x = self.backbone(img)
- if self.with_neck:
- x = self.neck(x)
- return x
-
- def forward_dummy(self, img):
- """Used for computing network flops.
-
- See `mmdetection/tools/analysis_tools/get_flops.py`
- """
- outs = ()
- # backbone
- x = self.extract_feat(img)
- # rpn
- if self.with_rpn:
- rpn_outs = self.rpn_head(x)
- outs = outs + (rpn_outs, )
- proposals = torch.randn(1000, 4).to(img.device)
- # roi_head
- roi_outs = self.roi_head.forward_dummy(x, proposals)
- outs = outs + (roi_outs, )
- return outs
-
- def forward_train(self,
- img,
- img_metas,
- gt_bboxes,
- gt_labels,
- gt_bboxes_ignore=None,
- gt_masks=None,
- proposals=None,
- **kwargs):
- """
- Args:
- img (Tensor): of shape (N, C, H, W) encoding input images.
- Typically these should be mean centered and std scaled.
-
- img_metas (list[dict]): list of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmdet/datasets/pipelines/formatting.py:Collect`.
-
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
-
- gt_labels (list[Tensor]): class indices corresponding to each box
-
- gt_bboxes_ignore (None | list[Tensor]): specify which bounding
- boxes can be ignored when computing the loss.
-
- gt_masks (None | Tensor) : true segmentation masks for each box
- used if the architecture supports a segmentation task.
-
- proposals : override rpn proposals with custom proposals. Use when
- `with_rpn` is False.
-
- Returns:
- dict[str, Tensor]: a dictionary of loss components
- """
- x = self.extract_feat(img)
-
- losses = dict()
-
- # RPN forward and loss
- if self.with_rpn:
- proposal_cfg = self.train_cfg.get('rpn_proposal',
- self.test_cfg.rpn)
- rpn_losses, proposal_list = self.rpn_head.forward_train(
- x,
- img_metas,
- gt_bboxes,
- gt_labels=None,
- gt_bboxes_ignore=gt_bboxes_ignore,
- proposal_cfg=proposal_cfg)
- losses.update(rpn_losses)
- else:
- proposal_list = proposals
-
- roi_losses = self.roi_head.forward_train(x, img_metas, proposal_list,
- gt_bboxes, gt_labels,
- gt_bboxes_ignore, gt_masks,
- **kwargs)
- losses.update(roi_losses)
-
- return losses
-
- async def async_simple_test(self,
- img,
- img_meta,
- proposals=None,
- rescale=False):
- """Async test without augmentation."""
- assert self.with_bbox, 'Bbox head must be implemented.'
- x = self.extract_feat(img)
-
- if proposals is None:
- proposal_list = await self.rpn_head.async_simple_test_rpn(
- x, img_meta)
- else:
- proposal_list = proposals
-
- return await self.roi_head.async_simple_test(
- x, proposal_list, img_meta, rescale=rescale)
-
- def simple_test(self, img, img_metas, proposals=None, rescale=False):
- """Test without augmentation."""
- assert self.with_bbox, 'Bbox head must be implemented.'
-
- x = self.extract_feat(img)
-
-        # get the original input shape to support onnx dynamic input shapes
- if torch.onnx.is_in_onnx_export():
- img_shape = torch._shape_as_tensor(img)[2:]
- img_metas[0]['img_shape_for_onnx'] = img_shape
-
- if proposals is None:
- proposal_list = self.rpn_head.simple_test_rpn(x, img_metas)
- else:
- proposal_list = proposals
-
- return self.roi_head.simple_test(
- x, proposal_list, img_metas, rescale=rescale)
-
- def aug_test(self, imgs, img_metas, rescale=False):
- """Test with augmentations.
-
- If rescale is False, then returned bboxes and masks will fit the scale
- of imgs[0].
- """
- x = self.extract_feats(imgs)
- proposal_list = self.rpn_head.aug_test_rpn(x, img_metas)
- return self.roi_head.aug_test(
- x, proposal_list, img_metas, rescale=rescale)
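The removed TwoStageDetector test path reduces to: backbone features, optional neck, RPN proposals (unless supplied), then the RoI head. A schematic sketch with plain callables standing in for the mmdet modules (all stand-in names are illustrative):

def simple_test_flow(img, backbone, neck, rpn_head, roi_head, proposals=None):
    x = neck(backbone(img))            # multi-scale features (extract_feat)
    if proposals is None:
        proposals = rpn_head(x)        # RPN proposals when none are given
    return roi_head(x, proposals)      # per-proposal boxes/labels/masks

# Toy usage with identity stand-ins:
print(simple_test_flow("img", lambda x: x, lambda x: x,
                       lambda x: ["p1", "p2"], lambda x, p: (x, p)))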
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/necks/fpn.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/necks/fpn.py
deleted file mode 100644
index a53b2a69500f8c2edb835abc3ff0ccc2173d1fb1..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/necks/fpn.py
+++ /dev/null
@@ -1,212 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule, xavier_init
-
-from ..builder import NECKS
-
-
-@NECKS.register_module()
-class FPN(nn.Module):
- """Feature Pyramid Network.
-
- This is an implementation of - Feature Pyramid Networks for Object
- Detection (https://arxiv.org/abs/1612.03144)
-
- Args:
- in_channels (List[int]): Number of input channels per scale.
- out_channels (int): Number of output channels (used at each scale)
- num_outs (int): Number of output scales.
- start_level (int): Index of the start input backbone level used to
- build the feature pyramid. Default: 0.
- end_level (int): Index of the end input backbone level (exclusive) to
- build the feature pyramid. Default: -1, which means the last level.
- add_extra_convs (bool | str): If bool, it decides whether to add conv
- layers on top of the original feature maps. Default to False.
- If True, its actual mode is specified by `extra_convs_on_inputs`.
- If str, it specifies the source feature map of the extra convs.
- Only the following options are allowed
-
- - 'on_input': Last feat map of neck inputs (i.e. backbone feature).
- - 'on_lateral': Last feature map after lateral convs.
- - 'on_output': The last output feature map after fpn convs.
- extra_convs_on_inputs (bool, deprecated): Whether to apply extra convs
- on the original feature from the backbone. If True,
- it is equivalent to `add_extra_convs='on_input'`. If False, it is
- equivalent to set `add_extra_convs='on_output'`. Default to True.
- relu_before_extra_convs (bool): Whether to apply relu before the extra
- conv. Default: False.
- no_norm_on_lateral (bool): Whether to apply norm on lateral.
- Default: False.
- conv_cfg (dict): Config dict for convolution layer. Default: None.
- norm_cfg (dict): Config dict for normalization layer. Default: None.
- act_cfg (str): Config dict for activation layer in ConvModule.
- Default: None.
- upsample_cfg (dict): Config dict for interpolate layer.
- Default: `dict(mode='nearest')`
-
- Example:
- >>> import torch
- >>> in_channels = [2, 3, 5, 7]
- >>> scales = [340, 170, 84, 43]
- >>> inputs = [torch.rand(1, c, s, s)
- ... for c, s in zip(in_channels, scales)]
- >>> self = FPN(in_channels, 11, len(in_channels)).eval()
- >>> outputs = self.forward(inputs)
- >>> for i in range(len(outputs)):
- ... print(f'outputs[{i}].shape = {outputs[i].shape}')
- outputs[0].shape = torch.Size([1, 11, 340, 340])
- outputs[1].shape = torch.Size([1, 11, 170, 170])
- outputs[2].shape = torch.Size([1, 11, 84, 84])
- outputs[3].shape = torch.Size([1, 11, 43, 43])
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- num_outs,
- start_level=0,
- end_level=-1,
- add_extra_convs=False,
- extra_convs_on_inputs=False,
- relu_before_extra_convs=False,
- no_norm_on_lateral=False,
- conv_cfg=None,
- norm_cfg=None,
- act_cfg=None,
- upsample_cfg=dict(mode='nearest')):
- super(FPN, self).__init__()
- assert isinstance(in_channels, list)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.num_ins = len(in_channels)
- self.num_outs = num_outs
- self.relu_before_extra_convs = relu_before_extra_convs
- self.no_norm_on_lateral = no_norm_on_lateral
- self.fp16_enabled = False
- self.upsample_cfg = upsample_cfg.copy()
-
- if end_level == -1:
- self.backbone_end_level = self.num_ins
- assert num_outs >= self.num_ins - start_level
- else:
- # if end_level < inputs, no extra level is allowed
- self.backbone_end_level = end_level
- assert end_level <= len(in_channels)
- assert num_outs == end_level - start_level
- self.start_level = start_level
- self.end_level = end_level
- self.add_extra_convs = add_extra_convs
- assert isinstance(add_extra_convs, (str, bool))
- if isinstance(add_extra_convs, str):
- # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output'
- assert add_extra_convs in ('on_input', 'on_lateral', 'on_output')
- elif add_extra_convs: # True
- if extra_convs_on_inputs:
- # For compatibility with previous release
- # TODO: deprecate `extra_convs_on_inputs`
- self.add_extra_convs = 'on_input'
- else:
- self.add_extra_convs = 'on_output'
-
- self.lateral_convs = nn.ModuleList()
- self.fpn_convs = nn.ModuleList()
-
- for i in range(self.start_level, self.backbone_end_level):
- l_conv = ConvModule(
- in_channels[i],
- out_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg if not self.no_norm_on_lateral else None,
- act_cfg=act_cfg,
- inplace=False)
- fpn_conv = ConvModule(
- out_channels,
- out_channels,
- 3,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
-
- self.lateral_convs.append(l_conv)
- self.fpn_convs.append(fpn_conv)
-
- # add extra conv layers (e.g., RetinaNet)
- extra_levels = num_outs - self.backbone_end_level + self.start_level
- if self.add_extra_convs and extra_levels >= 1:
- for i in range(extra_levels):
- if i == 0 and self.add_extra_convs == 'on_input':
- in_channels = self.in_channels[self.backbone_end_level - 1]
- else:
- in_channels = out_channels
- extra_fpn_conv = ConvModule(
- in_channels,
- out_channels,
- 3,
- stride=2,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
- self.fpn_convs.append(extra_fpn_conv)
-
- # default init_weights for conv(msra) and norm in ConvModule
- def init_weights(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- xavier_init(m, distribution='uniform')
-
- def forward(self, inputs):
- assert len(inputs) == len(self.in_channels)
-
- # build laterals
- laterals = [
- lateral_conv(inputs[i + self.start_level])
- for i, lateral_conv in enumerate(self.lateral_convs)
- ]
-
- # build top-down path
- used_backbone_levels = len(laterals)
- for i in range(used_backbone_levels - 1, 0, -1):
- # In some cases, fixing `scale factor` (e.g. 2) is preferred, but
- # it cannot co-exist with `size` in `F.interpolate`.
- if 'scale_factor' in self.upsample_cfg:
- laterals[i - 1] += F.interpolate(laterals[i],
- **self.upsample_cfg)
- else:
- prev_shape = laterals[i - 1].shape[2:]
- laterals[i - 1] += F.interpolate(
- laterals[i], size=prev_shape, **self.upsample_cfg)
-
- # build outputs
- # part 1: from original levels
- outs = [
- self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels)
- ]
- # part 2: add extra levels
- if self.num_outs > len(outs):
- # use max pool to get more levels on top of outputs
- # (e.g., Faster R-CNN, Mask R-CNN)
- if not self.add_extra_convs:
- for i in range(self.num_outs - used_backbone_levels):
- outs.append(F.max_pool2d(outs[-1], 1, stride=2))
- # add conv layers on top of original feature maps (RetinaNet)
- else:
- if self.add_extra_convs == 'on_input':
- extra_source = inputs[self.backbone_end_level - 1]
- elif self.add_extra_convs == 'on_lateral':
- extra_source = laterals[-1]
- elif self.add_extra_convs == 'on_output':
- extra_source = outs[-1]
- else:
- raise NotImplementedError
- outs.append(self.fpn_convs[used_backbone_levels](extra_source))
- for i in range(used_backbone_levels + 1, self.num_outs):
- if self.relu_before_extra_convs:
- outs.append(self.fpn_convs[i](F.relu(outs[-1])))
- else:
- outs.append(self.fpn_convs[i](outs[-1]))
- return tuple(outs)
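The heart of the deleted FPN is its top-down pathway: walk the laterals from coarse to fine, upsampling each coarser map to the next finer one's spatial size and adding it in. A minimal plain-PyTorch sketch of that loop:

import torch
import torch.nn.functional as F

def top_down_merge(laterals):
    # The core loop of FPN.forward: upsample the coarser map and add it in.
    laterals = list(laterals)
    for i in range(len(laterals) - 1, 0, -1):
        laterals[i - 1] = laterals[i - 1] + F.interpolate(
            laterals[i], size=laterals[i - 1].shape[2:], mode="nearest")
    return laterals

feats = [torch.rand(1, 8, s, s) for s in (64, 32, 16)]
print([t.shape for t in top_down_merge(feats)])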
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/image/colorspace.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/image/colorspace.py
deleted file mode 100644
index 814533952fdfda23d67cb6a3073692d8c1156add..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/image/colorspace.py
+++ /dev/null
@@ -1,306 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import cv2
-import numpy as np
-
-
-def imconvert(img, src, dst):
- """Convert an image from the src colorspace to dst colorspace.
-
- Args:
- img (ndarray): The input image.
- src (str): The source colorspace, e.g., 'rgb', 'hsv'.
- dst (str): The destination colorspace, e.g., 'rgb', 'hsv'.
-
- Returns:
- ndarray: The converted image.
- """
- code = getattr(cv2, f'COLOR_{src.upper()}2{dst.upper()}')
- out_img = cv2.cvtColor(img, code)
- return out_img
-
-
-def bgr2gray(img, keepdim=False):
- """Convert a BGR image to grayscale image.
-
- Args:
- img (ndarray): The input image.
- keepdim (bool): If False (by default), then return the grayscale image
- with 2 dims, otherwise 3 dims.
-
- Returns:
- ndarray: The converted grayscale image.
- """
- out_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
- if keepdim:
- out_img = out_img[..., None]
- return out_img
-
-
-def rgb2gray(img, keepdim=False):
- """Convert a RGB image to grayscale image.
-
- Args:
- img (ndarray): The input image.
- keepdim (bool): If False (by default), then return the grayscale image
- with 2 dims, otherwise 3 dims.
-
- Returns:
- ndarray: The converted grayscale image.
- """
- out_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
- if keepdim:
- out_img = out_img[..., None]
- return out_img
-
-
-def gray2bgr(img):
- """Convert a grayscale image to BGR image.
-
- Args:
- img (ndarray): The input image.
-
- Returns:
- ndarray: The converted BGR image.
- """
- img = img[..., None] if img.ndim == 2 else img
- out_img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
- return out_img
-
-
-def gray2rgb(img):
- """Convert a grayscale image to RGB image.
-
- Args:
- img (ndarray): The input image.
-
- Returns:
- ndarray: The converted RGB image.
- """
- img = img[..., None] if img.ndim == 2 else img
- out_img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
- return out_img
-
-
-def _convert_input_type_range(img):
- """Convert the type and range of the input image.
-
- It converts the input image to np.float32 type and range of [0, 1].
- It is mainly used for pre-processing the input image in colorspace
- conversion functions such as rgb2ycbcr and ycbcr2rgb.
-
- Args:
- img (ndarray): The input image. It accepts:
- 1. np.uint8 type with range [0, 255];
- 2. np.float32 type with range [0, 1].
-
- Returns:
- (ndarray): The converted image with type of np.float32 and range of
- [0, 1].
- """
- img_type = img.dtype
- img = img.astype(np.float32)
- if img_type == np.float32:
- pass
- elif img_type == np.uint8:
- img /= 255.
- else:
- raise TypeError('The img type should be np.float32 or np.uint8, '
- f'but got {img_type}')
- return img
-
-
-def _convert_output_type_range(img, dst_type):
- """Convert the type and range of the image according to dst_type.
-
- It converts the image to desired type and range. If `dst_type` is np.uint8,
- images will be converted to np.uint8 type with range [0, 255]. If
- `dst_type` is np.float32, it converts the image to np.float32 type with
- range [0, 1].
- It is mainly used for post-processing images in colorspace conversion
- functions such as rgb2ycbcr and ycbcr2rgb.
-
- Args:
- img (ndarray): The image to be converted with np.float32 type and
- range [0, 255].
- dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it
- converts the image to np.uint8 type with range [0, 255]. If
- dst_type is np.float32, it converts the image to np.float32 type
- with range [0, 1].
-
- Returns:
- (ndarray): The converted image with desired type and range.
- """
- if dst_type not in (np.uint8, np.float32):
- raise TypeError('The dst_type should be np.float32 or np.uint8, '
- f'but got {dst_type}')
- if dst_type == np.uint8:
- img = img.round()
- else:
- img /= 255.
- return img.astype(dst_type)
-
-
-def rgb2ycbcr(img, y_only=False):
- """Convert a RGB image to YCbCr image.
-
- This function produces the same results as Matlab's `rgb2ycbcr` function.
- It implements the ITU-R BT.601 conversion for standard-definition
- television. See more details in
- https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion.
-
- It differs from a similar function in cv2.cvtColor: `RGB <-> YCrCb`.
- In OpenCV, it implements a JPEG conversion. See more details in
- https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion.
-
- Args:
- img (ndarray): The input image. It accepts:
- 1. np.uint8 type with range [0, 255];
- 2. np.float32 type with range [0, 1].
- y_only (bool): Whether to only return Y channel. Default: False.
-
- Returns:
- ndarray: The converted YCbCr image. The output image has the same type
- and range as input image.
- """
- img_type = img.dtype
- img = _convert_input_type_range(img)
- if y_only:
- out_img = np.dot(img, [65.481, 128.553, 24.966]) + 16.0
- else:
- out_img = np.matmul(
- img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786],
- [24.966, 112.0, -18.214]]) + [16, 128, 128]
- out_img = _convert_output_type_range(out_img, img_type)
- return out_img
-
-
-def bgr2ycbcr(img, y_only=False):
- """Convert a BGR image to YCbCr image.
-
- The bgr version of rgb2ycbcr.
- It implements the ITU-R BT.601 conversion for standard-definition
- television. See more details in
- https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion.
-
- It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`.
- In OpenCV, it implements a JPEG conversion. See more details in
- https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion.
-
- Args:
- img (ndarray): The input image. It accepts:
- 1. np.uint8 type with range [0, 255];
- 2. np.float32 type with range [0, 1].
- y_only (bool): Whether to only return Y channel. Default: False.
-
- Returns:
- ndarray: The converted YCbCr image. The output image has the same type
- and range as input image.
- """
- img_type = img.dtype
- img = _convert_input_type_range(img)
- if y_only:
- out_img = np.dot(img, [24.966, 128.553, 65.481]) + 16.0
- else:
- out_img = np.matmul(
- img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786],
- [65.481, -37.797, 112.0]]) + [16, 128, 128]
- out_img = _convert_output_type_range(out_img, img_type)
- return out_img
-
-
-def ycbcr2rgb(img):
- """Convert a YCbCr image to RGB image.
-
- This function produces the same results as Matlab's ycbcr2rgb function.
- It implements the ITU-R BT.601 conversion for standard-definition
- television. See more details in
- https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion.
-
- It differs from a similar function in cv2.cvtColor: `YCrCb <-> RGB`.
- In OpenCV, it implements a JPEG conversion. See more details in
- https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion.
-
- Args:
- img (ndarray): The input image. It accepts:
- 1. np.uint8 type with range [0, 255];
- 2. np.float32 type with range [0, 1].
-
- Returns:
- ndarray: The converted RGB image. The output image has the same type
- and range as input image.
- """
- img_type = img.dtype
- img = _convert_input_type_range(img) * 255
- out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621],
- [0, -0.00153632, 0.00791071],
- [0.00625893, -0.00318811, 0]]) * 255.0 + [
- -222.921, 135.576, -276.836
- ]
- out_img = _convert_output_type_range(out_img, img_type)
- return out_img
-
-
-def ycbcr2bgr(img):
- """Convert a YCbCr image to BGR image.
-
- The bgr version of ycbcr2rgb.
- It implements the ITU-R BT.601 conversion for standard-definition
- television. See more details in
- https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion.
-
- It differs from a similar function in cv2.cvtColor: `YCrCb <-> BGR`.
- In OpenCV, it implements a JPEG conversion. See more details in
- https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion.
-
- Args:
- img (ndarray): The input image. It accepts:
- 1. np.uint8 type with range [0, 255];
- 2. np.float32 type with range [0, 1].
-
- Returns:
- ndarray: The converted BGR image. The output image has the same type
- and range as input image.
- """
- img_type = img.dtype
- img = _convert_input_type_range(img) * 255
- out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621],
- [0.00791071, -0.00153632, 0],
- [0, -0.00318811, 0.00625893]]) * 255.0 + [
- -276.836, 135.576, -222.921
- ]
- out_img = _convert_output_type_range(out_img, img_type)
- return out_img
-
-
-def convert_color_factory(src, dst):
-
- code = getattr(cv2, f'COLOR_{src.upper()}2{dst.upper()}')
-
- def convert_color(img):
- out_img = cv2.cvtColor(img, code)
- return out_img
-
- convert_color.__doc__ = f"""Convert a {src.upper()} image to {dst.upper()}
- image.
-
- Args:
- img (ndarray or str): The input image.
-
- Returns:
- ndarray: The converted {dst.upper()} image.
- """
-
- return convert_color
-
-
-bgr2rgb = convert_color_factory('bgr', 'rgb')
-
-rgb2bgr = convert_color_factory('rgb', 'bgr')
-
-bgr2hsv = convert_color_factory('bgr', 'hsv')
-
-hsv2bgr = convert_color_factory('hsv', 'bgr')
-
-bgr2hls = convert_color_factory('bgr', 'hls')
-
-hls2bgr = convert_color_factory('hls', 'bgr')
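The convert_color_factory at the end of the removed colorspace.py builds converters by resolving the cv2 color code once and closing over it. A reduced sketch of that factory pattern plus a quick sanity check (requires OpenCV):

import cv2
import numpy as np

def make_converter(src, dst):
    # Same closure trick as convert_color_factory: resolve the cv2 code once.
    code = getattr(cv2, f"COLOR_{src.upper()}2{dst.upper()}")
    return lambda img: cv2.cvtColor(img, code)

bgr2rgb_demo = make_converter("bgr", "rgb")
img = np.zeros((2, 2, 3), np.uint8)
img[..., 0] = 255                 # pure blue in BGR order
print(bgr2rgb_demo(img)[0, 0])    # [  0   0 255] once converted to RGB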
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/font/quartz.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/font/quartz.py
deleted file mode 100644
index 26b9d967edcc60a400aef29ccf2557e5ebffe301..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/font/quartz.py
+++ /dev/null
@@ -1,265 +0,0 @@
-# TODO Tiger and later: need to set kWindowApplicationScaledAttribute for DPI independence?
-
-import math
-import warnings
-from ctypes import c_void_p, c_int32, byref, c_byte
-
-from pyglet.font import base
-import pyglet.image
-
-from pyglet.libs.darwin import cocoapy
-
-cf = cocoapy.cf
-ct = cocoapy.ct
-quartz = cocoapy.quartz
-
-
-class QuartzGlyphRenderer(base.GlyphRenderer):
- def __init__(self, font):
- super().__init__(font)
- self.font = font
-
- def render(self, text):
- # Using CTLineDraw seems to be the only way to make sure that the text
- # is drawn with the specified font when that font is a graphics font loaded from
- # memory. For whatever reason, [NSAttributedString drawAtPoint:] ignores
- # the graphics font if it not registered with the font manager.
- # So we just use CTLineDraw for both graphics fonts and installed fonts.
-
- ctFont = self.font.ctFont
-
- # Create an attributed string using text and font.
- attributes = c_void_p(cf.CFDictionaryCreateMutable(None, 1, cf.kCFTypeDictionaryKeyCallBacks, cf.kCFTypeDictionaryValueCallBacks))
- cf.CFDictionaryAddValue(attributes, cocoapy.kCTFontAttributeName, ctFont)
- string = c_void_p(cf.CFAttributedStringCreate(None, cocoapy.CFSTR(text), attributes))
-
- # Create a CTLine object to render the string.
- line = c_void_p(ct.CTLineCreateWithAttributedString(string))
- cf.CFRelease(string)
- cf.CFRelease(attributes)
-
- # Get a bounding rectangle for glyphs in string.
- count = len(text)
- chars = (cocoapy.UniChar * count)(*list(map(ord,str(text))))
- glyphs = (cocoapy.CGGlyph * count)()
- ct.CTFontGetGlyphsForCharacters(ctFont, chars, glyphs, count)
- rect = ct.CTFontGetBoundingRectsForGlyphs(ctFont, 0, glyphs, None, count)
-
- # Get advance for all glyphs in string.
- advance = ct.CTFontGetAdvancesForGlyphs(ctFont, 0, glyphs, None, count)
-
- # Set image parameters:
- # We add 2 pixels to the bitmap width and height so that there will be a 1-pixel border
- # around the glyph image when it is placed in the texture atlas. This prevents
- # weird artifacts from showing up around the edges of the rendered glyph textures.
- # We adjust the baseline and lsb of the glyph by 1 pixel accordingly.
- width = max(int(math.ceil(rect.size.width) + 2), 1)
- height = max(int(math.ceil(rect.size.height) + 2), 1)
- baseline = -int(math.floor(rect.origin.y)) + 1
- lsb = int(math.floor(rect.origin.x)) - 1
- advance = int(round(advance))
-
- # Create bitmap context.
- bitsPerComponent = 8
- bytesPerRow = 4*width
- colorSpace = c_void_p(quartz.CGColorSpaceCreateDeviceRGB())
- bitmap = c_void_p(quartz.CGBitmapContextCreate(
- None,
- width,
- height,
- bitsPerComponent,
- bytesPerRow,
- colorSpace,
- cocoapy.kCGImageAlphaPremultipliedLast))
-
- # Draw text to bitmap context.
- quartz.CGContextSetShouldAntialias(bitmap, True)
- quartz.CGContextSetTextPosition(bitmap, -lsb, baseline)
- ct.CTLineDraw(line, bitmap)
- cf.CFRelease(line)
-
- # Create an image to get the data out.
- imageRef = c_void_p(quartz.CGBitmapContextCreateImage(bitmap))
-
- bytesPerRow = quartz.CGImageGetBytesPerRow(imageRef)
- dataProvider = c_void_p(quartz.CGImageGetDataProvider(imageRef))
- imageData = c_void_p(quartz.CGDataProviderCopyData(dataProvider))
- buffersize = cf.CFDataGetLength(imageData)
- buffer = (c_byte * buffersize)()
- byteRange = cocoapy.CFRange(0, buffersize)
- cf.CFDataGetBytes(imageData, byteRange, buffer)
-
- quartz.CGImageRelease(imageRef)
- quartz.CGDataProviderRelease(imageData)
- cf.CFRelease(bitmap)
- cf.CFRelease(colorSpace)
-
- glyph_image = pyglet.image.ImageData(width, height, 'RGBA', buffer, bytesPerRow)
-
- glyph = self.font.create_glyph(glyph_image)
- glyph.set_bearings(baseline, lsb, advance)
- t = list(glyph.tex_coords)
- glyph.tex_coords = t[9:12] + t[6:9] + t[3:6] + t[:3]
-
- return glyph
-
-
-class QuartzFont(base.Font):
- glyph_renderer_class = QuartzGlyphRenderer
- _loaded_CGFont_table = {}
-
- def _lookup_font_with_family_and_traits(self, family, traits):
- # This method searches the _loaded_CGFont_table to find a loaded
- # font of the given family with the desired traits. If it can't find
- # anything with the exact traits, it tries to fall back to whatever
- # we have loaded that's close. If it can't find anything in the
- # given family at all, it returns None.
-
- # Check if we've loaded the font with the specified family.
- if family not in self._loaded_CGFont_table:
- return None
- # Grab a dictionary of all fonts in the family, keyed by traits.
- fonts = self._loaded_CGFont_table[family]
- if not fonts:
- return None
- # Return font with desired traits if it is available.
- if traits in fonts:
- return fonts[traits]
- # Otherwise try to find a font with some of the traits.
- for (t, f) in fonts.items():
- if traits & t:
- return f
- # Otherwise try to return a regular font.
- if 0 in fonts:
- return fonts[0]
- # Otherwise return whatever we have.
- return list(fonts.values())[0]
-
- def _create_font_descriptor(self, family_name, traits):
- # Create an attribute dictionary.
- attributes = c_void_p(cf.CFDictionaryCreateMutable(None, 0, cf.kCFTypeDictionaryKeyCallBacks, cf.kCFTypeDictionaryValueCallBacks))
- # Add family name to attributes.
- cfname = cocoapy.CFSTR(family_name)
- cf.CFDictionaryAddValue(attributes, cocoapy.kCTFontFamilyNameAttribute, cfname)
- cf.CFRelease(cfname)
- # Construct a CFNumber to represent the traits.
- itraits = c_int32(traits)
- symTraits = c_void_p(cf.CFNumberCreate(None, cocoapy.kCFNumberSInt32Type, byref(itraits)))
- if symTraits:
- # Construct a dictionary to hold the traits values.
- traitsDict = c_void_p(cf.CFDictionaryCreateMutable(None, 0, cf.kCFTypeDictionaryKeyCallBacks, cf.kCFTypeDictionaryValueCallBacks))
- if traitsDict:
- # Add CFNumber traits to traits dictionary.
- cf.CFDictionaryAddValue(traitsDict, cocoapy.kCTFontSymbolicTrait, symTraits)
- # Add traits dictionary to attributes.
- cf.CFDictionaryAddValue(attributes, cocoapy.kCTFontTraitsAttribute, traitsDict)
- cf.CFRelease(traitsDict)
- cf.CFRelease(symTraits)
- # Create font descriptor with attributes.
- descriptor = c_void_p(ct.CTFontDescriptorCreateWithAttributes(attributes))
- cf.CFRelease(attributes)
- return descriptor
-
- def __init__(self, name, size, bold=False, italic=False, stretch=False, dpi=None):
-        # assert type(bold) is bool, "Only a boolean value is supported for bold in the current font renderer."
-        # assert type(italic) is bool, "Only a boolean value is supported for italic in the current font renderer."
-
- if stretch:
- warnings.warn("The current font render does not support stretching.")
-
- super().__init__()
-
- name = name or 'Helvetica'
-
- # I don't know what is the right thing to do here.
- dpi = dpi or 96
- size = size * dpi / 72.0
-
- # Construct traits value.
- traits = 0
- if bold:
- traits |= cocoapy.kCTFontBoldTrait
- if italic:
- traits |= cocoapy.kCTFontItalicTrait
-
- name = str(name)
- # First see if we can find an appropriate font from our table of loaded fonts.
- cgFont = self._lookup_font_with_family_and_traits(name, traits)
- if cgFont:
- # Use cgFont from table to create a CTFont object with the specified size.
- self.ctFont = c_void_p(ct.CTFontCreateWithGraphicsFont(cgFont, size, None, None))
- else:
- # Create a font descriptor for given name and traits and use it to create font.
- descriptor = self._create_font_descriptor(name, traits)
- self.ctFont = c_void_p(ct.CTFontCreateWithFontDescriptor(descriptor, size, None))
-
- cf.CFRelease(descriptor)
- assert self.ctFont, "Couldn't load font: " + name
-
- string = c_void_p(ct.CTFontCopyFamilyName(self.ctFont))
- self._family_name = str(cocoapy.cfstring_to_string(string))
- cf.CFRelease(string)
-
- self.ascent = int(math.ceil(ct.CTFontGetAscent(self.ctFont)))
- self.descent = -int(math.ceil(ct.CTFontGetDescent(self.ctFont)))
-
- @property
- def name(self):
- return self._family_name
-
- def __del__(self):
- cf.CFRelease(self.ctFont)
-
- @classmethod
- def have_font(cls, name):
- name = str(name)
- if name in cls._loaded_CGFont_table: return True
- # Try to create the font to see if it exists.
- # TODO: Find a better way to check.
- cfstring = cocoapy.CFSTR(name)
- cgfont = c_void_p(quartz.CGFontCreateWithFontName(cfstring))
- cf.CFRelease(cfstring)
- if cgfont:
- cf.CFRelease(cgfont)
- return True
- return False
-
- @classmethod
- def add_font_data(cls, data):
- # Create a cgFont with the data. There doesn't seem to be a way to
- # register a font loaded from memory such that the operating system will
- # find it later. So instead we just store the cgFont in a table where
- # it can be found by our __init__ method.
- # Note that the iOS CTFontManager *is* able to register graphics fonts,
- # however this method is missing from CTFontManager on MacOS 10.6
- dataRef = c_void_p(cf.CFDataCreate(None, data, len(data)))
- provider = c_void_p(quartz.CGDataProviderCreateWithCFData(dataRef))
- cgFont = c_void_p(quartz.CGFontCreateWithDataProvider(provider))
-
- cf.CFRelease(dataRef)
- quartz.CGDataProviderRelease(provider)
-
- # Create a template CTFont from the graphics font so that we can get font info.
- ctFont = c_void_p(ct.CTFontCreateWithGraphicsFont(cgFont, 1, None, None))
-
- # Get info about the font to use as key in our font table.
- string = c_void_p(ct.CTFontCopyFamilyName(ctFont))
- familyName = str(cocoapy.cfstring_to_string(string))
- cf.CFRelease(string)
-
- string = c_void_p(ct.CTFontCopyFullName(ctFont))
- fullName = str(cocoapy.cfstring_to_string(string))
- cf.CFRelease(string)
-
- traits = ct.CTFontGetSymbolicTraits(ctFont)
- cf.CFRelease(ctFont)
-
- # Store font in table. We store it under both its family name and its
-        # full name, since it's not always clear which one will be looked up.
- if familyName not in cls._loaded_CGFont_table:
- cls._loaded_CGFont_table[familyName] = {}
- cls._loaded_CGFont_table[familyName][traits] = cgFont
-
- if fullName not in cls._loaded_CGFont_table:
- cls._loaded_CGFont_table[fullName] = {}
- cls._loaded_CGFont_table[fullName][traits] = cgFont
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/win32/__init__.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/win32/__init__.py
deleted file mode 100644
index cde1490008e51bd51c5352228639c9ff92384274..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/win32/__init__.py
+++ /dev/null
@@ -1,116 +0,0 @@
-from typing import Dict, Optional
-
-import pyglet
-
-from pyglet.input import base
-from pyglet.input.win32.directinput import DirectInputDevice, _create_controller
-from pyglet.input.win32.directinput import _di_manager as _di_device_manager
-
-from pyglet.input.win32.directinput import get_devices as dinput_get_devices
-from pyglet.input.win32.directinput import get_controllers as dinput_get_controllers
-from pyglet.input.win32.directinput import get_joysticks
-
-try:
- from pyglet.input.win32.wintab import get_tablets
-except:
- def get_tablets(display=None):
- import warnings
- warnings.warn("Failed to initialize wintab framework.")
- return []
-
-
-_xinput_enabled = False
-if not pyglet.options["win32_disable_xinput"]:
- try:
- from pyglet.input.win32.xinput import XInputControllerManager, XInputController, XInputDevice
- from pyglet.input.win32.xinput import _device_manager as _xinput_device_manager
- from pyglet.input.win32.xinput import get_devices as xinput_get_devices
- from pyglet.input.win32.xinput import get_controllers as xinput_get_controllers
-
- _xinput_enabled = True
- except OSError:
- # Fail to import XInput.
- pass
-
-
-class Win32ControllerManager(base.ControllerManager):
- """This class manages XInput and DirectInput as a combined manager.
- XInput will override any XInput compatible DirectInput devices.
- Any devices not supported by XInput will fall back to DirectInput.
- """
-
- def __init__(self):
- self._di_controllers: Dict[DirectInputDevice, base.Controller] = {}
-
- if _xinput_enabled:
- self._xinput_controllers: Dict[XInputDevice, XInputController] = {}
-
- for xdevice in _xinput_device_manager.all_devices: # All 4 devices are initialized.
- meta = {'name': xdevice.name, 'guid': "XINPUTCONTROLLER"}
- self._xinput_controllers[xdevice] = XInputController(xdevice, meta)
-
- @_xinput_device_manager.event
- def on_connect(xdevice):
- self.dispatch_event('on_connect', self._xinput_controllers[xdevice])
-
- @_xinput_device_manager.event
- def on_disconnect(xdevice):
- self.dispatch_event('on_disconnect', self._xinput_controllers[xdevice])
-
- self._set_initial_didevices()
-
- @_di_device_manager.event
- def on_connect(di_device):
- if di_device not in self._di_controllers:
- if self._add_di_controller(di_device):
- pyglet.app.platform_event_loop.post_event(self, 'on_connect', self._di_controllers[di_device])
-
- @_di_device_manager.event
- def on_disconnect(di_device):
- if di_device in self._di_controllers:
- _controller = self._di_controllers[di_device]
- del self._di_controllers[di_device]
- pyglet.app.platform_event_loop.post_event(self, 'on_disconnect', _controller)
-
- def _set_initial_didevices(self):
- if not _di_device_manager.registered:
- _di_device_manager.register_device_events()
- _di_device_manager.set_current_devices()
-
- for device in _di_device_manager.devices:
- self._add_di_controller(device)
-
- def _add_di_controller(self, device: DirectInputDevice) -> Optional[base.Controller]:
- controller = _create_controller(device)
- if controller:
- self._di_controllers[device] = controller
- return controller
-
- return None
-
- def _get_xinput_controllers(self) -> list:
- if not _xinput_enabled:
- return []
- return [ctlr for ctlr in self._xinput_controllers.values() if ctlr.device.connected]
-
- def _get_di_controllers(self) -> list:
- return list(self._di_controllers.values())
-
- def get_controllers(self):
- return self._get_xinput_controllers() + self._get_di_controllers()
-
-
-# Fallback stubs: only define these when XInput failed to load, so they
-# don't shadow the real xinput_get_devices/xinput_get_controllers imported above.
-if not _xinput_enabled:
-    def xinput_get_devices():
-        return []
-
-    def xinput_get_controllers():
-        return []
-
-
-def get_devices(display=None):
- return xinput_get_devices() + dinput_get_devices(display)
-
-
-def get_controllers(display=None):
- return xinput_get_controllers() + dinput_get_controllers(display)
diff --git a/spaces/adyjay/andite-anything-v4.0/app.py b/spaces/adyjay/andite-anything-v4.0/app.py
deleted file mode 100644
index 47a2051db6dadeea03edf70d62694fd3e5e88ba7..0000000000000000000000000000000000000000
--- a/spaces/adyjay/andite-anything-v4.0/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/andite/anything-v4.0").launch()
\ No newline at end of file
diff --git a/spaces/akhaliq/China-Chic-illustration/README.md b/spaces/akhaliq/China-Chic-illustration/README.md
deleted file mode 100644
index 5ac10725abaa2a85f46bd5e52078e17b9435475a..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/China-Chic-illustration/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: China Chic Illustration
-emoji: 🚀
-colorFrom: yellow
-colorTo: gray
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/akhaliq/lama/bin/extract_masks.py b/spaces/akhaliq/lama/bin/extract_masks.py
deleted file mode 100644
index d114e0fe470595f1d2aaeeeb84b36352f65b121e..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/lama/bin/extract_masks.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import PIL.Image as Image
-import numpy as np
-import os
-
-
-def main(args):
- if not args.indir.endswith('/'):
- args.indir += '/'
- os.makedirs(args.outdir, exist_ok=True)
-
- src_images = [
- args.indir+fname for fname in os.listdir(args.indir)]
-
-    tgt_masks = [
-        os.path.join(args.outdir, fname[:-4] + '_mask000.png')
-        for fname in os.listdir(args.indir)]
-
- for img_name, msk_name in zip(src_images, tgt_masks):
- #print(img)
- #print(msk)
-
- image = Image.open(img_name).convert('RGB')
- image = np.transpose(np.array(image), (2, 0, 1))
-
- mask = (image == 255).astype(int)
-
- print(mask.dtype, mask.shape)
-
-
- Image.fromarray(
- np.clip(mask[0,:,:] * 255, 0, 255).astype('uint8'),mode='L'
- ).save(msk_name)
-
-
-
-
- '''
- for infile in src_images:
- try:
- file_relpath = infile[len(indir):]
- img_outpath = os.path.join(outdir, file_relpath)
- os.makedirs(os.path.dirname(img_outpath), exist_ok=True)
-
- image = Image.open(infile).convert('RGB')
-
- mask =
-
- Image.fromarray(
- np.clip(
- cur_mask * 255, 0, 255).astype('uint8'),
- mode='L'
- ).save(cur_basename + f'_mask{i:03d}.png')
- '''
-
-
-
-if __name__ == '__main__':
- import argparse
- aparser = argparse.ArgumentParser()
- aparser.add_argument('--indir', type=str, help='Path to folder with images')
- aparser.add_argument('--outdir', type=str, help='Path to folder to store aligned images and masks to')
-
- main(aparser.parse_args())
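The core of the deleted extract_masks.py is a simple thresholding step: transpose the image to CHW, mark pixels equal to 255, and save channel 0 as an 8-bit grayscale mask. A self-contained sketch with a synthetic image (the output file name is illustrative):

import numpy as np
from PIL import Image

rgb = np.zeros((4, 4, 3), np.uint8)
rgb[1:3, 1:3] = 255                    # a white square to be extracted
chw = np.transpose(rgb, (2, 0, 1))     # HWC -> CHW, matching the script
mask = (chw == 255).astype(np.uint8)   # 1 where the channel value is 255
Image.fromarray(mask[0] * 255, mode="L").save("demo_mask000.png")
print(mask[0])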
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/distlib/metadata.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/distlib/metadata.py
deleted file mode 100644
index 6a26b0ab232e6c474dc3309a1a64bfce790e98a6..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/distlib/metadata.py
+++ /dev/null
@@ -1,1058 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Copyright (C) 2012 The Python Software Foundation.
-# See LICENSE.txt and CONTRIBUTORS.txt.
-#
-"""Implementation of the Metadata for Python packages PEPs.
-
-Supports all metadata formats (1.0, 1.1, 1.2, 1.3/2.1 and withdrawn 2.0).
-"""
-from __future__ import unicode_literals
-
-import codecs
-from email import message_from_file
-import json
-import logging
-import re
-
-
-from . import DistlibException, __version__
-from .compat import StringIO, string_types, text_type
-from .markers import interpret
-from .util import extract_by_key, get_extras
-from .version import get_scheme, PEP440_VERSION_RE
-
-logger = logging.getLogger(__name__)
-
-
-class MetadataMissingError(DistlibException):
- """A required metadata is missing"""
-
-
-class MetadataConflictError(DistlibException):
- """Attempt to read or write metadata fields that are conflictual."""
-
-
-class MetadataUnrecognizedVersionError(DistlibException):
- """Unknown metadata version number."""
-
-
-class MetadataInvalidError(DistlibException):
- """A metadata value is invalid"""
-
-# public API of this module
-__all__ = ['Metadata', 'PKG_INFO_ENCODING', 'PKG_INFO_PREFERRED_VERSION']
-
-# Encoding used for the PKG-INFO files
-PKG_INFO_ENCODING = 'utf-8'
-
-# preferred version. Hopefully will be changed
-# to 1.2 once PEP 345 is supported everywhere
-PKG_INFO_PREFERRED_VERSION = '1.1'
-
-_LINE_PREFIX_1_2 = re.compile('\n \\|')
-_LINE_PREFIX_PRE_1_2 = re.compile('\n ')
-_241_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
- 'Summary', 'Description',
- 'Keywords', 'Home-page', 'Author', 'Author-email',
- 'License')
-
-_314_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
- 'Supported-Platform', 'Summary', 'Description',
- 'Keywords', 'Home-page', 'Author', 'Author-email',
- 'License', 'Classifier', 'Download-URL', 'Obsoletes',
- 'Provides', 'Requires')
-
-_314_MARKERS = ('Obsoletes', 'Provides', 'Requires', 'Classifier',
- 'Download-URL')
-
-_345_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
- 'Supported-Platform', 'Summary', 'Description',
- 'Keywords', 'Home-page', 'Author', 'Author-email',
- 'Maintainer', 'Maintainer-email', 'License',
- 'Classifier', 'Download-URL', 'Obsoletes-Dist',
- 'Project-URL', 'Provides-Dist', 'Requires-Dist',
- 'Requires-Python', 'Requires-External')
-
-_345_MARKERS = ('Provides-Dist', 'Requires-Dist', 'Requires-Python',
- 'Obsoletes-Dist', 'Requires-External', 'Maintainer',
- 'Maintainer-email', 'Project-URL')
-
-_426_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
- 'Supported-Platform', 'Summary', 'Description',
- 'Keywords', 'Home-page', 'Author', 'Author-email',
- 'Maintainer', 'Maintainer-email', 'License',
- 'Classifier', 'Download-URL', 'Obsoletes-Dist',
- 'Project-URL', 'Provides-Dist', 'Requires-Dist',
- 'Requires-Python', 'Requires-External', 'Private-Version',
- 'Obsoleted-By', 'Setup-Requires-Dist', 'Extension',
- 'Provides-Extra')
-
-_426_MARKERS = ('Private-Version', 'Provides-Extra', 'Obsoleted-By',
- 'Setup-Requires-Dist', 'Extension')
-
-# See issue #106: Sometimes 'Requires' and 'Provides' occur wrongly in
-# the metadata. Include them in the tuple literal below to allow them
-# (for now).
-# Ditto for Obsoletes - see issue #140.
-_566_FIELDS = _426_FIELDS + ('Description-Content-Type',
- 'Requires', 'Provides', 'Obsoletes')
-
-_566_MARKERS = ('Description-Content-Type',)
-
-_ALL_FIELDS = set()
-_ALL_FIELDS.update(_241_FIELDS)
-_ALL_FIELDS.update(_314_FIELDS)
-_ALL_FIELDS.update(_345_FIELDS)
-_ALL_FIELDS.update(_426_FIELDS)
-_ALL_FIELDS.update(_566_FIELDS)
-
-EXTRA_RE = re.compile(r'''extra\s*==\s*("([^"]+)"|'([^']+)')''')
-
-
-def _version2fieldlist(version):
- if version == '1.0':
- return _241_FIELDS
- elif version == '1.1':
- return _314_FIELDS
- elif version == '1.2':
- return _345_FIELDS
- elif version in ('1.3', '2.1'):
- # avoid adding field names if already there
- return _345_FIELDS + tuple(f for f in _566_FIELDS if f not in _345_FIELDS)
- elif version == '2.0':
- return _426_FIELDS
- raise MetadataUnrecognizedVersionError(version)
-
-
-def _best_version(fields):
- """Detect the best version depending on the fields used."""
- def _has_marker(keys, markers):
- for marker in markers:
- if marker in keys:
- return True
- return False
-
- keys = []
- for key, value in fields.items():
- if value in ([], 'UNKNOWN', None):
- continue
- keys.append(key)
-
- possible_versions = ['1.0', '1.1', '1.2', '1.3', '2.0', '2.1']
-
- # first let's try to see if a field is not part of one of the version
- for key in keys:
- if key not in _241_FIELDS and '1.0' in possible_versions:
- possible_versions.remove('1.0')
- logger.debug('Removed 1.0 due to %s', key)
- if key not in _314_FIELDS and '1.1' in possible_versions:
- possible_versions.remove('1.1')
- logger.debug('Removed 1.1 due to %s', key)
- if key not in _345_FIELDS and '1.2' in possible_versions:
- possible_versions.remove('1.2')
- logger.debug('Removed 1.2 due to %s', key)
- if key not in _566_FIELDS and '1.3' in possible_versions:
- possible_versions.remove('1.3')
- logger.debug('Removed 1.3 due to %s', key)
- if key not in _566_FIELDS and '2.1' in possible_versions:
- if key != 'Description': # In 2.1, description allowed after headers
- possible_versions.remove('2.1')
- logger.debug('Removed 2.1 due to %s', key)
- if key not in _426_FIELDS and '2.0' in possible_versions:
- possible_versions.remove('2.0')
- logger.debug('Removed 2.0 due to %s', key)
-
-    # possible_versions contains qualified versions
- if len(possible_versions) == 1:
- return possible_versions[0] # found !
- elif len(possible_versions) == 0:
- logger.debug('Out of options - unknown metadata set: %s', fields)
- raise MetadataConflictError('Unknown metadata set')
-
- # let's see if one unique marker is found
- is_1_1 = '1.1' in possible_versions and _has_marker(keys, _314_MARKERS)
- is_1_2 = '1.2' in possible_versions and _has_marker(keys, _345_MARKERS)
- is_2_1 = '2.1' in possible_versions and _has_marker(keys, _566_MARKERS)
- is_2_0 = '2.0' in possible_versions and _has_marker(keys, _426_MARKERS)
- if int(is_1_1) + int(is_1_2) + int(is_2_1) + int(is_2_0) > 1:
- raise MetadataConflictError('You used incompatible 1.1/1.2/2.0/2.1 fields')
-
- # we have the choice, 1.0, or 1.2, or 2.0
- # - 1.0 has a broken Summary field but works with all tools
- # - 1.1 is to avoid
- # - 1.2 fixes Summary but has little adoption
- # - 2.0 adds more features and is very new
- if not is_1_1 and not is_1_2 and not is_2_1 and not is_2_0:
- # we couldn't find any specific marker
- if PKG_INFO_PREFERRED_VERSION in possible_versions:
- return PKG_INFO_PREFERRED_VERSION
- if is_1_1:
- return '1.1'
- if is_1_2:
- return '1.2'
- if is_2_1:
- return '2.1'
-
- return '2.0'
-
-# This follows the rules about transforming keys as described in
-# https://www.python.org/dev/peps/pep-0566/#id17
-_ATTR2FIELD = {
- name.lower().replace("-", "_"): name for name in _ALL_FIELDS
-}
-_FIELD2ATTR = {field: attr for attr, field in _ATTR2FIELD.items()}
-
-_PREDICATE_FIELDS = ('Requires-Dist', 'Obsoletes-Dist', 'Provides-Dist')
-_VERSIONS_FIELDS = ('Requires-Python',)
-_VERSION_FIELDS = ('Version',)
-_LISTFIELDS = ('Platform', 'Classifier', 'Obsoletes',
- 'Requires', 'Provides', 'Obsoletes-Dist',
- 'Provides-Dist', 'Requires-Dist', 'Requires-External',
- 'Project-URL', 'Supported-Platform', 'Setup-Requires-Dist',
- 'Provides-Extra', 'Extension')
-_LISTTUPLEFIELDS = ('Project-URL',)
-
-_ELEMENTSFIELD = ('Keywords',)
-
-_UNICODEFIELDS = ('Author', 'Maintainer', 'Summary', 'Description')
-
-_MISSING = object()
-
-_FILESAFE = re.compile('[^A-Za-z0-9.]+')
-
-
-def _get_name_and_version(name, version, for_filename=False):
- """Return the distribution name with version.
-
- If for_filename is true, return a filename-escaped form."""
- if for_filename:
- # For both name and version any runs of non-alphanumeric or '.'
- # characters are replaced with a single '-'. Additionally any
- # spaces in the version string become '.'
- name = _FILESAFE.sub('-', name)
- version = _FILESAFE.sub('-', version.replace(' ', '.'))
- return '%s-%s' % (name, version)
-
-
-class LegacyMetadata(object):
- """The legacy metadata of a release.
-
- Supports versions 1.0, 1.1, 1.2, 2.0 and 1.3/2.1 (auto-detected). You can
- instantiate the class with one of these arguments (or none):
- - *path*, the path to a metadata file
- - *fileobj* give a file-like object with metadata as content
- - *mapping* is a dict-like object
- - *scheme* is a version scheme name
- """
- # TODO document the mapping API and UNKNOWN default key
-
- def __init__(self, path=None, fileobj=None, mapping=None,
- scheme='default'):
- if [path, fileobj, mapping].count(None) < 2:
- raise TypeError('path, fileobj and mapping are exclusive')
- self._fields = {}
- self.requires_files = []
- self._dependencies = None
- self.scheme = scheme
- if path is not None:
- self.read(path)
- elif fileobj is not None:
- self.read_file(fileobj)
- elif mapping is not None:
- self.update(mapping)
- self.set_metadata_version()
-
- def set_metadata_version(self):
- self._fields['Metadata-Version'] = _best_version(self._fields)
-
- def _write_field(self, fileobj, name, value):
- fileobj.write('%s: %s\n' % (name, value))
-
- def __getitem__(self, name):
- return self.get(name)
-
- def __setitem__(self, name, value):
- return self.set(name, value)
-
- def __delitem__(self, name):
- field_name = self._convert_name(name)
- try:
- del self._fields[field_name]
- except KeyError:
- raise KeyError(name)
-
- def __contains__(self, name):
- return (name in self._fields or
- self._convert_name(name) in self._fields)
-
- def _convert_name(self, name):
- if name in _ALL_FIELDS:
- return name
- name = name.replace('-', '_').lower()
- return _ATTR2FIELD.get(name, name)
-
- def _default_value(self, name):
- if name in _LISTFIELDS or name in _ELEMENTSFIELD:
- return []
- return 'UNKNOWN'
-
- def _remove_line_prefix(self, value):
- if self.metadata_version in ('1.0', '1.1'):
- return _LINE_PREFIX_PRE_1_2.sub('\n', value)
- else:
- return _LINE_PREFIX_1_2.sub('\n', value)
-
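-    # Attribute access falls through to item access, e.g. md.author is
-    # equivalent to md['Author'].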
- def __getattr__(self, name):
- if name in _ATTR2FIELD:
- return self[name]
- raise AttributeError(name)
-
- #
- # Public API
- #
-
-# dependencies = property(_get_dependencies, _set_dependencies)
-
- def get_fullname(self, filesafe=False):
- """Return the distribution name with version.
-
- If filesafe is true, return a filename-escaped form."""
- return _get_name_and_version(self['Name'], self['Version'], filesafe)
-
- def is_field(self, name):
- """return True if name is a valid metadata key"""
- name = self._convert_name(name)
- return name in _ALL_FIELDS
-
- def is_multi_field(self, name):
- name = self._convert_name(name)
- return name in _LISTFIELDS
-
- def read(self, filepath):
- """Read the metadata values from a file path."""
- fp = codecs.open(filepath, 'r', encoding='utf-8')
- try:
- self.read_file(fp)
- finally:
- fp.close()
-
- def read_file(self, fileob):
- """Read the metadata values from a file object."""
- msg = message_from_file(fileob)
- self._fields['Metadata-Version'] = msg['metadata-version']
-
- # When reading, get all the fields we can
- for field in _ALL_FIELDS:
- if field not in msg:
- continue
- if field in _LISTFIELDS:
- # we can have multiple lines
- values = msg.get_all(field)
- if field in _LISTTUPLEFIELDS and values is not None:
- values = [tuple(value.split(',')) for value in values]
- self.set(field, values)
- else:
- # single line
- value = msg[field]
- if value is not None and value != 'UNKNOWN':
- self.set(field, value)
-
- # PEP 566 specifies that the body be used for the description, if
- # available
- body = msg.get_payload()
- self["Description"] = body if body else self["Description"]
- # logger.debug('Attempting to set metadata for %s', self)
- # self.set_metadata_version()
-
- def write(self, filepath, skip_unknown=False):
- """Write the metadata fields to filepath."""
- fp = codecs.open(filepath, 'w', encoding='utf-8')
- try:
- self.write_file(fp, skip_unknown)
- finally:
- fp.close()
-
- def write_file(self, fileobject, skip_unknown=False):
- """Write the PKG-INFO format data to a file object."""
- self.set_metadata_version()
-
- for field in _version2fieldlist(self['Metadata-Version']):
- values = self.get(field)
- if skip_unknown and values in ('UNKNOWN', [], ['UNKNOWN']):
- continue
- if field in _ELEMENTSFIELD:
- self._write_field(fileobject, field, ','.join(values))
- continue
- if field not in _LISTFIELDS:
- if field == 'Description':
- if self.metadata_version in ('1.0', '1.1'):
- values = values.replace('\n', '\n ')
- else:
- values = values.replace('\n', '\n |')
- values = [values]
-
- if field in _LISTTUPLEFIELDS:
- values = [','.join(value) for value in values]
-
- for value in values:
- self._write_field(fileobject, field, value)
-
- def update(self, other=None, **kwargs):
- """Set metadata values from the given iterable `other` and kwargs.
-
- Behavior is like `dict.update`: If `other` has a ``keys`` method,
- they are looped over and ``self[key]`` is assigned ``other[key]``.
- Else, ``other`` is an iterable of ``(key, value)`` iterables.
-
- Keys that don't match a metadata field or that have an empty value are
- dropped.
- """
- def _set(key, value):
- if key in _ATTR2FIELD and value:
- self.set(self._convert_name(key), value)
-
- if not other:
- # other is None or empty container
- pass
- elif hasattr(other, 'keys'):
- for k in other.keys():
- _set(k, other[k])
- else:
- for k, v in other:
- _set(k, v)
-
- if kwargs:
- for k, v in kwargs.items():
- _set(k, v)
-
- def set(self, name, value):
- """Control then set a metadata field."""
- name = self._convert_name(name)
-
- if ((name in _ELEMENTSFIELD or name == 'Platform') and
- not isinstance(value, (list, tuple))):
- if isinstance(value, string_types):
- value = [v.strip() for v in value.split(',')]
- else:
- value = []
- elif (name in _LISTFIELDS and
- not isinstance(value, (list, tuple))):
- if isinstance(value, string_types):
- value = [value]
- else:
- value = []
-
- if logger.isEnabledFor(logging.WARNING):
- project_name = self['Name']
-
- scheme = get_scheme(self.scheme)
- if name in _PREDICATE_FIELDS and value is not None:
- for v in value:
- # check that the values are valid
- if not scheme.is_valid_matcher(v.split(';')[0]):
- logger.warning(
- "'%s': '%s' is not valid (field '%s')",
- project_name, v, name)
- # FIXME this rejects UNKNOWN, is that right?
- elif name in _VERSIONS_FIELDS and value is not None:
- if not scheme.is_valid_constraint_list(value):
- logger.warning("'%s': '%s' is not a valid version (field '%s')",
- project_name, value, name)
- elif name in _VERSION_FIELDS and value is not None:
- if not scheme.is_valid_version(value):
- logger.warning("'%s': '%s' is not a valid version (field '%s')",
- project_name, value, name)
-
- if name in _UNICODEFIELDS:
- if name == 'Description':
- value = self._remove_line_prefix(value)
-
- self._fields[name] = value
-
- def get(self, name, default=_MISSING):
- """Get a metadata field."""
- name = self._convert_name(name)
- if name not in self._fields:
- if default is _MISSING:
- default = self._default_value(name)
- return default
- if name in _UNICODEFIELDS:
- value = self._fields[name]
- return value
- elif name in _LISTFIELDS:
- value = self._fields[name]
- if value is None:
- return []
- res = []
- for val in value:
- if name not in _LISTTUPLEFIELDS:
- res.append(val)
- else:
- # That's for Project-URL
- res.append((val[0], val[1]))
- return res
-
- elif name in _ELEMENTSFIELD:
- value = self._fields[name]
- if isinstance(value, string_types):
- return value.split(',')
- return self._fields[name]
-
- def check(self, strict=False):
- """Check if the metadata is compliant. If strict is True then raise if
- no Name or Version are provided"""
- self.set_metadata_version()
-
- # XXX should check the versions (if the file was loaded)
- missing, warnings = [], []
-
- for attr in ('Name', 'Version'): # required by PEP 345
- if attr not in self:
- missing.append(attr)
-
-        if strict and missing:
- msg = 'missing required metadata: %s' % ', '.join(missing)
- raise MetadataMissingError(msg)
-
- for attr in ('Home-page', 'Author'):
- if attr not in self:
- missing.append(attr)
-
- # checking metadata 1.2 (XXX needs to check 1.1, 1.0)
- if self['Metadata-Version'] != '1.2':
- return missing, warnings
-
- scheme = get_scheme(self.scheme)
-
- def are_valid_constraints(value):
- for v in value:
- if not scheme.is_valid_matcher(v.split(';')[0]):
- return False
- return True
-
- for fields, controller in ((_PREDICATE_FIELDS, are_valid_constraints),
- (_VERSIONS_FIELDS,
- scheme.is_valid_constraint_list),
- (_VERSION_FIELDS,
- scheme.is_valid_version)):
- for field in fields:
- value = self.get(field, None)
- if value is not None and not controller(value):
- warnings.append("Wrong value for '%s': %s" % (field, value))
-
- return missing, warnings
-
- def todict(self, skip_missing=False):
- """Return fields as a dict.
-
- Field names will be converted to use the underscore-lowercase style
- instead of hyphen-mixed case (i.e. home_page instead of Home-page).
- This is as per https://www.python.org/dev/peps/pep-0566/#id17.
- """
- self.set_metadata_version()
-
- fields = _version2fieldlist(self['Metadata-Version'])
-
- data = {}
-
- for field_name in fields:
- if not skip_missing or field_name in self._fields:
- key = _FIELD2ATTR[field_name]
- if key != 'project_url':
- data[key] = self[field_name]
- else:
- data[key] = [','.join(u) for u in self[field_name]]
-
- return data
-
- def add_requirements(self, requirements):
- if self['Metadata-Version'] == '1.1':
- # we can't have 1.1 metadata *and* Setuptools requires
- for field in ('Obsoletes', 'Requires', 'Provides'):
- if field in self:
- del self[field]
- self['Requires-Dist'] += requirements
-
- # Mapping API
- # TODO could add iter* variants
-
- def keys(self):
- return list(_version2fieldlist(self['Metadata-Version']))
-
- def __iter__(self):
- for key in self.keys():
- yield key
-
- def values(self):
- return [self[key] for key in self.keys()]
-
- def items(self):
- return [(key, self[key]) for key in self.keys()]
-
- def __repr__(self):
- return '<%s %s %s>' % (self.__class__.__name__, self.name,
- self.version)
-
-
-METADATA_FILENAME = 'pydist.json'
-WHEEL_METADATA_FILENAME = 'metadata.json'
-LEGACY_METADATA_FILENAME = 'METADATA'
-
-
-class Metadata(object):
- """
- The metadata of a release. This implementation uses 2.0 (JSON)
- metadata where possible. If not possible, it wraps a LegacyMetadata
- instance which handles the key-value metadata format.
- """
-
- METADATA_VERSION_MATCHER = re.compile(r'^\d+(\.\d+)*$')
-
- NAME_MATCHER = re.compile('^[0-9A-Z]([0-9A-Z_.-]*[0-9A-Z])?$', re.I)
-
- VERSION_MATCHER = PEP440_VERSION_RE
-
- SUMMARY_MATCHER = re.compile('.{1,2047}')
-
- METADATA_VERSION = '2.0'
-
- GENERATOR = 'distlib (%s)' % __version__
-
- MANDATORY_KEYS = {
- 'name': (),
- 'version': (),
- 'summary': ('legacy',),
- }
-
- INDEX_KEYS = ('name version license summary description author '
- 'author_email keywords platform home_page classifiers '
- 'download_url')
-
- DEPENDENCY_KEYS = ('extras run_requires test_requires build_requires '
- 'dev_requires provides meta_requires obsoleted_by '
- 'supports_environments')
-
- SYNTAX_VALIDATORS = {
- 'metadata_version': (METADATA_VERSION_MATCHER, ()),
- 'name': (NAME_MATCHER, ('legacy',)),
- 'version': (VERSION_MATCHER, ('legacy',)),
- 'summary': (SUMMARY_MATCHER, ('legacy',)),
- }
-
- __slots__ = ('_legacy', '_data', 'scheme')
-
- def __init__(self, path=None, fileobj=None, mapping=None,
- scheme='default'):
- if [path, fileobj, mapping].count(None) < 2:
- raise TypeError('path, fileobj and mapping are exclusive')
- self._legacy = None
- self._data = None
- self.scheme = scheme
- if mapping is not None:
- try:
- self._validate_mapping(mapping, scheme)
- self._data = mapping
- except MetadataUnrecognizedVersionError:
- self._legacy = LegacyMetadata(mapping=mapping, scheme=scheme)
- self.validate()
- else:
- data = None
- if path:
- with open(path, 'rb') as f:
- data = f.read()
- elif fileobj:
- data = fileobj.read()
- if data is None:
- # Initialised with no args - to be added
- self._data = {
- 'metadata_version': self.METADATA_VERSION,
- 'generator': self.GENERATOR,
- }
- else:
- if not isinstance(data, text_type):
- data = data.decode('utf-8')
- try:
- self._data = json.loads(data)
- self._validate_mapping(self._data, scheme)
- except ValueError:
- # Note: MetadataUnrecognizedVersionError does not
- # inherit from ValueError (it's a DistlibException,
- # which should not inherit from ValueError).
- # The ValueError comes from the json.load - if that
- # succeeds and we get a validation error, we want
- # that to propagate
- self._legacy = LegacyMetadata(fileobj=StringIO(data),
- scheme=scheme)
- self.validate()
-
- common_keys = set(('name', 'version', 'license', 'keywords', 'summary'))
-
- none_list = (None, list)
- none_dict = (None, dict)
-
- mapped_keys = {
- 'run_requires': ('Requires-Dist', list),
- 'build_requires': ('Setup-Requires-Dist', list),
- 'dev_requires': none_list,
- 'test_requires': none_list,
- 'meta_requires': none_list,
- 'extras': ('Provides-Extra', list),
- 'modules': none_list,
- 'namespaces': none_list,
- 'exports': none_dict,
- 'commands': none_dict,
- 'classifiers': ('Classifier', list),
- 'source_url': ('Download-URL', None),
- 'metadata_version': ('Metadata-Version', None),
- }
-
- del none_list, none_dict
-
- def __getattribute__(self, key):
- common = object.__getattribute__(self, 'common_keys')
- mapped = object.__getattribute__(self, 'mapped_keys')
- if key in mapped:
- lk, maker = mapped[key]
- if self._legacy:
- if lk is None:
- result = None if maker is None else maker()
- else:
- result = self._legacy.get(lk)
- else:
- value = None if maker is None else maker()
- if key not in ('commands', 'exports', 'modules', 'namespaces',
- 'classifiers'):
- result = self._data.get(key, value)
- else:
- # special cases for PEP 459
- sentinel = object()
- result = sentinel
- d = self._data.get('extensions')
- if d:
- if key == 'commands':
- result = d.get('python.commands', value)
- elif key == 'classifiers':
- d = d.get('python.details')
- if d:
- result = d.get(key, value)
- else:
- d = d.get('python.exports')
- if not d:
- d = self._data.get('python.exports')
- if d:
- result = d.get(key, value)
- if result is sentinel:
- result = value
- elif key not in common:
- result = object.__getattribute__(self, key)
- elif self._legacy:
- result = self._legacy.get(key)
- else:
- result = self._data.get(key)
- return result
-
- def _validate_value(self, key, value, scheme=None):
- if key in self.SYNTAX_VALIDATORS:
- pattern, exclusions = self.SYNTAX_VALIDATORS[key]
- if (scheme or self.scheme) not in exclusions:
- m = pattern.match(value)
- if not m:
- raise MetadataInvalidError("'%s' is an invalid value for "
- "the '%s' property" % (value,
- key))
-
- def __setattr__(self, key, value):
- self._validate_value(key, value)
- common = object.__getattribute__(self, 'common_keys')
- mapped = object.__getattribute__(self, 'mapped_keys')
- if key in mapped:
- lk, _ = mapped[key]
- if self._legacy:
- if lk is None:
- raise NotImplementedError
- self._legacy[lk] = value
- elif key not in ('commands', 'exports', 'modules', 'namespaces',
- 'classifiers'):
- self._data[key] = value
- else:
- # special cases for PEP 459
- d = self._data.setdefault('extensions', {})
- if key == 'commands':
- d['python.commands'] = value
- elif key == 'classifiers':
- d = d.setdefault('python.details', {})
- d[key] = value
- else:
- d = d.setdefault('python.exports', {})
- d[key] = value
- elif key not in common:
- object.__setattr__(self, key, value)
- else:
- if key == 'keywords':
- if isinstance(value, string_types):
- value = value.strip()
- if value:
- value = value.split()
- else:
- value = []
- if self._legacy:
- self._legacy[key] = value
- else:
- self._data[key] = value
-
- @property
- def name_and_version(self):
- return _get_name_and_version(self.name, self.version, True)
-
- @property
- def provides(self):
- if self._legacy:
- result = self._legacy['Provides-Dist']
- else:
- result = self._data.setdefault('provides', [])
- s = '%s (%s)' % (self.name, self.version)
- if s not in result:
- result.append(s)
- return result
-
- @provides.setter
- def provides(self, value):
- if self._legacy:
- self._legacy['Provides-Dist'] = value
- else:
- self._data['provides'] = value
-
- def get_requirements(self, reqts, extras=None, env=None):
- """
- Base method to get dependencies, given a set of extras
- to satisfy and an optional environment context.
- :param reqts: A list of sometimes-wanted dependencies,
- perhaps dependent on extras and environment.
- :param extras: A list of optional components being requested.
- :param env: An optional environment for marker evaluation.
- """
- if self._legacy:
- result = reqts
- else:
- result = []
- extras = get_extras(extras or [], self.extras)
- for d in reqts:
- if 'extra' not in d and 'environment' not in d:
- # unconditional
- include = True
- else:
- if 'extra' not in d:
- # Not extra-dependent - only environment-dependent
- include = True
- else:
- include = d.get('extra') in extras
- if include:
- # Not excluded because of extras, check environment
- marker = d.get('environment')
- if marker:
- include = interpret(marker, env)
- if include:
- result.extend(d['requires'])
- for key in ('build', 'dev', 'test'):
- e = ':%s:' % key
- if e in extras:
- extras.remove(e)
- # A recursive call, but it should terminate since 'test'
- # has been removed from the extras
- reqts = self._data.get('%s_requires' % key, [])
- result.extend(self.get_requirements(reqts, extras=extras,
- env=env))
- return result
-
- @property
- def dictionary(self):
- if self._legacy:
- return self._from_legacy()
- return self._data
-
- @property
- def dependencies(self):
- if self._legacy:
- raise NotImplementedError
- else:
- return extract_by_key(self._data, self.DEPENDENCY_KEYS)
-
- @dependencies.setter
- def dependencies(self, value):
- if self._legacy:
- raise NotImplementedError
- else:
- self._data.update(value)
-
- def _validate_mapping(self, mapping, scheme):
- if mapping.get('metadata_version') != self.METADATA_VERSION:
- raise MetadataUnrecognizedVersionError()
- missing = []
- for key, exclusions in self.MANDATORY_KEYS.items():
- if key not in mapping:
- if scheme not in exclusions:
- missing.append(key)
- if missing:
- msg = 'Missing metadata items: %s' % ', '.join(missing)
- raise MetadataMissingError(msg)
- for k, v in mapping.items():
- self._validate_value(k, v, scheme)
-
- def validate(self):
- if self._legacy:
- missing, warnings = self._legacy.check(True)
- if missing or warnings:
- logger.warning('Metadata: missing: %s, warnings: %s',
- missing, warnings)
- else:
- self._validate_mapping(self._data, self.scheme)
-
- def todict(self):
- if self._legacy:
- return self._legacy.todict(True)
- else:
- result = extract_by_key(self._data, self.INDEX_KEYS)
- return result
-
- def _from_legacy(self):
- assert self._legacy and not self._data
- result = {
- 'metadata_version': self.METADATA_VERSION,
- 'generator': self.GENERATOR,
- }
- lmd = self._legacy.todict(True) # skip missing ones
- for k in ('name', 'version', 'license', 'summary', 'description',
- 'classifier'):
- if k in lmd:
- if k == 'classifier':
- nk = 'classifiers'
- else:
- nk = k
- result[nk] = lmd[k]
- kw = lmd.get('Keywords', [])
- if kw == ['']:
- kw = []
- result['keywords'] = kw
- keys = (('requires_dist', 'run_requires'),
- ('setup_requires_dist', 'build_requires'))
- for ok, nk in keys:
- if ok in lmd and lmd[ok]:
- result[nk] = [{'requires': lmd[ok]}]
- result['provides'] = self.provides
- return result
-
- LEGACY_MAPPING = {
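-        # Tuple keys are traversal paths into the nested 2.0 metadata
-        # (dict keys and list indices), resolved in _to_legacy() below.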
- 'name': 'Name',
- 'version': 'Version',
- ('extensions', 'python.details', 'license'): 'License',
- 'summary': 'Summary',
- 'description': 'Description',
- ('extensions', 'python.project', 'project_urls', 'Home'): 'Home-page',
- ('extensions', 'python.project', 'contacts', 0, 'name'): 'Author',
- ('extensions', 'python.project', 'contacts', 0, 'email'): 'Author-email',
- 'source_url': 'Download-URL',
- ('extensions', 'python.details', 'classifiers'): 'Classifier',
- }
-
- def _to_legacy(self):
- def process_entries(entries):
- reqts = set()
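-            # e.g. {'requires': ['spam'], 'extra': 'test',
-            #       'environment': 'os_name == "nt"'} is flattened to
-            #       'spam;(os_name == "nt") and extra == "test"'.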
- for e in entries:
- extra = e.get('extra')
- env = e.get('environment')
- rlist = e['requires']
- for r in rlist:
- if not env and not extra:
- reqts.add(r)
- else:
- marker = ''
- if extra:
- marker = 'extra == "%s"' % extra
- if env:
- if marker:
- marker = '(%s) and %s' % (env, marker)
- else:
- marker = env
- reqts.add(';'.join((r, marker)))
- return reqts
-
- assert self._data and not self._legacy
- result = LegacyMetadata()
- nmd = self._data
- for nk, ok in self.LEGACY_MAPPING.items():
- if not isinstance(nk, tuple):
- if nk in nmd:
- result[ok] = nmd[nk]
- else:
- d = nmd
- found = True
- for k in nk:
- try:
- d = d[k]
- except (KeyError, IndexError):
- found = False
- break
- if found:
- result[ok] = d
- r1 = process_entries(self.run_requires + self.meta_requires)
- r2 = process_entries(self.build_requires + self.dev_requires)
- if self.extras:
- result['Provides-Extra'] = sorted(self.extras)
- result['Requires-Dist'] = sorted(r1)
- result['Setup-Requires-Dist'] = sorted(r2)
- # TODO: any other fields wanted
- return result
-
- def write(self, path=None, fileobj=None, legacy=False, skip_unknown=True):
- if [path, fileobj].count(None) != 1:
- raise ValueError('Exactly one of path and fileobj is needed')
- self.validate()
- if legacy:
- if self._legacy:
- legacy_md = self._legacy
- else:
- legacy_md = self._to_legacy()
- if path:
- legacy_md.write(path, skip_unknown=skip_unknown)
- else:
- legacy_md.write_file(fileobj, skip_unknown=skip_unknown)
- else:
- if self._legacy:
- d = self._from_legacy()
- else:
- d = self._data
- if fileobj:
- json.dump(d, fileobj, ensure_ascii=True, indent=2,
- sort_keys=True)
- else:
- with codecs.open(path, 'w', 'utf-8') as f:
- json.dump(d, f, ensure_ascii=True, indent=2,
- sort_keys=True)
-
- def add_requirements(self, requirements):
- if self._legacy:
- self._legacy.add_requirements(requirements)
- else:
- run_requires = self._data.setdefault('run_requires', [])
- always = None
- for entry in run_requires:
- if 'environment' not in entry and 'extra' not in entry:
- always = entry
- break
- if always is None:
- always = { 'requires': requirements }
- run_requires.insert(0, always)
- else:
- rset = set(always['requires']) | set(requirements)
- always['requires'] = sorted(rset)
-
- def __repr__(self):
- name = self.name or '(no name)'
- version = self.version or 'no version'
- return '<%s %s %s (%s)>' % (self.__class__.__name__,
- self.metadata_version, name, version)
diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/c/src/com_portaudio_BlockingStream.c b/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/c/src/com_portaudio_BlockingStream.c
deleted file mode 100644
index 64d82134d9395c38138dc5e42b3535882b30b9fc..0000000000000000000000000000000000000000
--- a/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/c/src/com_portaudio_BlockingStream.c
+++ /dev/null
@@ -1,352 +0,0 @@
-/*
- * Portable Audio I/O Library
- * Java Binding for PortAudio
- *
- * Based on the Open Source API proposed by Ross Bencina
- * Copyright (c) 2008 Ross Bencina
- *
- * Permission is hereby granted, free of charge, to any person obtaining
- * a copy of this software and associated documentation files
- * (the "Software"), to deal in the Software without restriction,
- * including without limitation the rights to use, copy, modify, merge,
- * publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so,
- * subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
- * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
- * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
- * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-#include "com_portaudio_BlockingStream.h"
-#include "portaudio.h"
-#include "jpa_tools.h"
-
-#ifndef FALSE
-#define FALSE (0)
-#endif
-#ifndef TRUE
-#define TRUE (!FALSE)
-#endif
-
-/*
- * Class: com_portaudio_BlockingStream
- * Method: getReadAvailable
- * Signature: ()I
- */
-JNIEXPORT jint JNICALL Java_com_portaudio_BlockingStream_getReadAvailable
- (JNIEnv *env, jobject blockingStream)
-{
-    PaStream *stream = jpa_GetStreamPointer( env, blockingStream );
- if( stream == NULL ) return 0;
- return Pa_GetStreamReadAvailable( stream );
-}
-
-/*
- * Class: com_portaudio_BlockingStream
- * Method: getWriteAvailable
- * Signature: ()I
- */
-JNIEXPORT jint JNICALL Java_com_portaudio_BlockingStream_getWriteAvailable
- (JNIEnv *env, jobject blockingStream)
-{
-    PaStream *stream = jpa_GetStreamPointer( env, blockingStream );
- if( stream == NULL ) return 0;
- return Pa_GetStreamWriteAvailable( stream );
-}
-
-
-/*
- * Class: com_portaudio_BlockingStream
- * Method: writeFloats
- * Signature: ([FI)Z
- */
-JNIEXPORT jboolean JNICALL Java_com_portaudio_BlockingStream_writeFloats
- (JNIEnv *env, jobject blockingStream, jfloatArray buffer, jint numFrames)
-{
- jfloat *carr;
- jint err;
-    PaStream *stream = jpa_GetStreamPointer( env, blockingStream );
- if( buffer == NULL )
- {
- (*env)->ThrowNew( env, (*env)->FindClass(env,"java/lang/RuntimeException"),
- "null stream buffer");
- return FALSE;
- }
- carr = (*env)->GetFloatArrayElements(env, buffer, NULL);
- if (carr == NULL)
- {
- (*env)->ThrowNew( env, (*env)->FindClass(env,"java/lang/RuntimeException"),
- "invalid stream buffer");
- return FALSE;
- }
- err = Pa_WriteStream( stream, carr, numFrames );
- (*env)->ReleaseFloatArrayElements(env, buffer, carr, 0);
- if( err == paOutputUnderflowed )
- {
- return TRUE;
- }
- else
- {
- jpa_CheckError( env, err );
- return FALSE;
- }
-}
-
-/*
- * Class: com_portaudio_BlockingStream
- * Method: readFloats
- * Signature: ([FI)Z
- */
-JNIEXPORT jboolean JNICALL Java_com_portaudio_BlockingStream_readFloats
- (JNIEnv *env, jobject blockingStream, jfloatArray buffer, jint numFrames)
-{
- jfloat *carr;
- jint err;
-    PaStream *stream = jpa_GetStreamPointer( env, blockingStream );
- if( buffer == NULL )
- {
- (*env)->ThrowNew( env, (*env)->FindClass(env,"java/lang/RuntimeException"),
- "null stream buffer");
- return FALSE;
- }
- carr = (*env)->GetFloatArrayElements(env, buffer, NULL);
- if (carr == NULL)
- {
- (*env)->ThrowNew( env, (*env)->FindClass(env,"java/lang/RuntimeException"),
- "invalid stream buffer");
- return FALSE;
- }
- err = Pa_ReadStream( stream, carr, numFrames );
- (*env)->ReleaseFloatArrayElements(env, buffer, carr, 0);
- if( err == paInputOverflowed )
- {
- return TRUE;
- }
- else
- {
- jpa_CheckError( env, err );
- return FALSE;
- }
-}
-
-/*
- * Class: com_portaudio_BlockingStream
- * Method: writeShorts
- * Signature: ([SI)Z
- */
-JNIEXPORT jboolean JNICALL Java_com_portaudio_BlockingStream_writeShorts
-  (JNIEnv *env, jobject blockingStream, jshortArray buffer, jint numFrames)
-{
-    jshort *carr;
-    jint err;
-    PaStream *stream = jpa_GetStreamPointer( env, blockingStream );
- if( buffer == NULL )
- {
- (*env)->ThrowNew( env, (*env)->FindClass(env,"java/lang/RuntimeException"),
- "null stream buffer");
- return FALSE;
- }
- carr = (*env)->GetShortArrayElements(env, buffer, NULL);
- if (carr == NULL)
- {
- (*env)->ThrowNew( env, (*env)->FindClass(env,"java/lang/RuntimeException"),
- "invalid stream buffer");
- return FALSE;
- }
- err = Pa_WriteStream( stream, carr, numFrames );
- (*env)->ReleaseShortArrayElements(env, buffer, carr, 0);
- if( err == paOutputUnderflowed )
- {
- return TRUE;
- }
- else
- {
- jpa_CheckError( env, err );
- return FALSE;
- }
-}
-
-/*
- * Class: com_portaudio_BlockingStream
- * Method: readShorts
- * Signature: ([SI)Z
- */
-JNIEXPORT jboolean JNICALL Java_com_portaudio_BlockingStream_readShorts
-  (JNIEnv *env, jobject blockingStream, jshortArray buffer, jint numFrames)
-{
-    jshort *carr;
-    jint err;
-    PaStream *stream = jpa_GetStreamPointer( env, blockingStream );
- if( buffer == NULL )
- {
- (*env)->ThrowNew( env, (*env)->FindClass(env,"java/lang/RuntimeException"),
- "null stream buffer");
- return FALSE;
- }
- carr = (*env)->GetShortArrayElements(env, buffer, NULL);
- if (carr == NULL)
- {
- (*env)->ThrowNew( env, (*env)->FindClass(env,"java/lang/RuntimeException"),
- "invalid stream buffer");
- return FALSE;
- }
- err = Pa_ReadStream( stream, carr, numFrames );
- (*env)->ReleaseShortArrayElements(env, buffer, carr, 0);
- if( err == paInputOverflowed )
- {
- return TRUE;
- }
- else
- {
- jpa_CheckError( env, err );
- return FALSE;
- }
-}
-
-/*
- * Class: com_portaudio_BlockingStream
- * Method: start
- * Signature: ()V
- */
-JNIEXPORT void JNICALL Java_com_portaudio_BlockingStream_start
- (JNIEnv *env, jobject blockingStream )
-{
- PaStream *stream = jpa_GetStreamPointer( env, blockingStream );
- int err = Pa_StartStream( stream );
- jpa_CheckError( env, err );
-}
-
-/*
- * Class: com_portaudio_BlockingStream
- * Method: stop
- * Signature: ()V
- */
-JNIEXPORT void JNICALL Java_com_portaudio_BlockingStream_stop
- (JNIEnv *env, jobject blockingStream )
-{
-    PaStream *stream = jpa_GetStreamPointer( env, blockingStream );
- int err = Pa_StopStream( stream );
- jpa_CheckError( env, err );
-}
-/*
- * Class: com_portaudio_BlockingStream
- * Method: abort
- * Signature: ()V
- */
-JNIEXPORT void JNICALL Java_com_portaudio_BlockingStream_abort
- (JNIEnv *env, jobject blockingStream )
-{
-    PaStream *stream = jpa_GetStreamPointer( env, blockingStream );
- int err = Pa_AbortStream( stream );
- jpa_CheckError( env, err );
-}
-
-/*
- * Class: com_portaudio_BlockingStream
- * Method: close
- * Signature: ()V
- */
-JNIEXPORT void JNICALL Java_com_portaudio_BlockingStream_close
- (JNIEnv *env, jobject blockingStream )
-{
- jclass cls;
-    PaStream *stream = jpa_GetStreamPointer( env, blockingStream );
- if( stream != NULL )
- {
- int err = Pa_CloseStream( stream );
- jpa_CheckError( env, err );
- cls = (*env)->GetObjectClass(env, blockingStream);
- jpa_SetLongField( env, cls, blockingStream, "nativeStream", (jlong) 0 );
- }
-}
-
-/*
- * Class: com_portaudio_BlockingStream
- * Method: isStopped
- * Signature: ()V
- */
-JNIEXPORT jboolean JNICALL Java_com_portaudio_BlockingStream_isStopped
- (JNIEnv *env, jobject blockingStream )
-{
- int err;
-    PaStream *stream = jpa_GetStreamPointer( env, blockingStream );
- if( stream == NULL ) return 1;
- err = Pa_IsStreamStopped( stream );
- return (jpa_CheckError( env, err ) > 0);
-}
-/*
- * Class: com_portaudio_BlockingStream
- * Method: isActive
- * Signature: ()V
- */
-JNIEXPORT jboolean JNICALL Java_com_portaudio_BlockingStream_isActive
- (JNIEnv *env, jobject blockingStream )
-{
- int err;
-    PaStream *stream = jpa_GetStreamPointer( env, blockingStream );
- if( stream == NULL ) return 0;
- err = Pa_IsStreamActive( stream );
- return (jpa_CheckError( env, err ) > 0);
-}
-
-
-/*
- * Class: com_portaudio_BlockingStream
- * Method: getTime
- * Signature: ()D
- */
-JNIEXPORT jdouble JNICALL Java_com_portaudio_BlockingStream_getTime
- (JNIEnv *env, jobject blockingStream )
-{
-    PaStream *stream = jpa_GetStreamPointer( env, blockingStream );
- if( stream == NULL ) return 0.0;
- return Pa_GetStreamTime( stream );
-}
-
-
-/*
- * Class: com_portaudio_BlockingStream
- * Method: getInfo
- * Signature: ()Lcom/portaudio/StreamInfo;
- */
-JNIEXPORT void JNICALL Java_com_portaudio_BlockingStream_getInfo
- (JNIEnv *env, jobject blockingStream, jobject streamInfo)
-{
-
-    PaStream *stream = jpa_GetStreamPointer( env, blockingStream );
- const PaStreamInfo *info = Pa_GetStreamInfo( stream );
- if( streamInfo == NULL )
- {
- jpa_ThrowError( env, "Invalid stream." );
- }
- else
- {
- /* Get a reference to obj's class */
- jclass cls = (*env)->GetObjectClass(env, streamInfo);
-
- jpa_SetIntField( env, cls, streamInfo, "structVersion", info->structVersion );
- jpa_SetDoubleField( env, cls, streamInfo, "inputLatency", info->inputLatency );
- jpa_SetDoubleField( env, cls, streamInfo, "outputLatency", info->outputLatency );
- jpa_SetDoubleField( env, cls, streamInfo, "sampleRate", info->sampleRate );
- }
-}
-
diff --git a/spaces/amoldwalunj/resume_matching_app/README.md b/spaces/amoldwalunj/resume_matching_app/README.md
deleted file mode 100644
index 612845845137492bdfb8d930dd5b01446d57e905..0000000000000000000000000000000000000000
--- a/spaces/amoldwalunj/resume_matching_app/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Resume Matching App
-emoji: 🔥
-colorFrom: blue
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/training.py b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/training.py
deleted file mode 100644
index 82e42f4d2928197564c0efd371ca4c3aaaae4e15..0000000000000000000000000000000000000000
--- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/training.py
+++ /dev/null
@@ -1,495 +0,0 @@
-import json
-import logging
-import math
-import sys
-import threading
-import time
-import traceback
-from pathlib import Path
-
-import gradio as gr
-import torch
-import transformers
-from datasets import Dataset, load_dataset
-from peft import (LoraConfig, get_peft_model, prepare_model_for_int8_training,
- set_peft_model_state_dict)
-
-from modules import shared, ui
-from modules.evaluate import calculate_perplexity, generate_markdown_table, save_past_evaluations
-from server import get_available_loras, get_available_models
-
-# This mapping is from a very recent commit, not yet released.
-# If not available, default to a backup map for some common model types.
-try:
- from peft.utils.other import \
- TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING as \
- model_to_lora_modules
- from transformers.models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES
-    MODEL_CLASSES = {v: k for k, v in MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.items()}
-except ImportError:
- standard_modules = ["q_proj", "v_proj"]
- model_to_lora_modules = {"llama": standard_modules, "opt": standard_modules, "gptj": standard_modules, "gpt_neox": ["query_key_value"]}
- MODEL_CLASSES = {
- "LlamaForCausalLM": "llama",
- "OPTForCausalLM": "opt",
- "GPTJForCausalLM": "gptj",
- "GPTNeoXForCausalLM": "gpt_neox"
- }
-
-WANT_INTERRUPT = False
-
-PARAMETERS = ["lora_name", "always_override", "save_steps", "micro_batch_size", "batch_size", "epochs", "learning_rate", "lr_scheduler_type", "lora_rank", "lora_alpha", "lora_dropout", "cutoff_len", "dataset", "eval_dataset", "format", "eval_steps", "raw_text_file", "overlap_len", "newline_favor_len", "higher_rank_limit", "warmup_steps", "optimizer"]
-
-
-def get_datasets(path: str, ext: str):
- return ['None'] + sorted(set([k.stem for k in Path(path).glob(f'*.{ext}') if k.stem != 'put-trainer-datasets-here']), key=str.lower)
-
-
-def create_train_interface():
- with gr.Tab('Train LoRA', elem_id='lora-train-tab'):
- gr.Markdown("Confused? [[Click here for a guide]](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Training-LoRAs.md)")
-
- with gr.Row():
- lora_name = gr.Textbox(label='Name', info='The name of your new LoRA file')
- always_override = gr.Checkbox(label='Override Existing Files', value=False, info='If the name given is the same as an existing file, checking this will replace that file. Leaving unchecked will load that file and continue from it (must use the same rank value as the original had).')
- save_steps = gr.Number(label='Save every n steps', value=0, info='If above 0, a checkpoint of the LoRA will be saved every time this many steps pass.')
-
- with gr.Row():
- copy_from = gr.Dropdown(label='Copy parameters from', value='None', choices=get_available_loras())
- ui.create_refresh_button(copy_from, lambda: None, lambda: {'choices': get_available_loras()}, 'refresh-button')
-
- with gr.Row():
- # TODO: Implement multi-device support.
- micro_batch_size = gr.Slider(label='Micro Batch Size', value=4, minimum=1, maximum=128, step=1, info='Per-device batch size (NOTE: multiple devices not yet implemented). Increasing this will increase VRAM usage.')
- batch_size = gr.Slider(label='Batch Size', value=128, minimum=0, maximum=1024, step=4, info='Global batch size. The two batch sizes together determine gradient accumulation (gradientAccum = batch / microBatch). Higher gradient accum values lead to better quality training.')
-
- with gr.Row():
- epochs = gr.Number(label='Epochs', value=3, info='Number of times every entry in the dataset should be fed into training. So 1 means feed each item in once, 5 means feed it in five times, etc.')
- learning_rate = gr.Textbox(label='Learning Rate', value='3e-4', info='Learning rate, in scientific notation. 3e-4 is a good starting base point. 1e-2 is extremely high, 1e-6 is extremely low.')
- lr_scheduler_type = gr.Dropdown(label='LR Scheduler', value='linear', choices=['linear', 'constant', 'constant_with_warmup', 'cosine', 'cosine_with_restarts', 'polynomial', 'inverse_sqrt'], info='Learning rate scheduler - defines how the learning rate changes over time. "Constant" means never change, "linear" means to go in a straight line from the learning rate down to 0, cosine follows a curve, etc.')
-
- # TODO: What is the actual maximum rank? Likely distinct per model. This might be better to somehow be on a log scale.
- lora_rank = gr.Slider(label='LoRA Rank', value=32, minimum=0, maximum=1024, step=4, info='LoRA Rank, or dimension count. Higher values produce a larger file with better control over the model\'s content. Smaller values produce a smaller file with less overall control. Small values like 4 or 8 are great for stylistic guidance, higher values like 128 or 256 are good for teaching content upgrades, extremely high values (1024+) are difficult to train but may improve fine-detail learning for large datasets. Higher ranks also require higher VRAM.')
- lora_alpha = gr.Slider(label='LoRA Alpha', value=64, minimum=0, maximum=2048, step=4, info='LoRA Alpha. This divided by the rank becomes the scaling of the LoRA. Higher means stronger. A good standard value is twice your Rank.')
-
- cutoff_len = gr.Slider(label='Cutoff Length', minimum=0, maximum=2048, value=256, step=32, info='Cutoff length for text input. Essentially, how long of a line of text to feed in at a time. Higher values require drastically more VRAM.')
-
- with gr.Tab(label='Formatted Dataset'):
- with gr.Row():
- dataset = gr.Dropdown(choices=get_datasets('training/datasets', 'json'), value='None', label='Dataset', info='The dataset file to use for training.')
- ui.create_refresh_button(dataset, lambda: None, lambda: {'choices': get_datasets('training/datasets', 'json')}, 'refresh-button')
- eval_dataset = gr.Dropdown(choices=get_datasets('training/datasets', 'json'), value='None', label='Evaluation Dataset', info='The (optional) dataset file used to evaluate the model after training.')
- ui.create_refresh_button(eval_dataset, lambda: None, lambda: {'choices': get_datasets('training/datasets', 'json')}, 'refresh-button')
- format = gr.Dropdown(choices=get_datasets('training/formats', 'json'), value='None', label='Data Format', info='The format file used to decide how to format the dataset input.')
- ui.create_refresh_button(format, lambda: None, lambda: {'choices': get_datasets('training/formats', 'json')}, 'refresh-button')
-
- eval_steps = gr.Number(label='Evaluate every n steps', value=100, info='If an evaluation dataset is given, test it every time this many steps pass.')
-
- with gr.Tab(label="Raw text file"):
- with gr.Row():
- raw_text_file = gr.Dropdown(choices=get_datasets('training/datasets', 'txt'), value='None', label='Text file', info='The raw text file to use for training.')
- ui.create_refresh_button(raw_text_file, lambda: None, lambda: {'choices': get_datasets('training/datasets', 'txt')}, 'refresh-button')
-
- with gr.Row():
- overlap_len = gr.Slider(label='Overlap Length', minimum=0, maximum=512, value=128, step=16, info='Overlap length - ie how many tokens from the prior chunk of text to include into the next chunk. (The chunks themselves will be of a size determined by Cutoff Length below). Setting overlap to exactly half the cutoff length may be ideal.')
- newline_favor_len = gr.Slider(label='Prefer Newline Cut Length', minimum=0, maximum=512, value=128, step=16, info='Length (in characters, not tokens) of the maximum distance to shift an overlap cut by to ensure chunks cut at newlines. If too low, cuts may occur in the middle of lines.')
-
- with gr.Accordion(label='Advanced Options', open=False):
- lora_dropout = gr.Slider(label='LoRA Dropout', minimum=0.0, maximum=1.0, step=0.025, value=0.05, info='Percentage probability for dropout of LoRA layers. This can help reduce overfitting. Most users should leave at default.')
- warmup_steps = gr.Number(label='Warmup Steps', value=100, info='For this many steps at the start, the learning rate will be lower than normal. This helps the trainer prepare the model and precompute statistics to improve the quality of training after the start.')
- optimizer = gr.Dropdown(label='Optimizer', value='adamw_torch', choices=['adamw_hf', 'adamw_torch', 'adamw_torch_fused', 'adamw_torch_xla', 'adamw_apex_fused', 'adafactor', 'adamw_bnb_8bit', 'adamw_anyprecision', 'sgd', 'adagrad'], info='Different optimizer implementation options, for advanced users. Effects of different options are not well documented yet.')
-
- with gr.Row():
- higher_rank_limit = gr.Checkbox(label='Enable higher ranks', value=False, info='If checked, changes Rank/Alpha slider above to go much higher. This will not work without a datacenter-class GPU.')
-
- with gr.Row():
- start_button = gr.Button("Start LoRA Training")
- stop_button = gr.Button("Interrupt")
-
- output = gr.Markdown(value="Ready")
-
- with gr.Tab('Perplexity evaluation', elem_id='evaluate-tab'):
- with gr.Row():
- with gr.Column():
- models = gr.Dropdown(get_available_models(), label='Models', multiselect=True)
- evaluate_text_file = gr.Dropdown(choices=['wikitext', 'ptb', 'ptb_new'] + get_datasets('training/datasets', 'txt')[1:], value='wikitext', label='Input dataset', info='The raw text file on which the model will be evaluated. The first options are automatically downloaded: wikitext, ptb, and ptb_new. The next options are your local text files under training/datasets.')
- with gr.Row():
- stride_length = gr.Slider(label='Stride', minimum=1, maximum=2048, value=512, step=1, info='Used to make the evaluation faster at the cost of accuracy. 1 = slowest but most accurate. 512 is a common value.')
- max_length = gr.Slider(label='max_length', minimum=0, maximum=8096, value=0, step=1, info='The context for each evaluation. If set to 0, the maximum context length for the model will be used.')
-
- with gr.Row():
- start_current_evaluation = gr.Button("Evaluate loaded model")
- start_evaluation = gr.Button("Evaluate selected models")
- stop_evaluation = gr.Button("Interrupt")
-
- with gr.Column():
- evaluation_log = gr.Markdown(value='')
-
- evaluation_table = gr.Dataframe(value=generate_markdown_table(), interactive=True)
- save_comments = gr.Button('Save comments')
-
- # Training events
- all_params = [lora_name, always_override, save_steps, micro_batch_size, batch_size, epochs, learning_rate, lr_scheduler_type, lora_rank, lora_alpha, lora_dropout, cutoff_len, dataset, eval_dataset, format, eval_steps, raw_text_file, overlap_len, newline_favor_len, higher_rank_limit, warmup_steps, optimizer]
- copy_from.change(do_copy_params, [copy_from] + all_params, all_params)
- start_button.click(do_train, all_params, output)
- stop_button.click(do_interrupt, None, None, queue=False)
- higher_rank_limit.change(change_rank_limit, [higher_rank_limit], [lora_rank, lora_alpha])
-
- # Evaluation events. For some reason, the interrupt event
- # doesn't work with the .then() syntax, so I write them one
- # by one in this ugly but functional way.
- ev = start_evaluation.click(calculate_perplexity, [models, evaluate_text_file, stride_length, max_length], evaluation_log, show_progress=False)
- start_evaluation.click(generate_markdown_table, None, evaluation_table, show_progress=False)
-
- tmp = gr.State('')
- start_current_evaluation.click(lambda: ['current model'], None, tmp)
- ev_cur = start_current_evaluation.click(calculate_perplexity, [tmp, evaluate_text_file, stride_length, max_length], evaluation_log, show_progress=False)
- start_current_evaluation.click(generate_markdown_table, None, evaluation_table, show_progress=False)
-
- stop_evaluation.click(None, None, None, cancels=[ev, ev_cur], queue=False)
- save_comments.click(
- save_past_evaluations, evaluation_table, None).then(
- lambda: "Comments saved.", None, evaluation_log, show_progress=False)
-
-
-def do_interrupt():
- global WANT_INTERRUPT
- WANT_INTERRUPT = True
-
-
-def do_copy_params(lora_name: str, *args):
- f_name = f"{shared.args.lora_dir}/{clean_path(None, lora_name)}/training_parameters.json"
- if Path(f_name).is_file():
- with open(f_name, 'r', encoding='utf-8') as format_file:
- params: dict[str, str] = json.load(format_file)
- else:
- params = {}
-
-    result = []
-    for key, arg in zip(PARAMETERS, args):
-        result.append(params.get(key, arg))
-
- return result
-
-
-def change_rank_limit(use_higher_ranks: bool):
- mult = 2 if use_higher_ranks else 1
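-    # Returning {"__type__": "update", ...} dicts adjusts the maxima of the
-    # Rank and Alpha sliders in place (in that order).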
- return {"maximum": 1024 * mult, "__type__": "update"}, {"maximum": 2048 * mult, "__type__": "update"}
-
-
-def clean_path(base_path: str, path: str):
- """"Strips unusual symbols and forcibly builds a path as relative to the intended directory."""
- # TODO: Probably could do with a security audit to guarantee there's no ways this can be bypassed to target an unwanted path.
- # Or swap it to a strict whitelist of [a-zA-Z_0-9]
- path = path.replace('\\', '/').replace('..', '_')
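-    # e.g. '..\\evil' becomes '_/evil', so the result cannot climb out of
-    # base_path.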
- if base_path is None:
- return path
-
- return f'{Path(base_path).absolute()}/{path}'
-
-
-def do_train(lora_name: str, always_override: bool, save_steps: int, micro_batch_size: int, batch_size: int, epochs: int, learning_rate: str, lr_scheduler_type: str, lora_rank: int, lora_alpha: int, lora_dropout: float, cutoff_len: int, dataset: str, eval_dataset: str, format: str, eval_steps: int, raw_text_file: str, overlap_len: int, newline_favor_len: int, higher_rank_limit: bool, warmup_steps: int, optimizer: str):
-
- if shared.args.monkey_patch:
- from monkeypatch.peft_tuners_lora_monkey_patch import \
- replace_peft_model_with_gptq_lora_model
- replace_peft_model_with_gptq_lora_model()
-
- global WANT_INTERRUPT
- WANT_INTERRUPT = False
-
- # == Input validation / processing ==
- yield "Prepping..."
- lora_file_path = clean_path(None, lora_name)
- if lora_file_path.strip() == '':
- yield "Missing or invalid LoRA file name input."
- return
-
- lora_file_path = f"{shared.args.lora_dir}/{lora_file_path}"
- actual_lr = float(learning_rate)
- model_type = type(shared.model).__name__
-
- if model_type in MODEL_CLASSES:
- model_id = MODEL_CLASSES[model_type]
- else:
- model_id = "llama"
- if model_type == "PeftModelForCausalLM":
- if len(shared.args.lora_names) > 0:
- yield "You are trying to train a LoRA while you already have another LoRA loaded. This will work, but may have unexpected effects. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*"
- logging.warning("Training LoRA over top of another LoRA. May have unexpected effects.")
- else:
- yield "Model ID not matched due to LoRA loading. Consider reloading base model. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*"
- logging.warning("Model ID not matched due to LoRA loading. Consider reloading base model.")
- else:
- yield "LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. Unexpected errors may follow. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*"
- logging.warning(f"LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. (Found model type: {model_type})")
-
- time.sleep(5)
-
- if shared.args.wbits > 0 and not shared.args.monkey_patch:
- yield "LoRA training in 4-bit requires loading with `--monkey-patch`"
- return
-
- elif not shared.args.load_in_8bit and shared.args.wbits <= 0:
- yield "It is highly recommended you use `--load-in-8bit` for LoRA training. *(Will continue anyway in 2 seconds, press `Interrupt` to stop.)*"
- logging.warning("It is highly recommended you use `--load-in-8bit` for LoRA training.")
- time.sleep(2) # Give it a moment for the message to show in UI before continuing
-
-    if cutoff_len <= 0 or micro_batch_size <= 0 or batch_size <= 0 or actual_lr <= 0 or lora_rank <= 0 or lora_alpha <= 0:
-        yield "Numeric parameters (cutoff length, batch sizes, learning rate, rank, and alpha) must all be greater than zero."
- return
-
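-    # Each optimizer step consumes micro_batch_size * gradient_accumulation_steps examples in total.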
- gradient_accumulation_steps = batch_size // micro_batch_size
- shared.tokenizer.pad_token_id = 0
- shared.tokenizer.padding_side = "left"
-
- def tokenize(prompt):
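-        # Tokenize to cutoff_len + 1 tokens, then drop the final token so every example is exactly cutoff_len long.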
- result = shared.tokenizer(prompt, truncation=True, max_length=cutoff_len + 1, padding="max_length")
- return {
- "input_ids": result["input_ids"][:-1],
- "attention_mask": result["attention_mask"][:-1],
- }
-
- # == Prep the dataset, format, etc ==
- if raw_text_file not in ['None', '']:
- logging.info("Loading raw text file dataset...")
- with open(clean_path('training/datasets', f'{raw_text_file}.txt'), 'r', encoding='utf-8') as file:
- raw_text = file.read()
-
- tokens = shared.tokenizer.encode(raw_text)
- del raw_text # Note: could be a gig for a large dataset, so delete redundant data as we go to be safe on RAM
- tokens = list(split_chunks(tokens, cutoff_len - overlap_len))
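-        # Prepend the tail of the previous chunk so consecutive chunks share overlap_len tokens of context.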
- for i in range(1, len(tokens)):
- tokens[i] = tokens[i - 1][-overlap_len:] + tokens[i]
-
- text_chunks = [shared.tokenizer.decode(x) for x in tokens]
- del tokens
- if newline_favor_len > 0:
- text_chunks = [cut_chunk_for_newline(x, newline_favor_len) for x in text_chunks]
-
- train_data = Dataset.from_list([tokenize(x) for x in text_chunks])
- del text_chunks
- eval_data = None
-
- else:
- if dataset in ['None', '']:
- yield "**Missing dataset choice input, cannot continue.**"
- return
-
- if format in ['None', '']:
- yield "**Missing format choice input, cannot continue.**"
- return
-
-        with open(clean_path('training/formats', f'{format}.json'), 'r', encoding='utf-8') as format_file:
-            format_data: dict[str, str] = json.load(format_file)
-
- def generate_prompt(data_point: dict[str, str]):
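-            # Pick the template whose comma-separated keyset exactly matches the set of non-empty fields in this data point.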
- for options, data in format_data.items():
- if set(options.split(',')) == set(x[0] for x in data_point.items() if (x[1] is not None and len(x[1].strip()) > 0)):
- for key, val in data_point.items():
- if val is not None:
- data = data.replace(f'%{key}%', val)
- return data
- raise RuntimeError(f'Data-point "{data_point}" has no keyset match within format "{list(format_data.keys())}"')
-
- def generate_and_tokenize_prompt(data_point):
- prompt = generate_prompt(data_point)
- return tokenize(prompt)
-
- logging.info("Loading JSON datasets...")
- data = load_dataset("json", data_files=clean_path('training/datasets', f'{dataset}.json'))
- train_data = data['train'].map(generate_and_tokenize_prompt)
-
- if eval_dataset == 'None':
- eval_data = None
- else:
- eval_data = load_dataset("json", data_files=clean_path('training/datasets', f'{eval_dataset}.json'))
- eval_data = eval_data['train'].map(generate_and_tokenize_prompt)
-
- # == Start prepping the model itself ==
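-    # Run int8 prep unless the model has an lm_head with no weight; the short-circuit keeps the attribute check safe.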
- if not hasattr(shared.model, 'lm_head') or hasattr(shared.model.lm_head, 'weight'):
- logging.info("Getting model ready...")
- prepare_model_for_int8_training(shared.model)
-
- logging.info("Prepping for training...")
- config = LoraConfig(
- r=lora_rank,
- lora_alpha=lora_alpha,
- target_modules=model_to_lora_modules[model_id],
- lora_dropout=lora_dropout,
- bias="none",
- task_type="CAUSAL_LM"
- )
-
- try:
- logging.info("Creating LoRA model...")
- lora_model = get_peft_model(shared.model, config)
- if not always_override and Path(f"{lora_file_path}/adapter_model.bin").is_file():
- logging.info("Loading existing LoRA data...")
- state_dict_peft = torch.load(f"{lora_file_path}/adapter_model.bin")
- set_peft_model_state_dict(lora_model, state_dict_peft)
-    except Exception:
- yield traceback.format_exc()
- return
-
- if shared.args.monkey_patch:
- for n, m in lora_model.named_modules():
- if '4bit' in str(type(m)):
- if m.is_v1_model:
- m.zeros = m.zeros.half()
- m.scales = m.scales.half()
-
-    class Tracked:
- def __init__(self):
- self.current_steps = 0
- self.max_steps = 0
- self.did_save = False
-
- tracked = Tracked()
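-    # save_steps is given in micro-steps; the trainer's global_step counts optimizer steps, so convert between the two.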
- actual_save_steps = math.ceil(save_steps / gradient_accumulation_steps)
-
- class Callbacks(transformers.TrainerCallback):
- def on_step_begin(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, **kwargs):
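-            # global_step counts optimizer steps; scale by gradient_accumulation_steps to report micro-steps in the UI.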
- tracked.current_steps = state.global_step * gradient_accumulation_steps
- tracked.max_steps = state.max_steps * gradient_accumulation_steps
- if WANT_INTERRUPT:
- control.should_epoch_stop = True
- control.should_training_stop = True
- elif state.global_step > 0 and actual_save_steps > 0 and state.global_step % actual_save_steps == 0:
- lora_model.save_pretrained(f"{lora_file_path}/checkpoint-{tracked.current_steps}/")
-
- def on_substep_end(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, **kwargs):
- tracked.current_steps += 1
- if WANT_INTERRUPT:
- control.should_epoch_stop = True
- control.should_training_stop = True
-
- trainer = transformers.Trainer(
- model=lora_model,
- train_dataset=train_data,
- eval_dataset=eval_data,
- args=transformers.TrainingArguments(
- per_device_train_batch_size=micro_batch_size,
- gradient_accumulation_steps=gradient_accumulation_steps,
- warmup_steps=math.ceil(warmup_steps / gradient_accumulation_steps),
- num_train_epochs=epochs,
- learning_rate=actual_lr,
-            fp16=not shared.args.cpu,
- optim=optimizer,
- logging_steps=5,
- evaluation_strategy="steps" if eval_data is not None else "no",
- eval_steps=math.ceil(eval_steps / gradient_accumulation_steps) if eval_data is not None else None,
- save_strategy="no",
- output_dir=lora_file_path,
- lr_scheduler_type=lr_scheduler_type,
-            load_best_model_at_end=eval_data is not None,
- # TODO: Enable multi-device support
- ddp_find_unused_parameters=None,
- no_cuda=shared.args.cpu
- ),
- data_collator=transformers.DataCollatorForLanguageModeling(shared.tokenizer, mlm=False),
-        callbacks=[Callbacks()]
- )
-
- lora_model.config.use_cache = False
-
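-    # torch.compile requires PyTorch 2.x and, at the time of writing, is not supported on Windows.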
- if torch.__version__ >= "2" and sys.platform != "win32":
- lora_model = torch.compile(lora_model)
-
- # == Save parameters for reuse ==
- with open(f"{lora_file_path}/training_parameters.json", 'w', encoding='utf-8') as file:
-        local_vars = locals()
-        json.dump({x: local_vars[x] for x in PARAMETERS}, file)
-
- # == Main run and monitor loop ==
- logging.info("Starting training...")
- yield "Starting..."
- if WANT_INTERRUPT:
- yield "Interrupted before start."
- return
-
- def threaded_run():
- trainer.train()
- # Note: save in the thread in case the gradio thread breaks (eg browser closed)
- lora_model.save_pretrained(lora_file_path)
- logging.info("LoRA training run is completed and saved.")
- tracked.did_save = True
-
- thread = threading.Thread(target=threaded_run)
- thread.start()
- last_step = 0
- start_time = time.perf_counter()
-
- while thread.is_alive():
- time.sleep(0.5)
- if WANT_INTERRUPT:
- yield "Interrupting, please wait... *(Run will stop after the current training step completes.)*"
-
- elif tracked.current_steps != last_step:
- last_step = tracked.current_steps
- time_elapsed = time.perf_counter() - start_time
- if time_elapsed <= 0:
- timer_info = ""
- total_time_estimate = 999
- else:
- its = tracked.current_steps / time_elapsed
- if its > 1:
- timer_info = f"`{its:.2f}` it/s"
- else:
- timer_info = f"`{1.0/its:.2f}` s/it"
-
- total_time_estimate = (1.0 / its) * (tracked.max_steps)
-
- yield f"Running... **{tracked.current_steps}** / **{tracked.max_steps}** ... {timer_info}, {format_time(time_elapsed)} / {format_time(total_time_estimate)} ... {format_time(total_time_estimate - time_elapsed)} remaining"
-
- # Saving in the train thread might fail if an error occurs, so save here if so.
- if not tracked.did_save:
- logging.info("Training complete, saving...")
- lora_model.save_pretrained(lora_file_path)
-
- if WANT_INTERRUPT:
- logging.info("Training interrupted.")
- yield f"Interrupted. Incomplete LoRA saved to `{lora_file_path}`"
- else:
- logging.info("Training complete!")
- yield f"Done! LoRA saved to `{lora_file_path}`"
-
-
-def split_chunks(arr, step):
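-    # Yield consecutive slices of arr, each step items long (the final slice may be shorter).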
- for i in range(0, len(arr), step):
- yield arr[i:i + step]
-
-
-def cut_chunk_for_newline(chunk: str, max_length: int):
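-    # Trim the chunk to newline boundaries: drop a leading partial line and a trailing partial line if each is shorter than max_length.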
- if '\n' not in chunk:
- return chunk
-
- first_newline = chunk.index('\n')
- if first_newline < max_length:
- chunk = chunk[first_newline + 1:]
-
- if '\n' not in chunk:
- return chunk
-
- last_newline = chunk.rindex('\n')
- if len(chunk) - last_newline < max_length:
- chunk = chunk[:last_newline]
-
- return chunk
-
-
-def format_time(seconds: float):
- if seconds < 120:
- return f"`{seconds:.0f}` seconds"
-
- minutes = seconds / 60
- if minutes < 120:
- return f"`{minutes:.0f}` minutes"
-
- hours = minutes / 60
- return f"`{hours:.0f}` hours"
diff --git a/spaces/anusurabhi/girl_race_detector/app.py b/spaces/anusurabhi/girl_race_detector/app.py
deleted file mode 100644
index 4973c5a28e9ef462c2d95ec3b5f3c48d2cc64483..0000000000000000000000000000000000000000
--- a/spaces/anusurabhi/girl_race_detector/app.py
+++ /dev/null
@@ -1,15 +0,0 @@
-#/export
-from fastai.vision.all import *
-import gradio as gr
-
-learn = load_learner('race_model.pkl')
-categories = ('chinese', 'indian', 'japanese', 'korean')
-def classify_image(img):
- pred, idx, probs = learn.predict(img)
- return dict(zip(categories, map(float,probs)))
-
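-# Note: gr.inputs / gr.outputs are the legacy Gradio (pre-3.x) API; current releases use gr.Image / gr.Label instead.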
-image = gr.inputs.Image(shape=(192, 192))
-label = gr.outputs.Label()
-intf = gr.Interface(fn=classify_image, inputs=image, outputs=label)
-intf.launch(inline=False)
-
diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/ScuNET/preload.py b/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/ScuNET/preload.py
deleted file mode 100644
index 4ce82b1d4349b24192b1915d022ed4fda9f31e5c..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/ScuNET/preload.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import os
-from modules import paths
-
-
-def preload(parser):
- parser.add_argument("--scunet-models-path", type=str, help="Path to directory with ScuNET model file(s).", default=os.path.join(paths.models_path, 'ScuNET'))
diff --git a/spaces/apsys/hetfit/unet.html b/spaces/apsys/hetfit/unet.html
deleted file mode 100644
index 37847599eb3624ab69a98a42009d924259c9a55c..0000000000000000000000000000000000000000
--- a/spaces/apsys/hetfit/unet.html
+++ /dev/null
@@ -1,1458 +0,0 @@
-
-U-Net model for Denoising Diffusion Probabilistic Models (DDPM)
-
This is a U-Net based model to predict noise ϵ