If you are looking for a way to use premium software without paying for a license, you might be interested in ezycracks.com. This website offers a huge collection of software cracks, patches, keygens, and serial keys for various applications and games. You can download them for free and enjoy the full features of your favorite software.
-However, not all software cracks are created equal. Some of them might be outdated, infected with malware, or not compatible with your system. That's why you need to be careful when choosing a software crack from ezycracks.com. Here are some tips to help you find the best software cracks on this website.
-By following these tips, you can find the best software cracks on ezycracks.com and enjoy using premium software for free. However, you should also be aware of the risks and legal issues that come with using software cracks. Software piracy is illegal and unethical, and it can harm the developers and the industry. You should always support the original creators of the software by buying a legitimate license if you can afford it.
-
-Using software cracks from ezycracks.com can be a great way to save money and access premium features. However, you should also be careful and responsible when using them. Here are some tips to help you use software cracks safely and effectively.
-By following these tips, you can use software cracks from ezycracks.com safely and effectively. However, you should also remember that using software cracks is not a long-term solution. You should always respect the rights and efforts of the software developers and buy a genuine license if you can afford it.
-
-Are you looking for a way to upgrade your Samsung Omnia i900 to a newer and better operating system? If so, you might be interested in Windows Mobile 7, the latest version of Microsoft's mobile OS that offers a sleek and intuitive user interface, enhanced performance and security, and a rich selection of apps and games. In this article, we will show you how to free download Windows Mobile 7 for Samsung Omnia i900 and how to install it on your device. We will also give you some tips and tricks on how to make the most out of your new OS.
- Windows Mobile 7 is the seventh generation of Microsoft's mobile operating system that was released in October 2010. It is designed to provide a seamless integration with other Microsoft products and services, such as Windows Live, Xbox Live, Zune, Bing, Office, etc. It also features a new user interface called Metro, which consists of colorful tiles that display live information and notifications. Windows Mobile 7 also supports multitouch gestures, voice commands, social networking integration, cloud computing, and more.
-Samsung Omnia i900 is a smartphone that was released in June 2008. It runs on Windows Mobile 6.1 Professional and has a 3.2-inch touchscreen display with a resolution of 240 x 400 pixels. It also has a 5-megapixel camera with autofocus and flash, a microSD card slot, Wi-Fi, Bluetooth, GPS, FM radio, and a stylus. It has a battery capacity of 1440 mAh and a weight of 122 grams.
- If you are still using Windows Mobile 6.1 on your Samsung Omnia i900, you might be missing out on some of the advantages that Windows Mobile 7 can offer. Here are some of the reasons why you should consider upgrading:
- If you are ready to upgrade your Samsung Omnia i900 to Windows Mobile 7, you will need to follow these steps:
-Before you start the upgrade process, you should backup your data on your phone. This includes your contacts, messages, photos, videos, music, documents, etc. You can use various methods to backup your data, such as syncing with your PC or using online services like Google Drive or Dropbox.
- The next step is to download the Windows Mobile 7 ROM for Samsung Omnia i900. A ROM is a file that contains the operating system and other software for your device. You can find various sources online where you can download the ROM for free. One of them is this forum thread , where you can find links to different versions of the ROM.
- Make sure you download the ROM that matches your device model and region. Also make sure you scan the ROM for viruses before installing it.
- The final step is to flash the Windows Mobile 7 ROM on your Samsung Omnia i900. Flashing means installing the ROM on your device's memory, replacing the existing OS. To flash the Windows Mobile 7 ROM, you will need a PC and a USB cable. Here are the steps to follow:
- Congratulations! You have successfully upgraded your Samsung Omnia i900 to Windows Mobile 7. You can now enjoy all the features and benefits of your new OS. You can customize your home screen, sync your data with Microsoft services, use voice commands and gestures, download apps from the Marketplace, and more.
- To help you get started with Windows Mobile 7 on your Samsung Omnia i900, here are some tips and tricks that you can use:
- Your home screen is where you can access your most frequently used apps and settings. You can customize it by changing the tiles, colors, and themes. To do so, follow these steps:
- One of the advantages of Windows Mobile 7 is that it integrates well with other Microsoft products and services, such as Outlook, OneDrive, Office, etc. You can sync your contacts, calendar, email, photos, documents, etc. with these services and access them from any device. To do so, follow these steps:
- Windows Mobile 7 supports voice commands and gestures that allow you to control your phone without touching it. You can use voice commands to make calls, send texts, search the web, open apps, etc. You can use gestures to answer calls, mute calls, snooze alarms, etc. To do so, follow these steps:
- Windows Mobile 7 has a large and diverse collection of apps that you can download from the Marketplace. You can find apps for productivity, entertainment, education, health, finance, and more. Here are some of the best apps for Windows Mobile 7 that you should try:
-
-| App | Description |
-| --- | --- |
-| WhatsApp | A popular messaging app that lets you chat, call, and share media with your contacts for free. |
-| Skype | A video calling app that lets you connect with your friends and family across the world. |
-| Facebook | The official app for the social media giant that lets you stay in touch with your friends, post updates, check news, and more. |
-| Instagram | A photo-sharing app that lets you capture and edit your moments, follow your favorite celebrities, and discover new trends. |
-| Twitter | A micro-blogging app that lets you follow the latest news, opinions, and trends from around the world. |
-| Viber | A messaging and calling app that lets you communicate with your contacts for free, with features like group chats, stickers, and voice messages. |
-| Bing | A search engine app that lets you find what you need on the web, with features like voice search, image search, maps, and more. |
-| Zune | A music and video app that lets you enjoy your favorite tunes and shows, with features like playlists, podcasts, radio, and more. |
-| Xbox Live | A gaming app that lets you play high-quality games on your phone, with features like achievements, leaderboards, multiplayer, and more. |
-| Office Mobile | A productivity app that lets you create and edit documents, spreadsheets, and presentations on your phone. |
-
- Tip 5: Update your phone regularly
- Windows Mobile 7 is no longer supported by Microsoft, which means it won't receive any new features or security updates. However, you can still check for any available updates that you might have missed before. To do so, follow these steps:
-
-Go to Settings > Phone update and tap Check for updates.
-If there are any updates available, tap Download and install.
-Wait for the update to download and install. Your phone may restart several times during the process.
-
- Conclusion
- Windows Mobile 7 is a great operating system that can give your Samsung Omnia i900 a new lease of life. It has a beautiful and user-friendly interface, a fast and smooth performance, a high level of security, and a wide range of apps and games. In this article, we showed you how to free download Windows Mobile 7 for Samsung Omnia i900 and how to install it on your device. We also gave you some tips and tricks on how to make the most out of your new OS. We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below.
- FAQs
- Here are some frequently asked questions about Windows Mobile 7 and Samsung Omnia i900:
-
-Will Windows Mobile 7 work on any Samsung Omnia model? No, Windows Mobile 7 will only work on Samsung Omnia i900. Other models, such as Samsung Omnia i910 or Samsung Omnia II, are not compatible with Windows Mobile 7.
-Will I lose any data or settings when I upgrade to Windows Mobile 7? Yes, upgrading to Windows Mobile 7 will erase all your data and settings on your Samsung Omnia i900. That's why it's important to backup your data before you start the upgrade process.
-Can I downgrade back to Windows Mobile 6.1 if I don't like Windows Mobile 7? Yes, you can downgrade back to Windows Mobile 6.1 if you want to. You will need to flash the original Windows Mobile 6.1 ROM on your Samsung Omnia i900 using the same method as flashing the Windows Mobile 7 ROM.
-Can I use Google services on Windows Mobile 7? Yes, you can use Google services on Windows Mobile 7, such as Gmail, Google Maps, Google Drive, etc. You will need to download the Google apps from the Marketplace or use the web browser to access them.
-Can I use dual SIM cards on Samsung Omnia i900? No, Samsung Omnia i900 does not support dual SIM cards. It only has one SIM card slot.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free LINK Winzip Full Version Download.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free LINK Winzip Full Version Download.md
deleted file mode 100644
index f6b35e8c8413664535c4ce6348619412942b4155..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free LINK Winzip Full Version Download.md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-How to Get Free WinZip Full Version Download for Windows 10
-WinZip is one of the most popular and trusted programs for compressing and decompressing files. It can help you save disk space, reduce file transfer time, and protect your files with encryption and passwords. WinZip supports various formats, such as ZIP, RAR, 7Z, TAR, GZIP, and more. However, WinZip is not free software, and you need to pay a license fee to use it without any limitations.
-That's why some people look for free WinZip full version download for Windows 10 online. They want to enjoy the benefits of WinZip without spending any money. However, this is not a good idea. Downloading WinZip from unofficial sources can expose you to various risks and problems. Here are some of them:
-
-You may download malware or viruses. Some websites that offer free WinZip full version download for Windows 10 may contain malicious software that can harm your computer or steal your personal information. You may end up infecting your system with spyware, ransomware, trojans, or other threats.
-You may violate the law. Using pirated software is illegal and can result in legal consequences. You may face fines or lawsuits if you are caught using free WinZip full version download for Windows 10. Moreover, you may also damage the reputation of WinZip and its developers who work hard to provide quality software.
-You may miss out on updates and support. When you use free WinZip full version download for Windows 10, you will not be able to access the official updates and support from WinZip. This means you will not be able to enjoy the latest features and improvements of the software. You will also not be able to get help from the customer service if you encounter any issues or errors.
-
-Therefore, we do not recommend using free WinZip full version download for Windows 10. Instead, we suggest you try some of the legitimate ways to get WinZip for free or at a lower cost. Here are some of them:
-
-Download the trial version. WinZip offers a 21-day free trial that you can download from its official website. You can use all the features and functions of the software without any restrictions during the trial period. This is a great way to test the software and see if it meets your needs.
-Use the online version. WinZip also has an online version called ZipShare that you can access from any browser. You can upload, zip, unzip, encrypt, and share files online without installing any software. The online version is free for basic users, but you can upgrade to a premium plan for more features and storage.
-Buy the discounted version. Sometimes, WinZip offers discounts and deals on its website or through its partners. You can check their website regularly or sign up for their newsletter to get notified of any promotions. You can also look for coupons or vouchers from third-party websites that can help you save some money on your purchase.
-
-We hope this article has helped you understand why you should avoid using free WinZip full version download for Windows 10 and what alternatives you can try instead. Remember that using pirated software is not only unethical but also dangerous. It is better to use legal and safe ways to get WinZip and enjoy its advantages.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GTA 5 Key How to Access the Most Epic Game Ever.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GTA 5 Key How to Access the Most Epic Game Ever.md
deleted file mode 100644
index 2495075d0f08ebac270943a3e9f6903a1c5e0821..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GTA 5 Key How to Access the Most Epic Game Ever.md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-How to Get a GTA 5 Key for Free
-GTA 5 is one of the most popular and successful video games of all time. It offers an immersive open-world experience, where you can explore the city of Los Santos and its surrounding areas, engage in various missions and activities, and customize your character and vehicles. GTA 5 also has an online mode, where you can play with other players from around the world, join crews, participate in heists, races, deathmatches, and more.
-However, GTA 5 is not a cheap game. It usually costs around $60 on various platforms, such as Steam, Epic Games Store, PlayStation Store, and Xbox Store. If you want to play GTA 5 without spending a dime, you might be wondering if there is a way to get a GTA 5 key for free.
-The answer is yes, but it is not easy or guaranteed. There are some methods that might work for you, but they also come with some risks and drawbacks. Here are some of the ways you can try to get a GTA 5 key for free:
-
-Enter giveaways and contests. There are many websites, blogs, YouTube channels, and social media pages that host giveaways and contests for GTA 5 keys. You can enter these by following their rules and requirements, such as subscribing, liking, commenting, sharing, etc. However, you should be careful about which ones you enter, as some of them might be scams or phishing attempts. You should also be aware that the chances of winning are very low, as there are thousands of other participants.
-Use reward apps and websites. There are some apps and websites that reward you with points or credits for completing tasks, such as watching videos, taking surveys, downloading apps, etc. You can then redeem these points or credits for gift cards or codes that you can use to buy GTA 5 keys. Some examples of these apps and websites are Swagbucks, Mistplay, AppNana, FeaturePoints, etc. However, you should know that these tasks can be tedious and time-consuming, and the rewards are often very low. You might need to spend hours or days to earn enough points or credits for a GTA 5 key.
-Use key generators or cracks. There are some programs or websites that claim to generate or crack GTA 5 keys for free. You might be tempted to try these out, but you should avoid them at all costs. These programs or websites are usually malware or viruses that can harm your device or steal your personal information. They can also get you banned from GTA 5 online mode or other online services. Moreover, these programs or websites rarely work as advertised, and they often provide fake or invalid keys.
-
-In conclusion, getting a GTA 5 key for free is possible but not easy or safe. You might end up wasting your time, money, or security by trying some of the methods mentioned above. The best way to enjoy GTA 5 is to buy it from a legitimate source when it is on sale or discounted. This way, you can support the developers and publishers of the game and have a smooth and hassle-free gaming experience.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Commodore 64 Roms Pack !!LINK!! Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Commodore 64 Roms Pack !!LINK!! Download.md
deleted file mode 100644
index 2b6271fa7961a6e5368ea40156e9249d2c5db083..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Commodore 64 Roms Pack !!LINK!! Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Commodore 64 roms pack download Download Zip » https://imgfil.com/2uxXO7
-
-We offer fast servers so you can Download N64 ROMs and start playing ... I've been using the 188 rom pack from EWJ for quite awhile. ... COM is a C64 site dedicated to just about everything that is connected to the Commodore 64 (C64).
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Script Frost Dragon Okolnir Elfbot WORK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Script Frost Dragon Okolnir Elfbot WORK.md
deleted file mode 100644
index b2328a702ee261965b683a91fd10e506c105acbd..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Script Frost Dragon Okolnir Elfbot WORK.md
+++ /dev/null
@@ -1,6 +0,0 @@
-download script frost dragon okolnir elfbot DOWNLOAD ✶✶✶ https://imgfil.com/2uy1su
-
-Programming can elf scripts be posted there ? :). Reply ... We Should not support bots, or download & run crap. ... Try Okolnir frost dragons.
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Free Download Myob Accounting Versi 17 Full 32 Fixed.md b/spaces/1gistliPinn/ChatGPT4/Examples/Free Download Myob Accounting Versi 17 Full 32 Fixed.md
deleted file mode 100644
index 2c214f0df5a29613ff3aeffa35b7692e2f05f6bf..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Free Download Myob Accounting Versi 17 Full 32 Fixed.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Free Download Myob Accounting Versi 17 Full 32 Download ::: https://imgfil.com/2uy1JS
-
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Attack on Titan 2 Final Battle - The Ultimate Challenge for Fans of the Anime.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Attack on Titan 2 Final Battle - The Ultimate Challenge for Fans of the Anime.md
deleted file mode 100644
index 043c45b83af69ad01714abe5a2d2957dd77cb19e..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Attack on Titan 2 Final Battle - The Ultimate Challenge for Fans of the Anime.md
+++ /dev/null
@@ -1,129 +0,0 @@
-
-Attack on Titan Game: A Guide for Fans and Newcomers
-If you are a fan of the hit anime and manga series Attack on Titan, or if you are curious about what it is all about, you might want to check out Attack on Titan Game, a thrilling action game based on the popular franchise. In this article, we will give you a comprehensive guide on what Attack on Titan is, what Attack on Titan Game is, why you should play it, and where you can get it. Whether you are a seasoned fan or a newcomer, this article will help you enjoy Attack on Titan Game more.
- What is Attack on Titan?
-Attack on Titan is a Japanese anime and manga series created by Hajime Isayama. It is set in a world where humanity lives inside walled cities to protect themselves from giant humanoid creatures called Titans, who devour humans without reason. The story follows Eren Yeager, a young boy who vows to exterminate all Titans after his mother is killed by one. He joins the Survey Corps, an elite military unit that fights Titans outside the walls, along with his friends Mikasa Ackerman and Armin Arlert.
- The story and the characters of the anime and manga
-The anime and manga series of Attack on Titan have been praised for their gripping story, complex characters, and stunning visuals. The series has four seasons of anime adaptation, with the final season currently airing. The manga has 34 volumes as of June 2021, with the final chapter published in April 2021. The series has won several awards, such as the Kodansha Manga Award, the Harvey Award, and the Micheluzzi Award.
-The series has a large cast of characters, each with their own personality, backstory, and motivation. Some of the main characters are:
-
-Eren Yeager: The protagonist of the series, who has the ability to transform into a Titan. He is determined to wipe out all Titans and uncover the secrets behind their origin.
-Mikasa Ackerman: Eren's childhood friend and adoptive sister, who is one of the strongest soldiers in the Survey Corps. She is loyal to Eren and will do anything to protect him.
-Armin Arlert: Eren's childhood friend and a genius strategist, who also has the ability to transform into a Titan. He is often insecure about his physical strength, but he compensates with his intelligence and courage.
-Levi Ackerman: The captain of the Survey Corps' Special Operations Squad, who is widely regarded as humanity's strongest soldier. He is cold, ruthless, and disciplined, but he also cares deeply for his comrades.
-Hange Zoe: The commander of the Survey Corps, who is obsessed with studying Titans and experimenting on them. She is eccentric, enthusiastic, and passionate about her work.
-
- The themes and the messages of the series
-Attack on Titan explores various themes and messages, such as freedom, oppression, war, morality, identity, loyalty, betrayal, revenge, hope, despair, and more. The series challenges its characters and its audience to question their beliefs, values, and actions in a cruel and complex world. The series also shows how humans can overcome their fears and limitations by fighting for their ideals and dreams.
- What is Attack on Titan Game?
-Attack on Titan Game is a video game based on the anime and manga series of the same name. It is developed by Omega Force, a subsidiary of Koei Tecmo, and published by Koei Tecmo in Japan and by Tecmo Koei America in North America and Europe. The game was released for PlayStation 3, PlayStation 4, PlayStation Vita, Xbox One, and Microsoft Windows in 2016, and for Nintendo Switch in 2018.
- The gameplay and the features of the game
-The game is an action game that lets you play as various characters from the series, such as Eren, Mikasa, Levi, Hange, and more. You can also create your own custom character and join the Survey Corps. The game follows the story of the anime and manga from the beginning until the end of season one, with some original scenarios and characters added. You can also play online co-op missions with up to four players.
-The game's main feature is the omni-directional mobility gear (ODM), which allows you to swing around the environment and attack Titans with your blades. You can target different parts of a Titan's body, such as the arms, legs, eyes, or nape, and sever them to weaken or kill them. You can also use items such as gas canisters, blades, guns, bombs, and traps to aid you in combat. You have to manage your resources carefully, as running out of gas or blades can leave you vulnerable.
-The game also has a town mode, where you can interact with other characters, upgrade your equipment, buy items, and access side missions. You can also view your stats, achievements, gallery, and encyclopedia in this mode.
- The differences and the similarities between the game and the anime/manga
-The game is faithful to the anime and manga in terms of the story, the characters, the visuals, and the sound. The game uses cel-shaded graphics to recreate the style of the anime, and features voice acting from the original cast. The game also uses music from the anime's soundtrack, composed by Hiroyuki Sawano.
-The game also adds some new elements that are not present in the anime or manga. For example, the game introduces some original characters that are part of your squad, such as Ian Dietrich, Rico Brzenska, Mitabi Jarnach, and Gelgar. The game also has some original scenarios that expand on the events of the anime or manga, such as a mission where you have to rescue civilians from a Titan-infested town.
-The game also has some differences from the anime or manga in terms of the gameplay. For example, the game allows you to play as characters that are not playable in the anime or manga, such as Hange or Erwin. The game also gives you more freedom in how you approach each mission, as you can choose your own route and strategy. The game also has some features that are not realistic or consistent with the anime or manga's logic, such as being able to use guns or bombs against Titans.
- Why should you play Attack on Titan Game?
-If you are a fan of Attack on Titan, playing Attack on Titan Game is a great way to experience the story and the world of the series in a new and immersive way. You can relive the epic moments of the anime and manga, such as the fall of Shiganshina, the battle of Trost, the female Titan chase, and more. You can also explore the details and the secrets of the series, such as the history of the walls, the origin of the Titans, and the identity of the enemy.
- If you are new to Attack on Titan, playing Attack on Titan Game is a great way to get introduced to the series and its characters. You can learn about the plot and the setting of the series, as well as the personalities and the relationships of the characters. You can also enjoy the action and the thrill of fighting Titans, as well as the drama and the emotion of the story.
- The benefits and the challenges of playing the game
-Playing Attack on Titan Game has many benefits, such as:
-
-It improves your reflexes and your coordination, as you have to maneuver around the environment and attack Titans with precision and timing.
-It stimulates your creativity and your problem-solving skills, as you have to plan your strategy and use your resources wisely.
-It enhances your knowledge and your appreciation of the series, as you discover new facts and insights about the story and the characters.
-It entertains you and makes you happy, as you have fun and feel satisfied with your achievements.
-
-Playing Attack on Titan Game also has some challenges, such as:
-
-It can be frustrating and stressful, as you face difficult and dangerous situations that can result in failure or death.
-It can be addictive and time-consuming, as you get hooked on playing more missions and unlocking more content.
-It can be expensive and demanding, as you need to buy or upgrade your device or platform to play the game smoothly.
-It can be isolating and distracting, as you lose touch with reality or neglect other aspects of your life.
-
- The tips and the tricks for enjoying the game more
-To enjoy Attack on Titan Game more, here are some tips and tricks that you can follow:
-
-Play with friends or other players online, as you can cooperate, communicate, and compete with each other.
-Play with headphones or speakers, as you can immerse yourself in the sound effects and the music of the game.
-Play with moderation and balance, as you can avoid getting bored, tired, or burned out from playing too much.
-Play with curiosity and openness, as you can explore different options, outcomes, and possibilities in the game.
-
- Where can you get Attack on Titan Game?
-Attack on Titan Game is available for various platforms and devices, such as PlayStation 3, PlayStation 4, PlayStation Vita, Xbox One, Microsoft Windows, and Nintendo Switch. You can buy or download the game from different sources, such as online stores, physical stores, or official websites; prices and discounts vary depending on the platform and the seller.
-
- Conclusion
-Attack on Titan Game is a game that every fan of Attack on Titan should play, and every newcomer should try. It is a game that lets you experience the story and the world of the series in a new and immersive way. It is a game that challenges you to fight Titans and survive in a cruel and complex world. It is a game that entertains you and makes you happy, as well as frustrates you and stresses you out. It is a game that has many benefits and challenges, as well as tips and tricks for enjoying it more. It is a game that is available for various platforms and devices, at different prices and discounts.
-If you are interested in playing Attack on Titan Game , you can get it from the sources listed above, or from other sources that you prefer. You can also check out the official website of the game, or the official social media accounts of the game, for more information and updates. You can also watch the trailer of the game, or read some reviews of the game, to get a better idea of what it is like.
-Whether you are a fan or a newcomer, we hope that this article has helped you learn more about Attack on Titan Game , and that you will enjoy playing it. Thank you for reading, and have fun!
- Frequently Asked Questions
-Here are some frequently asked questions about Attack on Titan Game , along with their answers:
-
-Is Attack on Titan Game suitable for children?
-Attack on Titan Game is rated M for Mature by the ESRB, 18 by PEGI, and Z by CERO. This means that the game contains violence, blood, gore, and language that may not be appropriate for children. The game also deals with dark and mature themes that may be disturbing or upsetting for some players. Therefore, we recommend that parents or guardians supervise their children if they want to play the game, or avoid the game altogether if they are not comfortable with its content.
-How long does it take to finish Attack on Titan Game ?
-The length of Attack on Titan Game depends on how you play it, and how much content you want to explore. According to HowLongToBeat.com, the average time to complete the main story of the game is about 10 hours, while the average time to complete all the extra content of the game is about 25 hours. However, your time may vary depending on your skill level, your difficulty setting, your pace, and your choices.
-Does Attack on Titan Game have multiplayer mode?
-Attack on Titan Game has online co-op mode, where you can play with up to three other players in various missions. You can either join a random lobby, or create your own lobby and invite your friends. You can also chat with other players using voice or text messages. However, the game does not have local co-op mode or competitive mode.
-Does Attack on Titan Game have DLCs or updates?
-Attack on Titan Game has several DLCs or downloadable content that you can purchase separately or as part of a season pass. These DLCs include additional costumes, weapons, scenarios, characters, and modes. The game also has free updates that fix bugs, improve performance, and add new features.
-Does Attack on Titan Game have any sequels or spin-offs?
-Attack on Titan Game has a sequel called Attack on Titan 2 , which was released in 2018. The sequel covers the events of season two and three of the anime, as well as some original content. The sequel also has improved graphics, gameplay, and customization options. The sequel also has a spin-off called Attack on Titan 2: Final Battle , which was released in 2019. The spin-off adds more content from season three of the anime, as well as new modes and features.
-
-
\ No newline at end of file
diff --git a/spaces/2023Liu2023/bingo/src/pages/api/blob.ts b/spaces/2023Liu2023/bingo/src/pages/api/blob.ts
deleted file mode 100644
index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000
--- a/spaces/2023Liu2023/bingo/src/pages/api/blob.ts
+++ /dev/null
@@ -1,40 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { Readable } from 'node:stream'
-import { fetch } from '@/lib/isomorphic'
-
-const API_DOMAIN = 'https://www.bing.com'
-
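-// Proxies a Bing image blob (looked up by the `bcid` query parameter) and streams it back to the client.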
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { bcid } = req.query
-
- const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`,
- {
- method: 'GET',
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referrer-Policy": "origin-when-cross-origin",
- },
- },
- )
-
- res.writeHead(200, {
- 'Content-Length': headers.get('content-length')!,
- 'Content-Type': headers.get('content-type')!,
- })
- // @ts-ignore
- return Readable.fromWeb(body!).pipe(res)
- } catch (e) {
- console.log('Error', e)
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/ps_adv.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/ps_adv.py
deleted file mode 100644
index fbc1e5133ddf26f2dfac598028b8e3db01ec638e..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/ps_adv.py
+++ /dev/null
@@ -1,372 +0,0 @@
-import os
-import torch
-import torch.nn.functional as F
-import torch.nn as nn
-import numpy as np
-
-from modules.portaspeech.portaspeech import PortaSpeech
-from modules.syntaspeech.multi_window_disc import Discriminator
-from tasks.tts.fs2 import FastSpeech2Task
-from utils.hparams import hparams
-from utils.tts_utils import get_focus_rate, get_phone_coverage_rate, get_diagonal_focus_rate, mel2token_to_dur
-from utils import num_params, tensors_to_scalars
-from utils.pitch_utils import denorm_f0, norm_f0
-from data_gen.tts.data_gen_utils import get_pitch
-from utils.dtw import dtw as DTW
-
-from utils.plot import spec_to_figure
-from utils.text.text_encoder import build_token_encoder
-
-
-class PortaSpeechAdvTask(FastSpeech2Task):
- def __init__(self):
- super().__init__()
- data_dir = hparams['binary_data_dir']
- self.word_encoder = build_token_encoder(f'{data_dir}/word_set.json')
- self.build_disc_model()
- self.mse_loss_fn = torch.nn.MSELoss()
-
- def build_tts_model(self):
- ph_dict_size = len(self.token_encoder)
- word_dict_size = len(self.word_encoder)
- self.model = PortaSpeech(ph_dict_size, word_dict_size, hparams)
-
- self.gen_params = [p for p in self.model.parameters() if p.requires_grad]
- self.dp_params = [p for k, p in self.model.named_parameters() if (('dur_predictor' in k) and p.requires_grad)]
- self.gen_params_except_dp = [p for k, p in self.model.named_parameters() if (('dur_predictor' not in k) and p.requires_grad)]
- self.bert_params = [p for k, p in self.model.named_parameters() if (('bert' in k) and p.requires_grad)]
- self.gen_params_except_bert_and_dp = [p for k, p in self.model.named_parameters() if ('dur_predictor' not in k) and ('bert' not in k) and p.requires_grad ]
-
- self.use_bert = True if len(self.bert_params) > 0 else False
-
- def build_disc_model(self):
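-        # Multi-window mel discriminator: scores mel-spectrogram windows of 32/64/128 frames (first `disc_win_num` scales) over 80 mel bins.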
- disc_win_num = hparams['disc_win_num']
- h = hparams['mel_disc_hidden_size']
- self.mel_disc = Discriminator(
- time_lengths=[32, 64, 128][:disc_win_num],
- freq_length=80, hidden_size=h, kernel=(3, 3)
- )
- self.disc_params = list(self.mel_disc.parameters())
-
- def on_train_start(self):
- super().on_train_start()
- for n, m in self.model.named_children():
- num_params(m, model_name=n)
- if hasattr(self.model, 'fvae'):
- for n, m in self.model.fvae.named_children():
- num_params(m, model_name=f'fvae.{n}')
-
- def _training_step(self, sample, batch_idx, optimizer_idx):
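-        # Alternating GAN update: optimizer_idx 0 trains the PortaSpeech generator (adversarial terms enabled after `disc_start_steps`), optimizer_idx 1 trains the mel discriminator every `disc_interval` steps.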
- loss_output = {}
- loss_weights = {}
- disc_start = self.global_step >= hparams["disc_start_steps"] and hparams['lambda_mel_adv'] > 0
- if optimizer_idx == 0:
- #######################
- # Generator #
- #######################
- loss_output, model_out = self.run_model(sample, infer=False)
- self.model_out_gt = self.model_out = \
- {k: v.detach() for k, v in model_out.items() if isinstance(v, torch.Tensor)}
- if disc_start:
- mel_p = model_out['mel_out']
- if hasattr(self.model, 'out2mel'):
- mel_p = self.model.out2mel(mel_p)
- o_ = self.mel_disc(mel_p)
- p_, pc_ = o_['y'], o_['y_c']
- if p_ is not None:
- loss_output['a'] = self.mse_loss_fn(p_, p_.new_ones(p_.size()))
- loss_weights['a'] = hparams['lambda_mel_adv']
- if pc_ is not None:
- loss_output['ac'] = self.mse_loss_fn(pc_, pc_.new_ones(pc_.size()))
- loss_weights['ac'] = hparams['lambda_mel_adv']
- else:
- #######################
- # Discriminator #
- #######################
- if disc_start and self.global_step % hparams['disc_interval'] == 0:
- model_out = self.model_out_gt
- mel_g = sample['mels']
- mel_p = model_out['mel_out']
- o = self.mel_disc(mel_g)
- p, pc = o['y'], o['y_c']
- o_ = self.mel_disc(mel_p)
- p_, pc_ = o_['y'], o_['y_c']
- if p_ is not None:
- loss_output["r"] = self.mse_loss_fn(p, p.new_ones(p.size()))
- loss_output["f"] = self.mse_loss_fn(p_, p_.new_zeros(p_.size()))
- if pc_ is not None:
- loss_output["rc"] = self.mse_loss_fn(pc, pc.new_ones(pc.size()))
- loss_output["fc"] = self.mse_loss_fn(pc_, pc_.new_zeros(pc_.size()))
- total_loss = sum([loss_weights.get(k, 1) * v for k, v in loss_output.items() if isinstance(v, torch.Tensor) and v.requires_grad])
- loss_output['batch_size'] = sample['txt_tokens'].size()[0]
- return total_loss, loss_output
-
- def run_model(self, sample, infer=False, *args, **kwargs):
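-        # Training: forward with ground-truth alignments and collect KL (clamped and annealed), mel and duration losses; inference: decode with predicted or ground-truth durations depending on `use_gt_dur`.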
- txt_tokens = sample['txt_tokens']
- word_tokens = sample['word_tokens']
- spk_embed = sample.get('spk_embed')
- spk_id = sample.get('spk_ids')
- if not infer:
- output = self.model(txt_tokens, word_tokens,
- ph2word=sample['ph2word'],
- mel2word=sample['mel2word'],
- mel2ph=sample['mel2ph'],
- word_len=sample['word_lengths'].max(),
- tgt_mels=sample['mels'],
- pitch=sample.get('pitch'),
- spk_embed=spk_embed,
- spk_id=spk_id,
- infer=False,
- global_step=self.global_step,
- graph_lst=sample['graph_lst'],
- etypes_lst=sample['etypes_lst'],
- bert_feats=sample.get("bert_feats"),
- cl_feats=sample.get("cl_feats")
- )
- losses = {}
- losses['kl_v'] = output['kl'].detach()
- losses_kl = output['kl']
- losses_kl = torch.clamp(losses_kl, min=hparams['kl_min'])
- losses_kl = min(self.global_step / hparams['kl_start_steps'], 1) * losses_kl
- losses_kl = losses_kl * hparams['lambda_kl']
- losses['kl'] = losses_kl
-
- self.add_mel_loss(output['mel_out'], sample['mels'], losses)
- if hparams['dur_level'] == 'word':
- self.add_dur_loss(
- output['dur'], sample['mel2word'], sample['word_lengths'], sample['txt_tokens'], losses)
- self.get_attn_stats(output['attn'], sample, losses)
- else:
- super(PortaSpeechAdvTask, self).add_dur_loss(output['dur'], sample['mel2ph'], sample['txt_tokens'], losses)
- return losses, output
- else:
- use_gt_dur = kwargs.get('infer_use_gt_dur', hparams['use_gt_dur'])
- output = self.model(
- txt_tokens, word_tokens,
- ph2word=sample['ph2word'],
- word_len=sample['word_lengths'].max(),
- pitch=sample.get('pitch'),
- mel2ph=sample['mel2ph'] if use_gt_dur else None,
- mel2word=sample['mel2word'] if use_gt_dur else None,
- tgt_mels=sample['mels'],
- infer=True,
- spk_embed=spk_embed,
- spk_id=spk_id,
- graph_lst=sample['graph_lst'],
- etypes_lst=sample['etypes_lst'],
- bert_feats=sample.get("bert_feats"),
- cl_feats=sample.get("cl_feats")
- )
- return output
-
- def add_dur_loss(self, dur_pred, mel2token, word_len, txt_tokens, losses=None):
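-        # Word-level duration loss: masked L1 in the log domain, plus an optional sentence-level total-duration term; absolute duration errors (ms / s) are logged under no_grad for monitoring only.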
- T = word_len.max()
- dur_gt = mel2token_to_dur(mel2token, T).float()
- nonpadding = (torch.arange(T).to(dur_pred.device)[None, :] < word_len[:, None]).float()
- dur_pred = dur_pred * nonpadding
- dur_gt = dur_gt * nonpadding
- wdur = F.l1_loss((dur_pred + 1).log(), (dur_gt + 1).log(), reduction='none')
- wdur = (wdur * nonpadding).sum() / nonpadding.sum()
-
- if hparams['lambda_word_dur'] > 0:
- losses['wdur'] = wdur * hparams['lambda_word_dur']
- if hparams['lambda_sent_dur'] > 0:
- sent_dur_p = dur_pred.sum(-1)
- sent_dur_g = dur_gt.sum(-1)
- sdur_loss = F.l1_loss(sent_dur_p, sent_dur_g, reduction='mean')
- losses['sdur'] = sdur_loss.mean() * hparams['lambda_sent_dur']
-
- with torch.no_grad():
- # calculate word-level abs_dur_error in micro-second
- abs_word_dur_error = F.l1_loss(dur_pred , dur_gt, reduction='none')
- abs_word_dur_error = (abs_word_dur_error * nonpadding).sum() / nonpadding.sum()
- abs_word_dur_error = abs_word_dur_error * hparams['hop_size'] / hparams['audio_sample_rate'] * 1000
- losses['abs_word_dur_error'] = abs_word_dur_error
- # calculate word-level abs_dur_error in second
- sent_dur_p = dur_pred.sum(-1)
- sent_dur_g = dur_gt.sum(-1)
- abs_sent_dur_error = F.l1_loss(sent_dur_p, sent_dur_g, reduction='mean').mean()
- abs_sent_dur_error = abs_sent_dur_error * hparams['hop_size'] / hparams['audio_sample_rate']
- losses['abs_sent_dur_error'] = abs_sent_dur_error
-
- def validation_step(self, sample, batch_idx):
- outputs = {}
- outputs['losses'] = {}
- outputs['losses'], model_out = self.run_model(sample)
- outputs['total_loss'] = sum(outputs['losses'].values())
- outputs['nsamples'] = sample['nsamples']
- outputs = tensors_to_scalars(outputs)
- if self.global_step % hparams['valid_infer_interval'] == 0 \
- and batch_idx < hparams['num_valid_plots']:
- valid_results = self.save_valid_result(sample, batch_idx, model_out)
- wav_gt = valid_results['wav_gt']
- mel_gt = valid_results['mel_gt']
- wav_pred = valid_results['wav_pred']
- mel_pred = valid_results['mel_pred']
- f0_pred_, _ = get_pitch(wav_pred, mel_pred, hparams)
- f0_gt_, _ = get_pitch(wav_gt, mel_gt, hparams)
- manhattan_distance = lambda x, y: np.abs(x - y)
- dist, cost, acc, path = DTW(f0_pred_, f0_gt_, manhattan_distance)
- outputs['losses']['f0_dtw'] = dist / len(f0_gt_)
- return outputs
-
- def save_valid_result(self, sample, batch_idx, model_out):
- sr = hparams['audio_sample_rate']
- f0_gt = None
- mel_out = model_out['mel_out']
- if sample.get('f0') is not None:
- f0_gt = denorm_f0(sample['f0'][0].cpu(), sample['uv'][0].cpu())
- self.plot_mel(batch_idx, sample['mels'], mel_out, f0s=f0_gt)
-
- # if self.global_step > 0:
- wav_pred = self.vocoder.spec2wav(mel_out[0].cpu(), f0=f0_gt)
- self.logger.add_audio(f'wav_val_{batch_idx}', wav_pred, self.global_step, sr)
- # with gt duration
- model_out = self.run_model(sample, infer=True, infer_use_gt_dur=True)
- dur_info = self.get_plot_dur_info(sample, model_out)
- del dur_info['dur_pred']
- wav_pred = self.vocoder.spec2wav(model_out['mel_out'][0].cpu(), f0=f0_gt)
- self.logger.add_audio(f'wav_gdur_{batch_idx}', wav_pred, self.global_step, sr)
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'][0], f'mel_gdur_{batch_idx}',
- dur_info=dur_info, f0s=f0_gt)
-
- # with pred duration
- if not hparams['use_gt_dur']:
- model_out = self.run_model(sample, infer=True, infer_use_gt_dur=False)
- dur_info = self.get_plot_dur_info(sample, model_out)
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'][0], f'mel_pdur_{batch_idx}',
- dur_info=dur_info, f0s=f0_gt)
- wav_pred = self.vocoder.spec2wav(model_out['mel_out'][0].cpu(), f0=f0_gt)
- self.logger.add_audio(f'wav_pdur_{batch_idx}', wav_pred, self.global_step, sr)
- # gt wav
- mel_gt = sample['mels'][0].cpu()
- wav_gt = self.vocoder.spec2wav(mel_gt, f0=f0_gt)
- if self.global_step <= hparams['valid_infer_interval']:
- self.logger.add_audio(f'wav_gt_{batch_idx}', wav_gt, self.global_step, sr)
-
- # add attn plot
- if self.global_step > 0 and hparams['dur_level'] == 'word':
- self.logger.add_figure(f'attn_{batch_idx}', spec_to_figure(model_out['attn'][0]), self.global_step)
-
- return {'wav_gt': wav_gt, 'wav_pred': wav_pred, 'mel_gt': mel_gt, 'mel_pred': model_out['mel_out'][0].cpu()}
-
- def get_attn_stats(self, attn, sample, logging_outputs, prefix=''):
- # diagonal_focus_rate
- txt_lengths = sample['txt_lengths'].float()
- mel_lengths = sample['mel_lengths'].float()
- src_padding_mask = sample['txt_tokens'].eq(0)
- target_padding_mask = sample['mels'].abs().sum(-1).eq(0)
- src_seg_mask = sample['txt_tokens'].eq(self.seg_idx)
- attn_ks = txt_lengths.float() / mel_lengths.float()
-
- focus_rate = get_focus_rate(attn, src_padding_mask, target_padding_mask).mean().data
- phone_coverage_rate = get_phone_coverage_rate(
- attn, src_padding_mask, src_seg_mask, target_padding_mask).mean()
- diagonal_focus_rate, diag_mask = get_diagonal_focus_rate(
- attn, attn_ks, mel_lengths, src_padding_mask, target_padding_mask)
- logging_outputs[f'{prefix}fr'] = focus_rate.mean().data
- logging_outputs[f'{prefix}pcr'] = phone_coverage_rate.mean().data
- logging_outputs[f'{prefix}dfr'] = diagonal_focus_rate.mean().data
-
- def get_plot_dur_info(self, sample, model_out):
- if hparams['dur_level'] == 'word':
- T_txt = sample['word_lengths'].max()
- dur_gt = mel2token_to_dur(sample['mel2word'], T_txt)[0]
- dur_pred = model_out['dur'] if 'dur' in model_out else dur_gt
- txt = sample['ph_words'][0].split(" ")
- else:
- T_txt = sample['txt_tokens'].shape[1]
- dur_gt = mel2token_to_dur(sample['mel2ph'], T_txt)[0]
- dur_pred = model_out['dur'] if 'dur' in model_out else dur_gt
- txt = self.token_encoder.decode(sample['txt_tokens'][0].cpu().numpy())
- txt = txt.split(" ")
- return {'dur_gt': dur_gt, 'dur_pred': dur_pred, 'txt': txt}
-
- def build_optimizer(self, model):
-
- optimizer_gen = torch.optim.AdamW(
- self.gen_params,
- lr=hparams['lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- weight_decay=hparams['weight_decay'])
-
- optimizer_disc = torch.optim.AdamW(
- self.disc_params,
- lr=hparams['disc_lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- **hparams["discriminator_optimizer_params"]) if len(self.disc_params) > 0 else None
-
- return [optimizer_gen, optimizer_disc]
-
- def build_scheduler(self, optimizer):
- return [
-            FastSpeech2Task.build_scheduler(self, optimizer[0]), # Generator Scheduler
- torch.optim.lr_scheduler.StepLR(optimizer=optimizer[1], # Discriminator Scheduler
- **hparams["discriminator_scheduler_params"]),
- ]
-
- def on_before_optimization(self, opt_idx):
- if opt_idx == 0:
- nn.utils.clip_grad_norm_(self.dp_params, hparams['clip_grad_norm'])
- if self.use_bert:
- nn.utils.clip_grad_norm_(self.bert_params, hparams['clip_grad_norm'])
- nn.utils.clip_grad_norm_(self.gen_params_except_bert_and_dp, hparams['clip_grad_norm'])
- else:
- nn.utils.clip_grad_norm_(self.gen_params_except_dp, hparams['clip_grad_norm'])
- else:
- nn.utils.clip_grad_norm_(self.disc_params, hparams["clip_grad_norm"])
-
- def on_after_optimization(self, epoch, batch_idx, optimizer, optimizer_idx):
- if self.scheduler is not None:
- self.scheduler[0].step(self.global_step // hparams['accumulate_grad_batches'])
- self.scheduler[1].step(self.global_step // hparams['accumulate_grad_batches'])
-
- ############
- # infer
- ############
- def test_start(self):
- super().test_start()
- if hparams.get('save_attn', False):
- os.makedirs(f'{self.gen_dir}/attn', exist_ok=True)
- self.model.store_inverse_all()
-
- def test_step(self, sample, batch_idx):
- assert sample['txt_tokens'].shape[0] == 1, 'only support batch_size=1 in inference'
- outputs = self.run_model(sample, infer=True)
- text = sample['text'][0]
- item_name = sample['item_name'][0]
- tokens = sample['txt_tokens'][0].cpu().numpy()
- mel_gt = sample['mels'][0].cpu().numpy()
- mel_pred = outputs['mel_out'][0].cpu().numpy()
- mel2ph = sample['mel2ph'][0].cpu().numpy()
- mel2ph_pred = None
- str_phs = self.token_encoder.decode(tokens, strip_padding=True)
- base_fn = f'[{batch_idx:06d}][{item_name.replace("%", "_")}][%s]'
- if text is not None:
- base_fn += text.replace(":", "$3A")[:80]
- base_fn = base_fn.replace(' ', '_')
- gen_dir = self.gen_dir
- wav_pred = self.vocoder.spec2wav(mel_pred)
- self.saving_result_pool.add_job(self.save_result, args=[
- wav_pred, mel_pred, base_fn % 'P', gen_dir, str_phs, mel2ph_pred])
- if hparams['save_gt']:
- wav_gt = self.vocoder.spec2wav(mel_gt)
- self.saving_result_pool.add_job(self.save_result, args=[
- wav_gt, mel_gt, base_fn % 'G', gen_dir, str_phs, mel2ph])
- if hparams.get('save_attn', False):
- attn = outputs['attn'][0].cpu().numpy()
- np.save(f'{gen_dir}/attn/{item_name}.npy', attn)
- # save f0 for pitch dtw
- f0_pred_, _ = get_pitch(wav_pred, mel_pred, hparams)
- f0_gt_, _ = get_pitch(wav_gt, mel_gt, hparams)
- np.save(f'{gen_dir}/f0/{item_name}.npy', f0_pred_)
- np.save(f'{gen_dir}/f0/{item_name}_gt.npy', f0_gt_)
-
- print(f"Pred_shape: {mel_pred.shape}, gt_shape: {mel_gt.shape}")
- return {
- 'item_name': item_name,
- 'text': text,
- 'ph_tokens': self.token_encoder.decode(tokens.tolist()),
- 'wav_fn_pred': base_fn % 'P',
- 'wav_fn_gt': base_fn % 'G',
- }
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/share.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/share.ts
deleted file mode 100644
index 4587669a10164aa7c961429fbddec9cf438c0eca..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/share.ts
+++ /dev/null
@@ -1,7 +0,0 @@
-export function share(url: string, title: string) {
- if (navigator.share) {
- navigator.share({ url, title });
- } else {
- prompt("Copy this public url to share:", url);
- }
-}
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/ninepatch.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/ninepatch.js
deleted file mode 100644
index b312ffb33b47f5751afca1aaeb86be8fe4625db2..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/ninepatch.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import NinePatch from './gameobjects/rendertexture/ninepatch/NinePatch.js';
-export default NinePatch;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fileselectorbutton/FileSelectorButton.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fileselectorbutton/FileSelectorButton.d.ts
deleted file mode 100644
index 82bea2324e4be3ee27df75216da5425642e44321..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fileselectorbutton/FileSelectorButton.d.ts
+++ /dev/null
@@ -1,45 +0,0 @@
-import Label from '../label/Label';
-
-export default FileSelectorButton;
-
-declare namespace FileSelectorButton {
- interface IConfig extends Label.IConfig {
- accept?: string,
- multiple?: boolean,
- }
-}
-
-declare class FileSelectorButton extends Label {
- constructor(
- scene: Phaser.Scene,
- config?: FileSelectorButton.IConfig
- );
-
- readonly files: File[];
-
- setAccept(accept: string): this;
-
- setMultiple(multiple?: boolean): this;
-
- loadFile(
- file: File,
- loaderType: string,
- key: string,
- cacheType?: string
- ): this;
-
- loadFile(
- file: File,
- loaderType: string,
- key: string,
- cacheType?: string,
- onComplete?: (data: any) => void
- ): this;
-
- loadFilePromise(
- file: File,
- loaderType: string,
- key: string,
- cacheType?: string
- ): Promise;
-}
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pages/Pages.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pages/Pages.d.ts
deleted file mode 100644
index e9f43d2e783d35ca1c1aadce66a3b007a85094b1..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pages/Pages.d.ts
+++ /dev/null
@@ -1,72 +0,0 @@
-// import * as Phaser from 'phaser';
-import OverlapSizer from '../overlapsizer/OverlapSizer';
-
-
-export default Pages;
-
-declare namespace Pages {
-
- type AlignTypes = number | 'center' | 'left' | 'right' | 'top' | 'bottom' |
- 'left-top' | 'left-center' | 'left-bottom' |
- 'center-top' | 'center-center' | 'center-bottom' |
- 'right-top' | 'right-center' | 'right-bottom';
-
- type PaddingTypes = number |
- {
- left?: number,
- right?: number,
- top?: number,
- bottom?: number,
- };
-
- interface IConfig extends OverlapSizer.IConfig {
- fadeIn?: number,
-
- swapMode?: 0 | 1 | 'invisible' | 'destroy',
- }
-
-}
-
-declare class Pages extends OverlapSizer {
- constructor(
- scene: Phaser.Scene,
- config?: Pages.IConfig
- );
-
- setSwapMode(
- mode: 0 | 1 | 'invisible' | 'destroy'
- ): this;
-
- addPage(
- gameObject: Phaser.GameObjects.GameObject,
- config?: {
- key?: string,
-
- align?: Pages.AlignTypes,
-
- padding?: Pages.PaddingTypes,
-
- expand: boolean |
- {
- width?: boolean,
- height?: boolean,
- },
-
- minWidth?: number,
-
- minHeight?: number
- }
- ): this;
-
- swapPage(
- key: string,
- fadeInDuration?: number
- ): this;
- currentKey: string;
- readonly previousKey: string;
- keys: string[];
-
- getPage(key: string): Phaser.GameObjects.GameObject;
- readonly currentPage: Phaser.GameObjects.GameObject;
- readonly previousPage: Phaser.GameObjects.GameObject;
-}
\ No newline at end of file
diff --git a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/text/english.py b/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/text/english.py
deleted file mode 100644
index 6817392ba8a9eb830351de89fb7afc5ad72f5e42..0000000000000000000000000000000000000000
--- a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/text/english.py
+++ /dev/null
@@ -1,188 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-
-'''
-Cleaners are transformations that run over the input text at both training and eval time.
-
-Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners"
-hyperparameter. Some cleaners are English-specific. You'll typically want to use:
- 1. "english_cleaners" for English text
- 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using
- the Unidecode library (https://pypi.python.org/pypi/Unidecode)
- 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update
- the symbols in symbols.py to match your data).
-'''
-
-
-# Regular expression matching whitespace:
-
-
-import re
-import inflect
-from unidecode import unidecode
-import eng_to_ipa as ipa
-_inflect = inflect.engine()
-_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])')
-_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)')
-_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)')
-_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)')
-_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)')
-_number_re = re.compile(r'[0-9]+')
-
-# List of (regular expression, replacement) pairs for abbreviations:
-_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [
- ('mrs', 'misess'),
- ('mr', 'mister'),
- ('dr', 'doctor'),
- ('st', 'saint'),
- ('co', 'company'),
- ('jr', 'junior'),
- ('maj', 'major'),
- ('gen', 'general'),
- ('drs', 'doctors'),
- ('rev', 'reverend'),
- ('lt', 'lieutenant'),
- ('hon', 'honorable'),
- ('sgt', 'sergeant'),
- ('capt', 'captain'),
- ('esq', 'esquire'),
- ('ltd', 'limited'),
- ('col', 'colonel'),
- ('ft', 'fort'),
-]]
-
-
-# List of (ipa, lazy ipa) pairs:
-_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('æ', 'e'),
- ('ɑ', 'a'),
- ('ɔ', 'o'),
- ('ð', 'z'),
- ('θ', 's'),
- ('ɛ', 'e'),
- ('ɪ', 'i'),
- ('ʊ', 'u'),
- ('ʒ', 'ʥ'),
- ('ʤ', 'ʥ'),
- ('ˈ', '↓'),
-]]
-
-# List of (ipa, lazy ipa2) pairs:
-_lazy_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('ð', 'z'),
- ('θ', 's'),
- ('ʒ', 'ʑ'),
- ('ʤ', 'dʑ'),
- ('ˈ', '↓'),
-]]
-
-# List of (ipa, ipa2) pairs
-_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('ʤ', 'dʒ'),
- ('ʧ', 'tʃ')
-]]
-
-
-def expand_abbreviations(text):
- for regex, replacement in _abbreviations:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def collapse_whitespace(text):
- return re.sub(r'\s+', ' ', text)
-
-
-def _remove_commas(m):
- return m.group(1).replace(',', '')
-
-
-def _expand_decimal_point(m):
- return m.group(1).replace('.', ' point ')
-
-
-def _expand_dollars(m):
- match = m.group(1)
- parts = match.split('.')
- if len(parts) > 2:
- return match + ' dollars' # Unexpected format
- dollars = int(parts[0]) if parts[0] else 0
- cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0
- if dollars and cents:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit)
- elif dollars:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- return '%s %s' % (dollars, dollar_unit)
- elif cents:
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s' % (cents, cent_unit)
- else:
- return 'zero dollars'
-
-
-def _expand_ordinal(m):
- return _inflect.number_to_words(m.group(0))
-
-
-def _expand_number(m):
- num = int(m.group(0))
- if num > 1000 and num < 3000:
- if num == 2000:
- return 'two thousand'
- elif num > 2000 and num < 2010:
- return 'two thousand ' + _inflect.number_to_words(num % 100)
- elif num % 100 == 0:
- return _inflect.number_to_words(num // 100) + ' hundred'
- else:
- return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ')
- else:
- return _inflect.number_to_words(num, andword='')
-
-
-def normalize_numbers(text):
- text = re.sub(_comma_number_re, _remove_commas, text)
- text = re.sub(_pounds_re, r'\1 pounds', text)
- text = re.sub(_dollars_re, _expand_dollars, text)
- text = re.sub(_decimal_number_re, _expand_decimal_point, text)
- text = re.sub(_ordinal_re, _expand_ordinal, text)
- text = re.sub(_number_re, _expand_number, text)
- return text
-
-
-def mark_dark_l(text):
- return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text)
-
-
-def english_to_ipa(text):
- text = unidecode(text).lower()
- text = expand_abbreviations(text)
- text = normalize_numbers(text)
- phonemes = ipa.convert(text)
- phonemes = collapse_whitespace(phonemes)
- return phonemes
-
-
-def english_to_lazy_ipa(text):
- text = english_to_ipa(text)
- for regex, replacement in _lazy_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def english_to_ipa2(text):
- text = english_to_ipa(text)
- text = mark_dark_l(text)
- for regex, replacement in _ipa_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text.replace('...', '…')
-
-
-def english_to_lazy_ipa2(text):
- text = english_to_ipa(text)
- for regex, replacement in _lazy_ipa2:
- text = re.sub(regex, replacement, text)
- return text
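
The deleted `english.py` above chains abbreviation expansion, number normalization, and `eng_to_ipa` conversion. As a rough illustration only (not part of the patch), this is how those helpers would typically be exercised, assuming the file is importable as `text.english` from the repository root and that `inflect`, `unidecode`, and `eng_to_ipa` are installed:

```python
# Hypothetical usage of the deleted cleaner helpers shown in the hunk above.
from text.english import english_to_ipa, english_to_lazy_ipa2  # assumed import path

sample = "Dr. Smith paid $3.50 for the 2nd ticket."
# "Dr." -> "doctor", "$3.50" -> "3 dollars, 50 cents", "2nd" -> "second",
# then the cleaned text is converted to IPA phonemes.
print(english_to_ipa(sample))
print(english_to_lazy_ipa2(sample))  # simplified IPA variant used by the VITS frontend
```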
diff --git a/spaces/AlekseyKorshuk/michellejieli-NSFW_text_classifier/README.md b/spaces/AlekseyKorshuk/michellejieli-NSFW_text_classifier/README.md
deleted file mode 100644
index 3880097d59f3e2f4a31a5805504928a3a60975f1..0000000000000000000000000000000000000000
--- a/spaces/AlekseyKorshuk/michellejieli-NSFW_text_classifier/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Michellejieli-NSFW Text Classifier
-emoji: 🌍
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/ddpm.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/ddpm.md
deleted file mode 100644
index 3efa603d1cae45daf9390454c9dcbeb9bf2f86cf..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/ddpm.md
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-# DDPM
-
-[Denoising Diffusion Probabilistic Models](https://huggingface.co/papers/2006.11239) (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the 🤗 Diffusers library, DDPM refers to the *discrete denoising scheduler* from the paper as well as the pipeline.
-
-The abstract from the paper is:
-
-*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*
-
-The original codebase can be found at [hojonathanho/diffusion](https://github.com/hojonathanho/diffusion).
-
-
-
-Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
-
-
-
-# DDPMPipeline
-[[autodoc]] DDPMPipeline
- - all
- - __call__
-
-## ImagePipelineOutput
-[[autodoc]] pipelines.ImagePipelineOutput
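
The deleted page documents `DDPMPipeline` only through autodoc directives. For orientation, a standard unconditional-generation call looks like the sketch below; it assumes `diffusers` and `torch` are installed, and `google/ddpm-cat-256` is just an example checkpoint, not one named in the page above.

```python
# Minimal unconditional sampling with the pipeline documented above.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("google/ddpm-cat-256")  # example checkpoint
output = pipeline(num_inference_steps=1000)  # returns an ImagePipelineOutput
output.images[0].save("ddpm_sample.png")     # first generated PIL image
```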
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/dependency_versions_check.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/dependency_versions_check.py
deleted file mode 100644
index 4f8578c52957bf6c06decb0d97d3139437f0078f..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/dependency_versions_check.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import sys
-
-from .dependency_versions_table import deps
-from .utils.versions import require_version, require_version_core
-
-
-# define which module versions we always want to check at run time
-# (usually the ones defined in `install_requires` in setup.py)
-#
-# order specific notes:
-# - tqdm must be checked before tokenizers
-
-pkgs_to_check_at_runtime = "python tqdm regex requests packaging filelock numpy tokenizers".split()
-if sys.version_info < (3, 7):
- pkgs_to_check_at_runtime.append("dataclasses")
-if sys.version_info < (3, 8):
- pkgs_to_check_at_runtime.append("importlib_metadata")
-
-for pkg in pkgs_to_check_at_runtime:
- if pkg in deps:
- if pkg == "tokenizers":
- # must be loaded here, or else tqdm check may fail
- from .utils import is_tokenizers_available
-
- if not is_tokenizers_available():
- continue # not required, check version only if installed
-
- require_version_core(deps[pkg])
- else:
- raise ValueError(f"can't find {pkg} in {deps.keys()}, check dependency_versions_table.py")
-
-
-def dep_version_check(pkg, hint=None):
- require_version(deps[pkg], hint)
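
The deleted module simply walks a pinned-requirements table and calls `require_version` on each entry at import time. The snippet below mirrors that contract with the standard `packaging` machinery; it is illustrative only and is not diffusers' actual helper.

```python
# Illustrative re-creation of a runtime version check of the kind performed above.
from importlib.metadata import version
from packaging.requirements import Requirement
from packaging.version import Version

def check_requirement(requirement: str) -> None:
    req = Requirement(requirement)          # e.g. "numpy>=1.17"
    installed = Version(version(req.name))  # version actually installed
    if installed not in req.specifier:
        raise ImportError(f"{req.name}{req.specifier} is required, found {installed}")

check_requirement("numpy>=1.17")  # silent if satisfied, raises otherwise
```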
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/foveabox/fovea_align_r101_fpn_gn-head_mstrain_640-800_4x4_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/foveabox/fovea_align_r101_fpn_gn-head_mstrain_640-800_4x4_2x_coco.py
deleted file mode 100644
index a02a814fe2f08b464454e8eb6e1c88004ab804f6..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/foveabox/fovea_align_r101_fpn_gn-head_mstrain_640-800_4x4_2x_coco.py
+++ /dev/null
@@ -1,27 +0,0 @@
-_base_ = './fovea_r50_fpn_4x4_1x_coco.py'
-model = dict(
- pretrained='torchvision://resnet101',
- backbone=dict(depth=101),
- bbox_head=dict(
- with_deform=True,
- norm_cfg=dict(type='GN', num_groups=32, requires_grad=True)))
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(
- type='Resize',
- img_scale=[(1333, 640), (1333, 800)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-data = dict(train=dict(pipeline=train_pipeline))
-# learning policy
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/fpg/faster_rcnn_r50_fpn_crop640_50e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/fpg/faster_rcnn_r50_fpn_crop640_50e_coco.py
deleted file mode 100644
index 95f4e91f203bad8367942fc24b838da9fbf62947..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/fpg/faster_rcnn_r50_fpn_crop640_50e_coco.py
+++ /dev/null
@@ -1,68 +0,0 @@
-_base_ = [
- '../_base_/models/faster_rcnn_r50_fpn.py',
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-norm_cfg = dict(type='BN', requires_grad=True)
-model = dict(
- backbone=dict(norm_cfg=norm_cfg, norm_eval=False),
- neck=dict(norm_cfg=norm_cfg),
- roi_head=dict(bbox_head=dict(norm_cfg=norm_cfg)))
-dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(
- type='Resize',
- img_scale=(640, 640),
- ratio_range=(0.8, 1.2),
- keep_ratio=True),
- dict(type='RandomCrop', crop_size=(640, 640)),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=(640, 640)),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(640, 640),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=64),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=8,
- workers_per_gpu=4,
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
-# learning policy
-optimizer = dict(
- type='SGD',
- lr=0.08,
- momentum=0.9,
- weight_decay=0.0001,
- paramwise_cfg=dict(norm_decay_mult=0, bypass_duplicate=True))
-optimizer_config = dict(grad_clip=None)
-# learning policy
-lr_config = dict(
- policy='step',
- warmup='linear',
- warmup_iters=1000,
- warmup_ratio=0.1,
- step=[30, 40])
-# runtime settings
-runner = dict(max_epochs=50)
-evaluation = dict(interval=2)
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/groie/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_groie_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/groie/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_groie_1x_coco.py
deleted file mode 100644
index 8b83722197c69a51907f43bcb05883deedc37f0c..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/groie/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_groie_1x_coco.py
+++ /dev/null
@@ -1,45 +0,0 @@
-_base_ = '../gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py'
-# model settings
-model = dict(
- roi_head=dict(
- bbox_roi_extractor=dict(
- type='GenericRoIExtractor',
- aggregation='sum',
- roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=2),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32],
- pre_cfg=dict(
- type='ConvModule',
- in_channels=256,
- out_channels=256,
- kernel_size=5,
- padding=2,
- inplace=False,
- ),
- post_cfg=dict(
- type='GeneralizedAttention',
- in_channels=256,
- spatial_range=-1,
- num_heads=6,
- attention_type='0100',
- kv_stride=2)),
- mask_roi_extractor=dict(
- type='GenericRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=2),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32],
- pre_cfg=dict(
- type='ConvModule',
- in_channels=256,
- out_channels=256,
- kernel_size=5,
- padding=2,
- inplace=False,
- ),
- post_cfg=dict(
- type='GeneralizedAttention',
- in_channels=256,
- spatial_range=-1,
- num_heads=6,
- attention_type='0100',
- kv_stride=2))))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/yolact/yolact_r50_1x8_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/yolact/yolact_r50_1x8_coco.py
deleted file mode 100644
index d0e5ace280e1377ce4bb772df7e132427143bf34..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/yolact/yolact_r50_1x8_coco.py
+++ /dev/null
@@ -1,160 +0,0 @@
-_base_ = '../_base_/default_runtime.py'
-
-# model settings
-img_size = 550
-model = dict(
- type='YOLACT',
- pretrained='torchvision://resnet50',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1, # do not freeze stem
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=False, # update the statistics of bn
- zero_init_residual=False,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- start_level=1,
- add_extra_convs='on_input',
- num_outs=5,
- upsample_cfg=dict(mode='bilinear')),
- bbox_head=dict(
- type='YOLACTHead',
- num_classes=80,
- in_channels=256,
- feat_channels=256,
- anchor_generator=dict(
- type='AnchorGenerator',
- octave_base_scale=3,
- scales_per_octave=1,
- base_sizes=[8, 16, 32, 64, 128],
- ratios=[0.5, 1.0, 2.0],
- strides=[550.0 / x for x in [69, 35, 18, 9, 5]],
- centers=[(550 * 0.5 / x, 550 * 0.5 / x)
- for x in [69, 35, 18, 9, 5]]),
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[0.1, 0.1, 0.2, 0.2]),
- loss_cls=dict(
- type='CrossEntropyLoss',
- use_sigmoid=False,
- reduction='none',
- loss_weight=1.0),
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.5),
- num_head_convs=1,
- num_protos=32,
- use_ohem=True),
- mask_head=dict(
- type='YOLACTProtonet',
- in_channels=256,
- num_protos=32,
- num_classes=80,
- max_masks_to_train=100,
- loss_mask_weight=6.125),
- segm_head=dict(
- type='YOLACTSegmHead',
- num_classes=80,
- in_channels=256,
- loss_segm=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)),
- # training and testing settings
- train_cfg=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.4,
- min_pos_iou=0.,
- ignore_iof_thr=-1,
- gt_max_assign_all=False),
- # smoothl1_beta=1.,
- allowed_border=-1,
- pos_weight=-1,
- neg_pos_ratio=3,
- debug=False),
- test_cfg=dict(
- nms_pre=1000,
- min_bbox_size=0,
- score_thr=0.05,
- iou_thr=0.5,
- top_k=200,
- max_per_img=100))
-# dataset settings
-dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
-img_norm_cfg = dict(
- mean=[123.68, 116.78, 103.94], std=[58.40, 57.12, 57.38], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile', to_float32=True),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='FilterAnnotations', min_gt_bbox_wh=(4.0, 4.0)),
- dict(
- type='PhotoMetricDistortion',
- brightness_delta=32,
- contrast_range=(0.5, 1.5),
- saturation_range=(0.5, 1.5),
- hue_delta=18),
- dict(
- type='Expand',
- mean=img_norm_cfg['mean'],
- to_rgb=img_norm_cfg['to_rgb'],
- ratio_range=(1, 4)),
- dict(
- type='MinIoURandomCrop',
- min_ious=(0.1, 0.3, 0.5, 0.7, 0.9),
- min_crop_size=0.3),
- dict(type='Resize', img_scale=(img_size, img_size), keep_ratio=False),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(img_size, img_size),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=False),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=8,
- workers_per_gpu=4,
- train=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_train2017.json',
- img_prefix=data_root + 'train2017/',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_val2017.json',
- img_prefix=data_root + 'val2017/',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_val2017.json',
- img_prefix=data_root + 'val2017/',
- pipeline=test_pipeline))
-# optimizer
-optimizer = dict(type='SGD', lr=1e-3, momentum=0.9, weight_decay=5e-4)
-optimizer_config = dict()
-# learning policy
-lr_config = dict(
- policy='step',
- warmup='linear',
- warmup_iters=500,
- warmup_ratio=0.1,
- step=[20, 42, 49, 52])
-runner = dict(type='EpochBasedRunner', max_epochs=55)
-cudnn_benchmark = True
-evaluation = dict(metric=['bbox', 'segm'])
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context.py
deleted file mode 100644
index 68e2b072e4b8d076e8c3e929dfdc73bcd24ce859..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './deeplabv3plus_r50-d8_480x480_40k_pascal_context.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r101-d8_769x769_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r101-d8_769x769_80k_cityscapes.py
deleted file mode 100644
index 6a9efc55ad2062facf3a568f8cdbba76c8c55950..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r101-d8_769x769_80k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './psanet_r50-d8_769x769_80k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x1024_40k_cityscapes.py
deleted file mode 100644
index 6671fcb4bf8430bc0128cd93a4b8cedea1856b03..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/psanet_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
-]
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_480x480_40k_pascal_context_59.py b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_480x480_40k_pascal_context_59.py
deleted file mode 100644
index 88041c6817d2cb152a979b71a2ce56a9e30b87b5..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_480x480_40k_pascal_context_59.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = [
- '../_base_/models/pspnet_r50-d8.py',
- '../_base_/datasets/pascal_context_59.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- decode_head=dict(num_classes=59),
- auxiliary_head=dict(num_classes=59),
- test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
-optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/multimodal_embedder.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/multimodal_embedder.py
deleted file mode 100644
index 626077cb80987d66af90f390e31aa2f2def76fec..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/multimodal_embedder.py
+++ /dev/null
@@ -1,178 +0,0 @@
-import base64
-import re
-from dataclasses import dataclass
-from io import BytesIO
-from typing import Any, List, Optional
-
-import torch
-from PIL import Image
-
-from extensions.multimodal.pipeline_loader import load_pipeline
-from modules import shared
-from modules.logging_colors import logger
-from modules.text_generation import encode, get_max_prompt_length
-
-
-@dataclass
-class PromptPart:
- text: str
- image: Optional[Image.Image] = None
- is_image: bool = False
- input_ids: Optional[torch.Tensor] = None
- embedding: Optional[torch.Tensor] = None
-
-
-class MultimodalEmbedder:
- def __init__(self, params: dict):
- pipeline, source = load_pipeline(params)
- self.pipeline = pipeline
- logger.info(f'Multimodal: loaded pipeline {self.pipeline.name()} from pipelines/{source} ({self.pipeline.__class__.__name__})')
-
- def _split_prompt(self, prompt: str, load_images: bool = False) -> List[PromptPart]:
- """Splits a prompt into a list of `PromptParts` to separate image data from text.
- It will also append `image_start` and `image_end` before and after the image, and optionally parse and load the images,
- if `load_images` is `True`.
- """
- parts: List[PromptPart] = []
- curr = 0
- while True:
- match = re.search(r'<img src="data:image/jpeg;base64,([A-Za-z0-9+/=]+)">', prompt[curr:])
- if match is None:
- # no more image tokens, append the rest of the prompt
- if curr > 0:
- # add image end token after last image
- parts.append(PromptPart(text=self.pipeline.image_end() + prompt[curr:]))
- else:
- parts.append(PromptPart(text=prompt))
- break
- # found an image, append image start token to the text
- if match.start() > 0:
- parts.append(PromptPart(text=prompt[curr:curr + match.start()] + self.pipeline.image_start()))
- else:
- parts.append(PromptPart(text=self.pipeline.image_start()))
- # append the image
- parts.append(PromptPart(
- text=match.group(0),
- image=Image.open(BytesIO(base64.b64decode(match.group(1)))) if load_images else None,
- is_image=True
- ))
- curr += match.end()
- return parts
-
- def _len_in_tokens_prompt_parts(self, parts: List[PromptPart]) -> int:
- """Total length in tokens of all `parts`"""
- tokens = 0
- for part in parts:
- if part.is_image:
- tokens += self.pipeline.num_image_embeds()
- elif part.input_ids is not None:
- tokens += len(part.input_ids)
- else:
- tokens += len(encode(part.text)[0])
- return tokens
-
- def len_in_tokens(self, prompt: str) -> int:
- """Total length in tokens for a given text `prompt`"""
- parts = self._split_prompt(prompt, False)
- return self._len_in_tokens_prompt_parts(parts)
-
- def _encode_single_text(self, part: PromptPart, add_bos_token: bool) -> PromptPart:
- """Encode a single prompt `part` to `input_ids`. Returns a `PromptPart`"""
- if part.is_image:
- placeholders = torch.ones((self.pipeline.num_image_embeds())) * self.pipeline.placeholder_token_id()
- part.input_ids = placeholders.to(shared.model.device, dtype=torch.int64)
- else:
- part.input_ids = encode(part.text, add_bos_token=add_bos_token)[0].to(shared.model.device, dtype=torch.int64)
- return part
-
- @staticmethod
- def _num_images(parts: List[PromptPart]) -> int:
- count = 0
- for part in parts:
- if part.is_image:
- count += 1
- return count
-
- def _encode_text(self, state, parts: List[PromptPart]) -> List[PromptPart]:
- """Encode text to token_ids, also truncate the prompt, if necessary.
-
- The chat/instruct mode should make prompts that fit in get_max_prompt_length, but if max_new_tokens are set
- such that the context + min_rows don't fit, we can get a prompt which is too long.
- We can't truncate image embeddings, as it leads to broken generation, so remove the images instead and warn the user
- """
- encoded: List[PromptPart] = []
- for i, part in enumerate(parts):
- encoded.append(self._encode_single_text(part, i == 0 and state['add_bos_token']))
-
- # truncation:
- max_len = get_max_prompt_length(state)
- removed_images = 0
-
- # 1. remove entire text/image blocks
- while self._len_in_tokens_prompt_parts(encoded[1:]) > max_len:
- if encoded[0].is_image:
- removed_images += 1
- encoded = encoded[1:]
-
- # 2. check if the last prompt part doesn't need to get truncated
- if self._len_in_tokens_prompt_parts(encoded) > max_len:
- if encoded[0].is_image:
- # don't truncate image embeddings, just remove the image, otherwise generation will be broken
- removed_images += 1
- encoded = encoded[1:]
- elif len(encoded) > 1 and encoded[0].text.endswith(self.pipeline.image_start()):
- # see if we can keep image_start token
- len_image_start = len(encode(self.pipeline.image_start(), add_bos_token=state['add_bos_token'])[0])
- if self._len_in_tokens_prompt_parts(encoded[1:]) + len_image_start > max_len:
- # we can't -> remove this text, and the image
- encoded = encoded[2:]
- removed_images += 1
- else:
- # we can -> just truncate the text
- trunc_len = self._len_in_tokens_prompt_parts(encoded) - max_len
- encoded[0].input_ids = encoded[0].input_ids[trunc_len:]
- elif len(encoded) > 0:
- # only one text left, truncate it normally
- trunc_len = self._len_in_tokens_prompt_parts(encoded) - max_len
- encoded[0].input_ids = encoded[0].input_ids[trunc_len:]
-
- # notify user if we truncated an image
- if removed_images > 0:
- logger.warning(f"Multimodal: removed {removed_images} image(s) from prompt. Try decreasing max_new_tokens if generation is broken")
-
- return encoded
-
- def _embed(self, parts: List[PromptPart]) -> List[PromptPart]:
- # batch images
- image_indicies = [i for i, part in enumerate(parts) if part.is_image]
- embedded = self.pipeline.embed_images([parts[i].image for i in image_indicies])
- for i, embeds in zip(image_indicies, embedded):
- parts[i].embedding = embeds
- # embed text
- for (i, part) in enumerate(parts):
- if not part.is_image:
- parts[i].embedding = self.pipeline.embed_tokens(part.input_ids)
- return parts
-
- def _remove_old_images(self, parts: List[PromptPart], params: dict) -> List[PromptPart]:
- if params['add_all_images_to_prompt']:
- return parts
- already_added = False
- for i, part in reversed(list(enumerate(parts))):
- if part.is_image:
- if already_added:
- parts[i].embedding = self.pipeline.placeholder_embeddings()
- else:
- already_added = True
- return parts
-
- def forward(self, prompt: str, state: Any, params: dict):
- prompt_parts = self._split_prompt(prompt, True)
- prompt_parts = self._encode_text(state, prompt_parts)
- prompt_parts = self._embed(prompt_parts)
- prompt_parts = self._remove_old_images(prompt_parts, params)
- embeds = tuple(part.embedding for part in prompt_parts)
- ids = tuple(part.input_ids for part in prompt_parts)
- input_embeds = torch.cat(embeds, dim=0)
- input_ids = torch.cat(ids, dim=0)
- return prompt, input_ids, input_embeds, self._num_images(prompt_parts)
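
The embedder above expects images to be inlined into the prompt as base64 `<img>` tags, which `_split_prompt()` then separates from the surrounding text. A small sketch of how such a prompt is typically built; the helper name `add_image_to_prompt` is ours, not from the repository.

```python
# Hypothetical helper: inline a PIL image into a text prompt in the form the
# MultimodalEmbedder above expects to find it.
import base64
from io import BytesIO

from PIL import Image

def add_image_to_prompt(prompt: str, image: Image.Image) -> str:
    buffer = BytesIO()
    image.convert("RGB").save(buffer, format="JPEG")
    encoded = base64.b64encode(buffer.getvalue()).decode()
    return f'{prompt}<img src="data:image/jpeg;base64,{encoded}">'

prompt = add_image_to_prompt("What is shown in this picture?", Image.new("RGB", (64, 64)))
# MultimodalEmbedder._split_prompt() recovers and decodes the image from this tag.
```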
diff --git a/spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/__init__.py b/spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Arnx/MusicGenXvAKN/audiocraft/quantization/__init__.py b/spaces/Arnx/MusicGenXvAKN/audiocraft/quantization/__init__.py
deleted file mode 100644
index 836d6eb518978480c6b95d6f29ce4f84a9428793..0000000000000000000000000000000000000000
--- a/spaces/Arnx/MusicGenXvAKN/audiocraft/quantization/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from .vq import ResidualVectorQuantizer
-from .base import BaseQuantizer, DummyQuantizer, QuantizedResult
diff --git a/spaces/Arnx/MusicGenXvAKN/tests/models/test_musicgen.py b/spaces/Arnx/MusicGenXvAKN/tests/models/test_musicgen.py
deleted file mode 100644
index d43cf73763f6c690ab0b277227ac225b286fa143..0000000000000000000000000000000000000000
--- a/spaces/Arnx/MusicGenXvAKN/tests/models/test_musicgen.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import pytest
-import torch
-
-from audiocraft.models import MusicGen
-
-
-class TestSEANetModel:
- def get_musicgen(self):
- mg = MusicGen.get_pretrained(name='debug', device='cpu')
- mg.set_generation_params(duration=2.0, extend_stride=2.)
- return mg
-
- def test_base(self):
- mg = self.get_musicgen()
- assert mg.frame_rate == 25
- assert mg.sample_rate == 32000
- assert mg.audio_channels == 1
-
- def test_generate_unconditional(self):
- mg = self.get_musicgen()
- wav = mg.generate_unconditional(3)
- assert list(wav.shape) == [3, 1, 64000]
-
- def test_generate_continuation(self):
- mg = self.get_musicgen()
- prompt = torch.randn(3, 1, 32000)
- wav = mg.generate_continuation(prompt, 32000)
- assert list(wav.shape) == [3, 1, 64000]
-
- prompt = torch.randn(2, 1, 32000)
- wav = mg.generate_continuation(
- prompt, 32000, ['youpi', 'lapin dort'])
- assert list(wav.shape) == [2, 1, 64000]
-
- prompt = torch.randn(2, 1, 32000)
- with pytest.raises(AssertionError):
- wav = mg.generate_continuation(
- prompt, 32000, ['youpi', 'lapin dort', 'one too many'])
-
- def test_generate(self):
- mg = self.get_musicgen()
- wav = mg.generate(
- ['youpi', 'lapin dort'])
- assert list(wav.shape) == [2, 1, 64000]
-
- def test_generate_long(self):
- mg = self.get_musicgen()
- mg.max_duration = 3.
- mg.set_generation_params(duration=4., extend_stride=2.)
- wav = mg.generate(
- ['youpi', 'lapin dort'])
- assert list(wav.shape) == [2, 1, 32000 * 4]
diff --git a/spaces/Artrajz/vits-simple-api/bert_vits2/text/bert_handler.py b/spaces/Artrajz/vits-simple-api/bert_vits2/text/bert_handler.py
deleted file mode 100644
index fb5c79090966eda18c7e932e2ad0636452ac06ad..0000000000000000000000000000000000000000
--- a/spaces/Artrajz/vits-simple-api/bert_vits2/text/bert_handler.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import importlib
-
-
-class BertHandler:
- _bert_functions = {}
-
- BERT_IMPORT_MAP = {
- "zh": "bert_vits2.text.chinese_bert.get_bert_feature",
- "en": "bert_vits2.text.english_bert_mock.get_bert_feature",
- "ja": "bert_vits2.text.japanese_bert.get_bert_feature",
- }
-
- def __init__(self, languages):
- for lang in languages:
- if lang not in BertHandler._bert_functions:
- self.load_bert_function(lang)
-
- def load_bert_function(self, language):
- if language not in BertHandler.BERT_IMPORT_MAP:
- raise ValueError(f"Unsupported language: {language}")
-
- module_path, function_name = BertHandler.BERT_IMPORT_MAP[language].rsplit('.', 1)
- module = importlib.import_module(module_path, package=__package__)
- bert_function = getattr(module, function_name)
-
- BertHandler._bert_functions[language] = bert_function
-
- def get_bert(self, norm_text, word2ph, language):
- if language not in BertHandler._bert_functions:
- raise ValueError(f"BERT for {language} has not been initialized. Please initialize first.")
-
- bert_func = BertHandler._bert_functions[language]
- return bert_func(norm_text, word2ph)
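
`BertHandler` lazily imports one `get_bert_feature` backend per language from `BERT_IMPORT_MAP` and dispatches to it in `get_bert`. A hypothetical call site is sketched below; the sample text and `word2ph` values are placeholders, and the `bert_vits2` package plus its language-specific BERT checkpoints must be available locally.

```python
# Hypothetical usage of the BertHandler defined above.
from bert_vits2.text.bert_handler import BertHandler

handler = BertHandler(["zh", "ja"])  # imports only the Chinese and Japanese backends

# word2ph maps each normalized symbol to a phoneme count; the values here are placeholders.
features = handler.get_bert(norm_text="こんにちは", word2ph=[2, 2, 1], language="ja")
```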
diff --git a/spaces/Atualli/yoloxTeste/configs/yolox_m.py b/spaces/Atualli/yoloxTeste/configs/yolox_m.py
deleted file mode 100644
index 9666a31177b9cc1c94978f9867aaceac8ddebce2..0000000000000000000000000000000000000000
--- a/spaces/Atualli/yoloxTeste/configs/yolox_m.py
+++ /dev/null
@@ -1,15 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# Copyright (c) Megvii, Inc. and its affiliates.
-
-import os
-
-from yolox.exp import Exp as MyExp
-
-
-class Exp(MyExp):
- def __init__(self):
- super(Exp, self).__init__()
- self.depth = 0.67
- self.width = 0.75
- self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
diff --git a/spaces/Ayushnangia/Whispercpp_yt/README.md b/spaces/Ayushnangia/Whispercpp_yt/README.md
deleted file mode 100644
index 4d6aca0fe068683bd50677305d410a25820b1d54..0000000000000000000000000000000000000000
--- a/spaces/Ayushnangia/Whispercpp_yt/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Whispercpp Yt
-emoji: 🐠
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.40.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Banbri/zcvzcv/src/components/ui/separator.tsx b/spaces/Banbri/zcvzcv/src/components/ui/separator.tsx
deleted file mode 100644
index a6ed83ef827829cf42a7b27d1d5714b4473bd1c5..0000000000000000000000000000000000000000
--- a/spaces/Banbri/zcvzcv/src/components/ui/separator.tsx
+++ /dev/null
@@ -1,31 +0,0 @@
-"use client"
-
-import * as React from "react"
-import * as SeparatorPrimitive from "@radix-ui/react-separator"
-
-import { cn } from "@/lib/utils"
-
-const Separator = React.forwardRef<
- React.ElementRef<typeof SeparatorPrimitive.Root>,
- React.ComponentPropsWithoutRef<typeof SeparatorPrimitive.Root>
->(
- (
- { className, orientation = "horizontal", decorative = true, ...props },
- ref
- ) => (
- <SeparatorPrimitive.Root
- ref={ref}
- decorative={decorative}
- orientation={orientation}
- className={cn(
- "shrink-0 bg-border",
- orientation === "horizontal" ? "h-[1px] w-full" : "h-full w-[1px]",
- className
- )}
- {...props}
- />
- )
-)
-Separator.displayName = SeparatorPrimitive.Root.displayName
-
-export { Separator }
diff --git a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/nets_537227KB.py b/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/nets_537227KB.py
deleted file mode 100644
index a1bb530e006482704f234c2e739a695174142941..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/nets_537227KB.py
+++ /dev/null
@@ -1,123 +0,0 @@
-import torch
-import numpy as np
-from torch import nn
-import torch.nn.functional as F
-
-from . import layers_537238KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 64)
- self.stg1_high_band_net = BaseASPPNet(2, 64)
-
- self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(32, 64)
-
- self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(64, 128)
-
- self.out = nn.Conv2d(128, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(64, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(64, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
diff --git a/spaces/BernardoOlisan/vqganclip/CLIP/data/yfcc100m.md b/spaces/BernardoOlisan/vqganclip/CLIP/data/yfcc100m.md
deleted file mode 100644
index 575c54bc4bab3972878291c8d227a313c9fc766e..0000000000000000000000000000000000000000
--- a/spaces/BernardoOlisan/vqganclip/CLIP/data/yfcc100m.md
+++ /dev/null
@@ -1,14 +0,0 @@
-# The YFCC100M Subset
-
-In the paper, we performed a dataset ablation using a subset of the YFCC100M dataset and showed that the performance remained largely similar.
-
-The subset contains 14,829,396 images, about 15% of the full dataset, which have been filtered to only keep those with natural language titles and/or descriptions in English.
-
-We provide the list of (line number, photo identifier, photo hash) of each image contained in this subset. These correspond to the first three columns in the dataset's metadata TSV file.
-
-```
-wget https://openaipublic.azureedge.net/clip/data/yfcc100m_subset_data.tsv.bz2
-bunzip2 yfcc100m_subset_data.tsv.bz2
-```
-
-Use of the underlying media files is subject to the Creative Commons licenses chosen by their creators/uploaders. For more information about the YFCC100M dataset, visit [the official website](https://multimediacommons.wordpress.com/yfcc100m-core-dataset/).
\ No newline at end of file
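
Once decompressed, the subset list is a plain three-column TSV, so iterating over it is straightforward. A sketch (the file name comes from the wget command above; the column meaning from the note's own description):

```python
# Print the first few (line number, photo identifier, photo hash) rows.
import csv
import itertools

with open("yfcc100m_subset_data.tsv", newline="") as f:
    rows = csv.reader(f, delimiter="\t")
    for line_number, photo_id, photo_hash in itertools.islice(rows, 5):
        print(line_number, photo_id, photo_hash)
```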
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/locations/base.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/locations/base.py
deleted file mode 100644
index 3f9f896e632e929a63e9724ab80ecdfc9761b795..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/locations/base.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import functools
-import os
-import site
-import sys
-import sysconfig
-import typing
-
-from pip._internal.exceptions import InstallationError
-from pip._internal.utils import appdirs
-from pip._internal.utils.virtualenv import running_under_virtualenv
-
-# Application Directories
-USER_CACHE_DIR = appdirs.user_cache_dir("pip")
-
-# FIXME doesn't account for venv linked to global site-packages
-site_packages: str = sysconfig.get_path("purelib")
-
-
-def get_major_minor_version() -> str:
- """
- Return the major-minor version of the current Python as a string, e.g.
- "3.7" or "3.10".
- """
- return "{}.{}".format(*sys.version_info)
-
-
-def change_root(new_root: str, pathname: str) -> str:
- """Return 'pathname' with 'new_root' prepended.
-
- If 'pathname' is relative, this is equivalent to os.path.join(new_root, pathname).
- Otherwise, it requires making 'pathname' relative and then joining the
- two, which is tricky on DOS/Windows and Mac OS.
-
- This is borrowed from Python's standard library's distutils module.
- """
- if os.name == "posix":
- if not os.path.isabs(pathname):
- return os.path.join(new_root, pathname)
- else:
- return os.path.join(new_root, pathname[1:])
-
- elif os.name == "nt":
- (drive, path) = os.path.splitdrive(pathname)
- if path[0] == "\\":
- path = path[1:]
- return os.path.join(new_root, path)
-
- else:
- raise InstallationError(
- f"Unknown platform: {os.name}\n"
- "Can not change root path prefix on unknown platform."
- )
-
-
-def get_src_prefix() -> str:
- if running_under_virtualenv():
- src_prefix = os.path.join(sys.prefix, "src")
- else:
- # FIXME: keep src in cwd for now (it is not a temporary folder)
- try:
- src_prefix = os.path.join(os.getcwd(), "src")
- except OSError:
- # In case the current working directory has been renamed or deleted
- sys.exit("The folder you are executing pip from can no longer be found.")
-
- # under macOS + virtualenv sys.prefix is not properly resolved
- # it is something like /path/to/python/bin/..
- return os.path.abspath(src_prefix)
-
-
-try:
- # Use getusersitepackages if this is present, as it ensures that the
- # value is initialised properly.
- user_site: typing.Optional[str] = site.getusersitepackages()
-except AttributeError:
- user_site = site.USER_SITE
-
-
-@functools.lru_cache(maxsize=None)
-def is_osx_framework() -> bool:
- return bool(sysconfig.get_config_var("PYTHONFRAMEWORK"))
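
The `change_root` helper above re-roots both absolute and relative paths. The snippet below only demonstrates the documented POSIX behaviour; it re-implements the logic rather than importing pip internals.

```python
# Behaviour sketch of change_root() for POSIX paths, per the docstring above.
import os.path

def change_root_posix(new_root: str, pathname: str) -> str:
    if not os.path.isabs(pathname):
        return os.path.join(new_root, pathname)
    return os.path.join(new_root, pathname[1:])  # drop the leading "/" before joining

print(change_root_posix("/tmp/stage", "/usr/lib/python3/site-packages"))
# -> /tmp/stage/usr/lib/python3/site-packages
print(change_root_posix("/tmp/stage", "src/pkg"))
# -> /tmp/stage/src/pkg
```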
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/highlighter.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/highlighter.py
deleted file mode 100644
index c2646794a98578bdb735f5047dbc6b1d50b90230..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/highlighter.py
+++ /dev/null
@@ -1,232 +0,0 @@
-import re
-from abc import ABC, abstractmethod
-from typing import List, Union
-
-from .text import Span, Text
-
-
-def _combine_regex(*regexes: str) -> str:
- """Combine a number of regexes in to a single regex.
-
- Returns:
- str: New regex with all regexes ORed together.
- """
- return "|".join(regexes)
-
-
-class Highlighter(ABC):
- """Abstract base class for highlighters."""
-
- def __call__(self, text: Union[str, Text]) -> Text:
- """Highlight a str or Text instance.
-
- Args:
- text (Union[str, ~Text]): Text to highlight.
-
- Raises:
- TypeError: If not called with text or str.
-
- Returns:
- Text: A test instance with highlighting applied.
- """
- if isinstance(text, str):
- highlight_text = Text(text)
- elif isinstance(text, Text):
- highlight_text = text.copy()
- else:
- raise TypeError(f"str or Text instance required, not {text!r}")
- self.highlight(highlight_text)
- return highlight_text
-
- @abstractmethod
- def highlight(self, text: Text) -> None:
- """Apply highlighting in place to text.
-
- Args:
- text (~Text): A text object highlight.
- """
-
-
-class NullHighlighter(Highlighter):
- """A highlighter object that doesn't highlight.
-
- May be used to disable highlighting entirely.
-
- """
-
- def highlight(self, text: Text) -> None:
- """Nothing to do"""
-
-
-class RegexHighlighter(Highlighter):
- """Applies highlighting from a list of regular expressions."""
-
- highlights: List[str] = []
- base_style: str = ""
-
- def highlight(self, text: Text) -> None:
- """Highlight :class:`rich.text.Text` using regular expressions.
-
- Args:
- text (~Text): Text to highlighted.
-
- """
-
- highlight_regex = text.highlight_regex
- for re_highlight in self.highlights:
- highlight_regex(re_highlight, style_prefix=self.base_style)
-
-
-class ReprHighlighter(RegexHighlighter):
- """Highlights the text typically produced from ``__repr__`` methods."""
-
- base_style = "repr."
- highlights = [
- r"(?P<tag_start><)(?P<tag_name>[-\w.:|]*)(?P<tag_contents>[\w\W]*)(?P<tag_end>>)",
- r'(?P<attrib_name>[\w_]{1,50})=(?P<attrib_value>"?[\w_]+"?)?',
- r"(?P<brace>[][{}()])",
- _combine_regex(
- r"(?P<ipv4>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})",
- r"(?P<ipv6>([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4})",
- r"(?P<eui64>(?:[0-9A-Fa-f]{1,2}-){7}[0-9A-Fa-f]{1,2}|(?:[0-9A-Fa-f]{1,2}:){7}[0-9A-Fa-f]{1,2}|(?:[0-9A-Fa-f]{4}\.){3}[0-9A-Fa-f]{4})",
- r"(?P<eui48>(?:[0-9A-Fa-f]{1,2}-){5}[0-9A-Fa-f]{1,2}|(?:[0-9A-Fa-f]{1,2}:){5}[0-9A-Fa-f]{1,2}|(?:[0-9A-Fa-f]{4}\.){2}[0-9A-Fa-f]{4})",
- r"(?P<uuid>[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})",
- r"(?P<call>[\w.]*?)\(",
- r"\b(?P<bool_true>True)\b|\b(?P<bool_false>False)\b|\b(?P<none>None)\b",
- r"(?P<ellipsis>\.\.\.)",
- r"(?P<number_complex>(?<!\w)(?:\-?[0-9]+\.?[0-9]*(?:e[-+]?\d+?)?)(?:[-+](?:[0-9]+\.?[0-9]*(?:e[-+]?\d+)?))?j)",
- r"(?P<number>(?<!\w)\-?[0-9]+\.?[0-9]*(e[-+]?\d+?)?\b|0x[0-9a-fA-F]*)",
- r"(?P<path>\B(/[-\w._+]+)*\/)(?P<filename>[-\w._+]*)?",
- r"(?<![\\\w])(?P<str>b?'''.*?(?<!\\)'''|b?'.*?(?<!\\)'|b?\"\"\".*?(?<!\\)\"\"\"|b?\".*?(?<!\\)\")",
- r"(?P<url>(file|https|http|ws|wss)://[-0-9a-zA-Z$_+!`(),.?/;:&=%#]*)",
- ),
- ]
-
-
-class JSONHighlighter(RegexHighlighter):
- """Highlights JSON"""
-
- # Captures the start and end of JSON strings, handling escaped quotes
- JSON_STR = r"(?<![\\\w])(?P<str>b?\".*?(?<!\\)\")"
- JSON_WHITESPACE = {" ", "\n", "\r", "\t"}
-
- base_style = "json."
- highlights = [
- _combine_regex(
- r"(?P<brace>[\{\[\(\)\]\}])",
- r"\b(?P<bool_true>true)\b|\b(?P<bool_false>false)\b|\b(?P<null>null)\b",
- r"(?P<number>(?<!\w)\-?[0-9]+\.?[0-9]*(e[-+]?\d+?)?\b|0x[0-9a-fA-F]*)",
- JSON_STR,
- ),
- ]
-
- def highlight(self, text: Text) -> None:
- super().highlight(text)
-
- # Additional work to handle highlighting JSON keys
- plain = text.plain
- append = text.spans.append
- whitespace = self.JSON_WHITESPACE
- for match in re.finditer(self.JSON_STR, plain):
- start, end = match.span()
- cursor = end
- while cursor < len(plain):
- char = plain[cursor]
- cursor += 1
- if char == ":":
- append(Span(start, end, "json.key"))
- elif char in whitespace:
- continue
- break
-
-
-class ISO8601Highlighter(RegexHighlighter):
- """Highlights the ISO8601 date time strings.
- Regex reference: https://www.oreilly.com/library/view/regular-expressions-cookbook/9781449327453/ch04s07.html
- """
-
- base_style = "iso8601."
- highlights = [
- #
- # Dates
- #
- # Calendar month (e.g. 2008-08). The hyphen is required
- r"^(?P<year>[0-9]{4})-(?P<month>1[0-2]|0[1-9])$",
- # Calendar date w/o hyphens (e.g. 20080830)
- r"^(?P<date>(?P<year>[0-9]{4})(?P<month>1[0-2]|0[1-9])(?P<day>3[01]|0[1-9]|[12][0-9]))$",
- # Ordinal date (e.g. 2008-243). The hyphen is optional
- r"^(?P<date>(?P<year>[0-9]{4})-?(?P<day>36[0-6]|3[0-5][0-9]|[12][0-9]{2}|0[1-9][0-9]|00[1-9]))$",
- #
- # Weeks
- #
- # Week of the year (e.g., 2008-W35). The hyphen is optional
- r"^(?P<date>(?P<year>[0-9]{4})-?W(?P<week>5[0-3]|[1-4][0-9]|0[1-9]))$",
- # Week date (e.g., 2008-W35-6). The hyphens are optional
- r"^(?P<date>(?P<year>[0-9]{4})-?W(?P<week>5[0-3]|[1-4][0-9]|0[1-9])-?(?P<day>[1-7]))$",
- #
- # Times
- #
- # Hours and minutes (e.g., 17:21). The colon is optional
- r"^(?P<time>(?P<hour>2[0-3]|[01][0-9]):?(?P<minute>[0-5][0-9]))$",
- # Hours, minutes, and seconds w/o colons (e.g., 172159)
- r"^(?P<time>(?P<hour>2[0-3]|[01][0-9])(?P<minute>[0-5][0-9])(?P<second>[0-5][0-9]))$",
- # Time zone designator (e.g., Z, +07 or +07:00). The colons and the minutes are optional
- r"^(?P<timezone>(Z|[+-](?:2[0-3]|[01][0-9])(?::?(?:[0-5][0-9]))?))$",
- # Hours, minutes, and seconds with time zone designator (e.g., 17:21:59+07:00).
- # All the colons are optional. The minutes in the time zone designator are also optional
- r"^(?P<time>(?P<hour>2[0-3]|[01][0-9])(?P<minute>[0-5][0-9])(?P<second>[0-5][0-9]))(?P<timezone>Z|[+-](?:2[0-3]|[01][0-9])(?::?(?:[0-5][0-9]))?)$",
- #
- # Date and Time
- #
- # Calendar date with hours, minutes, and seconds (e.g., 2008-08-30 17:21:59 or 20080830 172159).
- # A space is required between the date and the time. The hyphens and colons are optional.
- # This regex matches dates and times that specify some hyphens or colons but omit others.
- # This does not follow ISO 8601
- r"^(?P<date>(?P<year>[0-9]{4})(?P<hyphen>-)?(?P<month>1[0-2]|0[1-9])(?(hyphen)-)(?P<day>3[01]|0[1-9]|[12][0-9])) (?P<time>(?P<hour>2[0-3]|[01][0-9])(?(hyphen):)(?P<minute>[0-5][0-9])(?(hyphen):)(?P<second>[0-5][0-9]))$",
- #
- # XML Schema dates and times
- #
- # Date, with optional time zone (e.g., 2008-08-30 or 2008-08-30+07:00).
- # Hyphens are required. This is the XML Schema 'date' type
- r"^(?P<date>(?P<year>-?(?:[1-9][0-9]*)?[0-9]{4})-(?P<month>1[0-2]|0[1-9])-(?P<day>3[01]|0[1-9]|[12][0-9]))(?P<timezone>Z|[+-](?:2[0-3]|[01][0-9]):[0-5][0-9])?$",
- # Time, with optional fractional seconds and time zone (e.g., 01:45:36 or 01:45:36.123+07:00).
- # There is no limit on the number of digits for the fractional seconds. This is the XML Schema 'time' type
- r"^(?P<time>(?P<hour>2[0-3]|[01][0-9]):(?P<minute>[0-5][0-9]):(?P<second>[0-5][0-9])(?P<frac>\.[0-9]+)?)(?P<timezone>Z|[+-](?:2[0-3]|[01][0-9]):[0-5][0-9])?$",
- # Date and time, with optional fractional seconds and time zone (e.g., 2008-08-30T01:45:36 or 2008-08-30T01:45:36.123Z).
- # This is the XML Schema 'dateTime' type
- r"^(?P<date>(?P<year>-?(?:[1-9][0-9]*)?[0-9]{4})-(?P<month>1[0-2]|0[1-9])-(?P<day>3[01]|0[1-9]|[12][0-9]))T(?P<time>(?P<hour>2[0-3]|[01][0-9]):(?P<minute>[0-5][0-9]):(?P<second>[0-5][0-9])(?P<frac>\.[0-9]+)?)(?P<timezone>Z|[+-](?:2[0-3]|[01][0-9]):[0-5][0-9])?$",
- ]
-
-
-if __name__ == "__main__": # pragma: no cover
- from .console import Console
-
- console = Console()
- console.print("[bold green]hello world![/bold green]")
- console.print("'[bold green]hello world![/bold green]'")
-
- console.print(" /foo")
- console.print("/foo/")
- console.print("/foo/bar")
- console.print("foo/bar/baz")
-
- console.print("/foo/bar/baz?foo=bar+egg&egg=baz")
- console.print("/foo/bar/baz/")
- console.print("/foo/bar/baz/egg")
- console.print("/foo/bar/baz/egg.py")
- console.print("/foo/bar/baz/egg.py word")
- console.print(" /foo/bar/baz/egg.py word")
- console.print("foo /foo/bar/baz/egg.py word")
- console.print("foo /foo/bar/ba._++z/egg+.py word")
- console.print("https://example.org?foo=bar#header")
-
- console.print(1234567.34)
- console.print(1 / 2)
- console.print(-1 / 123123123123)
-
- console.print(
- "127.0.1.1 bar 192.168.1.4 2001:0db8:85a3:0000:0000:8a2e:0370:7334 foo"
- )
- import json
-
- console.print_json(json.dumps(obj={"name": "apple", "count": 1}), indent=None)
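
The `RegexHighlighter` base class above applies a list of regexes with named capture groups under a common `base_style` prefix. A minimal custom highlighter in that style, using the public `rich` package rather than pip's vendored copy; the `example.email` style name is arbitrary.

```python
# A small custom RegexHighlighter in the style documented above.
from rich.console import Console
from rich.highlighter import RegexHighlighter
from rich.theme import Theme

class EmailHighlighter(RegexHighlighter):
    """Highlight things that look like e-mail addresses."""
    base_style = "example."
    highlights = [r"(?P<email>[\w.+-]+@[\w-]+\.[\w.-]+)"]

console = Console(theme=Theme({"example.email": "bold magenta"}))
# Calling the highlighter returns a Text object with the matched spans styled.
console.print(EmailHighlighter()("Questions? Write to support@example.org"))
```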
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install.py
deleted file mode 100644
index a38cddcda5380aac99bade87e2cdf95d4c99348a..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install.py
+++ /dev/null
@@ -1,814 +0,0 @@
-"""distutils.command.install
-
-Implements the Distutils 'install' command."""
-
-import sys
-import os
-import contextlib
-import sysconfig
-import itertools
-
-from distutils import log
-from distutils.core import Command
-from distutils.debug import DEBUG
-from distutils.sysconfig import get_config_vars
-from distutils.file_util import write_file
-from distutils.util import convert_path, subst_vars, change_root
-from distutils.util import get_platform
-from distutils.errors import DistutilsOptionError, DistutilsPlatformError
-from . import _framework_compat as fw
-from .. import _collections
-
-from site import USER_BASE
-from site import USER_SITE
-
-HAS_USER_SITE = True
-
-WINDOWS_SCHEME = {
- 'purelib': '{base}/Lib/site-packages',
- 'platlib': '{base}/Lib/site-packages',
- 'headers': '{base}/Include/{dist_name}',
- 'scripts': '{base}/Scripts',
- 'data': '{base}',
-}
-
-INSTALL_SCHEMES = {
- 'posix_prefix': {
- 'purelib': '{base}/lib/{implementation_lower}{py_version_short}/site-packages',
- 'platlib': '{platbase}/{platlibdir}/{implementation_lower}'
- '{py_version_short}/site-packages',
- 'headers': '{base}/include/{implementation_lower}'
- '{py_version_short}{abiflags}/{dist_name}',
- 'scripts': '{base}/bin',
- 'data': '{base}',
- },
- 'posix_home': {
- 'purelib': '{base}/lib/{implementation_lower}',
- 'platlib': '{base}/{platlibdir}/{implementation_lower}',
- 'headers': '{base}/include/{implementation_lower}/{dist_name}',
- 'scripts': '{base}/bin',
- 'data': '{base}',
- },
- 'nt': WINDOWS_SCHEME,
- 'pypy': {
- 'purelib': '{base}/site-packages',
- 'platlib': '{base}/site-packages',
- 'headers': '{base}/include/{dist_name}',
- 'scripts': '{base}/bin',
- 'data': '{base}',
- },
- 'pypy_nt': {
- 'purelib': '{base}/site-packages',
- 'platlib': '{base}/site-packages',
- 'headers': '{base}/include/{dist_name}',
- 'scripts': '{base}/Scripts',
- 'data': '{base}',
- },
-}
-
-# user site schemes
-if HAS_USER_SITE:
- INSTALL_SCHEMES['nt_user'] = {
- 'purelib': '{usersite}',
- 'platlib': '{usersite}',
- 'headers': '{userbase}/{implementation}{py_version_nodot_plat}'
- '/Include/{dist_name}',
- 'scripts': '{userbase}/{implementation}{py_version_nodot_plat}/Scripts',
- 'data': '{userbase}',
- }
-
- INSTALL_SCHEMES['posix_user'] = {
- 'purelib': '{usersite}',
- 'platlib': '{usersite}',
- 'headers': '{userbase}/include/{implementation_lower}'
- '{py_version_short}{abiflags}/{dist_name}',
- 'scripts': '{userbase}/bin',
- 'data': '{userbase}',
- }
-
-
-INSTALL_SCHEMES.update(fw.schemes)
-
-
-# The keys to an installation scheme; if any new types of files are to be
-# installed, be sure to add an entry to every installation scheme above,
-# and to SCHEME_KEYS here.
-SCHEME_KEYS = ('purelib', 'platlib', 'headers', 'scripts', 'data')
-
-
-def _load_sysconfig_schemes():
- with contextlib.suppress(AttributeError):
- return {
- scheme: sysconfig.get_paths(scheme, expand=False)
- for scheme in sysconfig.get_scheme_names()
- }
-
-
-def _load_schemes():
- """
- Extend default schemes with schemes from sysconfig.
- """
-
- sysconfig_schemes = _load_sysconfig_schemes() or {}
-
- return {
- scheme: {
- **INSTALL_SCHEMES.get(scheme, {}),
- **sysconfig_schemes.get(scheme, {}),
- }
- for scheme in set(itertools.chain(INSTALL_SCHEMES, sysconfig_schemes))
- }
-
-
-def _get_implementation():
- if hasattr(sys, 'pypy_version_info'):
- return 'PyPy'
- else:
- return 'Python'
-
-
-def _select_scheme(ob, name):
- scheme = _inject_headers(name, _load_scheme(_resolve_scheme(name)))
- vars(ob).update(_remove_set(ob, _scheme_attrs(scheme)))
-
-
-def _remove_set(ob, attrs):
- """
- Include only attrs that are None in ob.
- """
- return {key: value for key, value in attrs.items() if getattr(ob, key) is None}
-
-
-def _resolve_scheme(name):
- os_name, sep, key = name.partition('_')
- try:
- resolved = sysconfig.get_preferred_scheme(key)
- except Exception:
- resolved = fw.scheme(_pypy_hack(name))
- return resolved
-
-
-def _load_scheme(name):
- return _load_schemes()[name]
-
-
-def _inject_headers(name, scheme):
- """
- Given a scheme name and the resolved scheme,
- if the scheme does not include headers, resolve
- the fallback scheme for the name and use headers
- from it. pypa/distutils#88
- """
- # Bypass the preferred scheme, which may not
- # have defined headers.
- fallback = _load_scheme(_pypy_hack(name))
- scheme.setdefault('headers', fallback['headers'])
- return scheme
-
-
-def _scheme_attrs(scheme):
- """Resolve install directories by applying the install schemes."""
- return {f'install_{key}': scheme[key] for key in SCHEME_KEYS}
-
-
-def _pypy_hack(name):
- PY37 = sys.version_info < (3, 8)
- old_pypy = hasattr(sys, 'pypy_version_info') and PY37
- prefix = not name.endswith(('_user', '_home'))
- pypy_name = 'pypy' + '_nt' * (os.name == 'nt')
- return pypy_name if old_pypy and prefix else name
-
-
-class install(Command):
-
- description = "install everything from build directory"
-
- user_options = [
- # Select installation scheme and set base director(y|ies)
- ('prefix=', None, "installation prefix"),
- ('exec-prefix=', None, "(Unix only) prefix for platform-specific files"),
- ('home=', None, "(Unix only) home directory to install under"),
- # Or, just set the base director(y|ies)
- (
- 'install-base=',
- None,
- "base installation directory (instead of --prefix or --home)",
- ),
- (
- 'install-platbase=',
- None,
- "base installation directory for platform-specific files "
- + "(instead of --exec-prefix or --home)",
- ),
- ('root=', None, "install everything relative to this alternate root directory"),
- # Or, explicitly set the installation scheme
- (
- 'install-purelib=',
- None,
- "installation directory for pure Python module distributions",
- ),
- (
- 'install-platlib=',
- None,
- "installation directory for non-pure module distributions",
- ),
- (
- 'install-lib=',
- None,
- "installation directory for all module distributions "
- + "(overrides --install-purelib and --install-platlib)",
- ),
- ('install-headers=', None, "installation directory for C/C++ headers"),
- ('install-scripts=', None, "installation directory for Python scripts"),
- ('install-data=', None, "installation directory for data files"),
- # Byte-compilation options -- see install_lib.py for details, as
- # these are duplicated from there (but only install_lib does
- # anything with them).
- ('compile', 'c', "compile .py to .pyc [default]"),
- ('no-compile', None, "don't compile .py files"),
- (
- 'optimize=',
- 'O',
- "also compile with optimization: -O1 for \"python -O\", "
- "-O2 for \"python -OO\", and -O0 to disable [default: -O0]",
- ),
- # Miscellaneous control options
- ('force', 'f', "force installation (overwrite any existing files)"),
- ('skip-build', None, "skip rebuilding everything (for testing/debugging)"),
- # Where to install documentation (eventually!)
- # ('doc-format=', None, "format of documentation to generate"),
- # ('install-man=', None, "directory for Unix man pages"),
- # ('install-html=', None, "directory for HTML documentation"),
- # ('install-info=', None, "directory for GNU info files"),
- ('record=', None, "filename in which to record list of installed files"),
- ]
-
- boolean_options = ['compile', 'force', 'skip-build']
-
- if HAS_USER_SITE:
- user_options.append(
- ('user', None, "install in user site-package '%s'" % USER_SITE)
- )
- boolean_options.append('user')
-
- negative_opt = {'no-compile': 'compile'}
-
- def initialize_options(self):
- """Initializes options."""
- # High-level options: these select both an installation base
- # and scheme.
- self.prefix = None
- self.exec_prefix = None
- self.home = None
- self.user = 0
-
- # These select only the installation base; it's up to the user to
- # specify the installation scheme (currently, that means supplying
- # the --install-{platlib,purelib,scripts,data} options).
- self.install_base = None
- self.install_platbase = None
- self.root = None
-
- # These options are the actual installation directories; if not
- # supplied by the user, they are filled in using the installation
- # scheme implied by prefix/exec-prefix/home and the contents of
- # that installation scheme.
- self.install_purelib = None # for pure module distributions
- self.install_platlib = None # non-pure (dists w/ extensions)
- self.install_headers = None # for C/C++ headers
- self.install_lib = None # set to either purelib or platlib
- self.install_scripts = None
- self.install_data = None
- self.install_userbase = USER_BASE
- self.install_usersite = USER_SITE
-
- self.compile = None
- self.optimize = None
-
- # Deprecated
- # These two are for putting non-packagized distributions into their
- # own directory and creating a .pth file if it makes sense.
- # 'extra_path' comes from the setup file; 'install_path_file' can
- # be turned off if it makes no sense to install a .pth file. (But
- # better to install it uselessly than to guess wrong and not
- # install it when it's necessary and would be used!) Currently,
- # 'install_path_file' is always true unless some outsider meddles
- # with it.
- self.extra_path = None
- self.install_path_file = 1
-
- # 'force' forces installation, even if target files are not
- # out-of-date. 'skip_build' skips running the "build" command,
- # handy if you know it's not necessary. 'warn_dir' (which is *not*
- # a user option, it's just there so the bdist_* commands can turn
- # it off) determines whether we warn about installing to a
- # directory not in sys.path.
- self.force = 0
- self.skip_build = 0
- self.warn_dir = 1
-
- # These are only here as a conduit from the 'build' command to the
- # 'install_*' commands that do the real work. ('build_base' isn't
- # actually used anywhere, but it might be useful in future.) They
- # are not user options, because if the user told the install
- # command where the build directory is, that wouldn't affect the
- # build command.
- self.build_base = None
- self.build_lib = None
-
- # Not defined yet because we don't know anything about
- # documentation yet.
- # self.install_man = None
- # self.install_html = None
- # self.install_info = None
-
- self.record = None
-
- # -- Option finalizing methods -------------------------------------
- # (This is rather more involved than for most commands,
- # because this is where the policy for installing third-
- # party Python modules on various platforms given a wide
- # array of user input is decided. Yes, it's quite complex!)
-
- def finalize_options(self): # noqa: C901
- """Finalizes options."""
- # This method (and its helpers, like 'finalize_unix()',
- # 'finalize_other()', and 'select_scheme()') is where the default
- # installation directories for modules, extension modules, and
- # anything else we care to install from a Python module
- # distribution are determined. Thus, this code makes a pretty important policy
- # statement about how third-party stuff is added to a Python
- # installation! Note that the actual work of installation is done
- # by the relatively simple 'install_*' commands; they just take
- # their orders from the installation directory options determined
- # here.
-
- # Check for errors/inconsistencies in the options; first, stuff
- # that's wrong on any platform.
-
- if (self.prefix or self.exec_prefix or self.home) and (
- self.install_base or self.install_platbase
- ):
- raise DistutilsOptionError(
- "must supply either prefix/exec-prefix/home or "
- + "install-base/install-platbase -- not both"
- )
-
- if self.home and (self.prefix or self.exec_prefix):
- raise DistutilsOptionError(
- "must supply either home or prefix/exec-prefix -- not both"
- )
-
- if self.user and (
- self.prefix
- or self.exec_prefix
- or self.home
- or self.install_base
- or self.install_platbase
- ):
- raise DistutilsOptionError(
- "can't combine user with prefix, "
- "exec_prefix/home, or install_(plat)base"
- )
-
- # Next, stuff that's wrong (or dubious) only on certain platforms.
- if os.name != "posix":
- if self.exec_prefix:
- self.warn("exec-prefix option ignored on this platform")
- self.exec_prefix = None
-
- # Now the interesting logic -- so interesting that we farm it out
- # to other methods. The goal of these methods is to set the final
- # values for the install_{lib,scripts,data,...} options, using as
- # input a heady brew of prefix, exec_prefix, home, install_base,
- # install_platbase, user-supplied versions of
- # install_{purelib,platlib,lib,scripts,data,...}, and the
- # install schemes. Phew!
-
- self.dump_dirs("pre-finalize_{unix,other}")
-
- if os.name == 'posix':
- self.finalize_unix()
- else:
- self.finalize_other()
-
- self.dump_dirs("post-finalize_{unix,other}()")
-
- # Expand configuration variables, tilde, etc. in self.install_base
- # and self.install_platbase -- that way, we can use $base or
- # $platbase in the other installation directories and not worry
- # about needing recursive variable expansion (shudder).
-
- py_version = sys.version.split()[0]
- (prefix, exec_prefix) = get_config_vars('prefix', 'exec_prefix')
- try:
- abiflags = sys.abiflags
- except AttributeError:
- # sys.abiflags may not be defined on all platforms.
- abiflags = ''
- local_vars = {
- 'dist_name': self.distribution.get_name(),
- 'dist_version': self.distribution.get_version(),
- 'dist_fullname': self.distribution.get_fullname(),
- 'py_version': py_version,
- 'py_version_short': '%d.%d' % sys.version_info[:2],
- 'py_version_nodot': '%d%d' % sys.version_info[:2],
- 'sys_prefix': prefix,
- 'prefix': prefix,
- 'sys_exec_prefix': exec_prefix,
- 'exec_prefix': exec_prefix,
- 'abiflags': abiflags,
- 'platlibdir': getattr(sys, 'platlibdir', 'lib'),
- 'implementation_lower': _get_implementation().lower(),
- 'implementation': _get_implementation(),
- }
-
- # vars for compatibility on older Pythons
- compat_vars = dict(
- # Python 3.9 and earlier
- py_version_nodot_plat=getattr(sys, 'winver', '').replace('.', ''),
- )
-
- if HAS_USER_SITE:
- local_vars['userbase'] = self.install_userbase
- local_vars['usersite'] = self.install_usersite
-
- self.config_vars = _collections.DictStack(
- [fw.vars(), compat_vars, sysconfig.get_config_vars(), local_vars]
- )
-
- self.expand_basedirs()
-
- self.dump_dirs("post-expand_basedirs()")
-
- # Now define config vars for the base directories so we can expand
- # everything else.
- local_vars['base'] = self.install_base
- local_vars['platbase'] = self.install_platbase
-
- if DEBUG:
- from pprint import pprint
-
- print("config vars:")
- pprint(dict(self.config_vars))
-
- # Expand "~" and configuration variables in the installation
- # directories.
- self.expand_dirs()
-
- self.dump_dirs("post-expand_dirs()")
-
- # Create directories in the home dir:
- if self.user:
- self.create_home_path()
-
- # Pick the actual directory to install all modules to: either
- # install_purelib or install_platlib, depending on whether this
- # module distribution is pure or not. Of course, if the user
- # already specified install_lib, use their selection.
- if self.install_lib is None:
- if self.distribution.has_ext_modules(): # has extensions: non-pure
- self.install_lib = self.install_platlib
- else:
- self.install_lib = self.install_purelib
-
- # Convert directories from Unix /-separated syntax to the local
- # convention.
- self.convert_paths(
- 'lib',
- 'purelib',
- 'platlib',
- 'scripts',
- 'data',
- 'headers',
- 'userbase',
- 'usersite',
- )
-
- # Deprecated
- # Well, we're not actually fully completely finalized yet: we still
- # have to deal with 'extra_path', which is the hack for allowing
- # non-packagized module distributions (hello, Numerical Python!) to
- # get their own directories.
- self.handle_extra_path()
- self.install_libbase = self.install_lib # needed for .pth file
- self.install_lib = os.path.join(self.install_lib, self.extra_dirs)
-
- # If a new root directory was supplied, make all the installation
- # dirs relative to it.
- if self.root is not None:
- self.change_roots(
- 'libbase', 'lib', 'purelib', 'platlib', 'scripts', 'data', 'headers'
- )
-
- self.dump_dirs("after prepending root")
-
- # Find out the build directories, ie. where to install from.
- self.set_undefined_options(
- 'build', ('build_base', 'build_base'), ('build_lib', 'build_lib')
- )
-
- # Punt on doc directories for now -- after all, we're punting on
- # documentation completely!
-
- def dump_dirs(self, msg):
- """Dumps the list of user options."""
- if not DEBUG:
- return
- from distutils.fancy_getopt import longopt_xlate
-
- log.debug(msg + ":")
- for opt in self.user_options:
- opt_name = opt[0]
- if opt_name[-1] == "=":
- opt_name = opt_name[0:-1]
- if opt_name in self.negative_opt:
- opt_name = self.negative_opt[opt_name]
- opt_name = opt_name.translate(longopt_xlate)
- val = not getattr(self, opt_name)
- else:
- opt_name = opt_name.translate(longopt_xlate)
- val = getattr(self, opt_name)
- log.debug(" %s: %s", opt_name, val)
-
- def finalize_unix(self):
- """Finalizes options for posix platforms."""
- if self.install_base is not None or self.install_platbase is not None:
- incomplete_scheme = (
- (
- self.install_lib is None
- and self.install_purelib is None
- and self.install_platlib is None
- )
- or self.install_headers is None
- or self.install_scripts is None
- or self.install_data is None
- )
- if incomplete_scheme:
- raise DistutilsOptionError(
- "install-base or install-platbase supplied, but "
- "installation scheme is incomplete"
- )
- return
-
- if self.user:
- if self.install_userbase is None:
- raise DistutilsPlatformError("User base directory is not specified")
- self.install_base = self.install_platbase = self.install_userbase
- self.select_scheme("posix_user")
- elif self.home is not None:
- self.install_base = self.install_platbase = self.home
- self.select_scheme("posix_home")
- else:
- if self.prefix is None:
- if self.exec_prefix is not None:
- raise DistutilsOptionError(
- "must not supply exec-prefix without prefix"
- )
-
- # Allow Fedora to add components to the prefix
- _prefix_addition = getattr(sysconfig, '_prefix_addition', "")
-
- self.prefix = os.path.normpath(sys.prefix) + _prefix_addition
- self.exec_prefix = os.path.normpath(sys.exec_prefix) + _prefix_addition
-
- else:
- if self.exec_prefix is None:
- self.exec_prefix = self.prefix
-
- self.install_base = self.prefix
- self.install_platbase = self.exec_prefix
- self.select_scheme("posix_prefix")
-
- def finalize_other(self):
- """Finalizes options for non-posix platforms"""
- if self.user:
- if self.install_userbase is None:
- raise DistutilsPlatformError("User base directory is not specified")
- self.install_base = self.install_platbase = self.install_userbase
- self.select_scheme(os.name + "_user")
- elif self.home is not None:
- self.install_base = self.install_platbase = self.home
- self.select_scheme("posix_home")
- else:
- if self.prefix is None:
- self.prefix = os.path.normpath(sys.prefix)
-
- self.install_base = self.install_platbase = self.prefix
- try:
- self.select_scheme(os.name)
- except KeyError:
- raise DistutilsPlatformError(
- "I don't know how to install stuff on '%s'" % os.name
- )
-
- def select_scheme(self, name):
- _select_scheme(self, name)
-
- def _expand_attrs(self, attrs):
- for attr in attrs:
- val = getattr(self, attr)
- if val is not None:
- if os.name == 'posix' or os.name == 'nt':
- val = os.path.expanduser(val)
- val = subst_vars(val, self.config_vars)
- setattr(self, attr, val)
-
- def expand_basedirs(self):
- """Calls `os.path.expanduser` on install_base, install_platbase and
- root."""
- self._expand_attrs(['install_base', 'install_platbase', 'root'])
-
- def expand_dirs(self):
- """Calls `os.path.expanduser` on install dirs."""
- self._expand_attrs(
- [
- 'install_purelib',
- 'install_platlib',
- 'install_lib',
- 'install_headers',
- 'install_scripts',
- 'install_data',
- ]
- )
-
- def convert_paths(self, *names):
- """Call `convert_path` over `names`."""
- for name in names:
- attr = "install_" + name
- setattr(self, attr, convert_path(getattr(self, attr)))
-
- def handle_extra_path(self):
- """Set `path_file` and `extra_dirs` using `extra_path`."""
- if self.extra_path is None:
- self.extra_path = self.distribution.extra_path
-
- if self.extra_path is not None:
- log.warn(
- "Distribution option extra_path is deprecated. "
- "See issue27919 for details."
- )
- if isinstance(self.extra_path, str):
- self.extra_path = self.extra_path.split(',')
-
- if len(self.extra_path) == 1:
- path_file = extra_dirs = self.extra_path[0]
- elif len(self.extra_path) == 2:
- path_file, extra_dirs = self.extra_path
- else:
- raise DistutilsOptionError(
- "'extra_path' option must be a list, tuple, or "
- "comma-separated string with 1 or 2 elements"
- )
-
- # convert to local form in case Unix notation is used (as it
- # should be in setup scripts)
- extra_dirs = convert_path(extra_dirs)
- else:
- path_file = None
- extra_dirs = ''
-
- # XXX should we warn if path_file and not extra_dirs? (in which
- # case the path file would be harmless but pointless)
- self.path_file = path_file
- self.extra_dirs = extra_dirs
-
- def change_roots(self, *names):
- """Change the install directories pointed by name using root."""
- for name in names:
- attr = "install_" + name
- setattr(self, attr, change_root(self.root, getattr(self, attr)))
-
- def create_home_path(self):
- """Create directories under ~."""
- if not self.user:
- return
- home = convert_path(os.path.expanduser("~"))
- for name, path in self.config_vars.items():
- if str(path).startswith(home) and not os.path.isdir(path):
- self.debug_print("os.makedirs('%s', 0o700)" % path)
- os.makedirs(path, 0o700)
-
- # -- Command execution methods -------------------------------------
-
- def run(self):
- """Runs the command."""
- # Obviously have to build before we can install
- if not self.skip_build:
- self.run_command('build')
- # If we built for any other platform, we can't install.
- build_plat = self.distribution.get_command_obj('build').plat_name
- # check warn_dir - it is a clue that the 'install' is happening
- # internally, and not to sys.path, so we don't check the platform
- # matches what we are running.
- if self.warn_dir and build_plat != get_platform():
- raise DistutilsPlatformError("Can't install when " "cross-compiling")
-
- # Run all sub-commands (at least those that need to be run)
- for cmd_name in self.get_sub_commands():
- self.run_command(cmd_name)
-
- if self.path_file:
- self.create_path_file()
-
- # write list of installed files, if requested.
- if self.record:
- outputs = self.get_outputs()
- if self.root: # strip any package prefix
- root_len = len(self.root)
- for counter in range(len(outputs)):
- outputs[counter] = outputs[counter][root_len:]
- self.execute(
- write_file,
- (self.record, outputs),
- "writing list of installed files to '%s'" % self.record,
- )
-
- sys_path = map(os.path.normpath, sys.path)
- sys_path = map(os.path.normcase, sys_path)
- install_lib = os.path.normcase(os.path.normpath(self.install_lib))
- if (
- self.warn_dir
- and not (self.path_file and self.install_path_file)
- and install_lib not in sys_path
- ):
- log.debug(
- (
- "modules installed to '%s', which is not in "
- "Python's module search path (sys.path) -- "
- "you'll have to change the search path yourself"
- ),
- self.install_lib,
- )
-
- def create_path_file(self):
- """Creates the .pth file"""
- filename = os.path.join(self.install_libbase, self.path_file + ".pth")
- if self.install_path_file:
- self.execute(
- write_file, (filename, [self.extra_dirs]), "creating %s" % filename
- )
- else:
- self.warn("path file '%s' not created" % filename)
-
- # -- Reporting methods ---------------------------------------------
-
- def get_outputs(self):
- """Assembles the outputs of all the sub-commands."""
- outputs = []
- for cmd_name in self.get_sub_commands():
- cmd = self.get_finalized_command(cmd_name)
- # Add the contents of cmd.get_outputs(), ensuring
- # that outputs doesn't contain duplicate entries
- for filename in cmd.get_outputs():
- if filename not in outputs:
- outputs.append(filename)
-
- if self.path_file and self.install_path_file:
- outputs.append(os.path.join(self.install_libbase, self.path_file + ".pth"))
-
- return outputs
-
- def get_inputs(self):
- """Returns the inputs of all the sub-commands"""
- # XXX gee, this looks familiar ;-(
- inputs = []
- for cmd_name in self.get_sub_commands():
- cmd = self.get_finalized_command(cmd_name)
- inputs.extend(cmd.get_inputs())
-
- return inputs
-
- # -- Predicates for sub-command list -------------------------------
-
- def has_lib(self):
- """Returns true if the current distribution has any Python
- modules to install."""
- return (
- self.distribution.has_pure_modules() or self.distribution.has_ext_modules()
- )
-
- def has_headers(self):
- """Returns true if the current distribution has any headers to
- install."""
- return self.distribution.has_headers()
-
- def has_scripts(self):
- """Returns true if the current distribution has any scripts to.
- install."""
- return self.distribution.has_scripts()
-
- def has_data(self):
- """Returns true if the current distribution has any data to.
- install."""
- return self.distribution.has_data_files()
-
- # 'sub_commands': a list of commands this command might have to run to
- # get its work done. See cmd.py for more info.
- sub_commands = [
- ('install_lib', has_lib),
- ('install_headers', has_headers),
- ('install_scripts', has_scripts),
- ('install_data', has_data),
- ('install_egg_info', lambda self: True),
- ]
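
Aside for readers of the deleted `install` command above: each scheme table maps the SCHEME_KEYS to a '{...}'-style path template, and `finalize_options()` later expands those placeholders against variables such as `base`, `platbase`, `py_version_short` and `dist_name`. The snippet below is a minimal standalone sketch of that expansion, not the module's own code path (which goes through `subst_vars()` and a `DictStack` of config vars); the concrete variable values and the `example_dist` name are illustrative assumptions.

```python
import sys

# Hypothetical stand-in for the deleted install command's scheme expansion.
# The placeholder syntax matches the '{name}' templates shown in INSTALL_SCHEMES.
POSIX_PREFIX_PURELIB = (
    '{base}/lib/{implementation_lower}{py_version_short}/site-packages'
)


def expand_scheme_path(template, **overrides):
    """Expand an install-scheme template with a few common variables."""
    values = {
        'base': sys.prefix,
        'platbase': sys.exec_prefix,
        'implementation_lower': 'pypy' if hasattr(sys, 'pypy_version_info') else 'python',
        'py_version_short': '%d.%d' % sys.version_info[:2],
        'abiflags': getattr(sys, 'abiflags', ''),
        'platlibdir': getattr(sys, 'platlibdir', 'lib'),
        'dist_name': 'example_dist',  # placeholder distribution name, not from the source
    }
    values.update(overrides)
    return template.format_map(values)


print(expand_scheme_path(POSIX_PREFIX_PURELIB))
# e.g. /usr/lib/python3.11/site-packages on a typical CPython/Linux layout
```
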
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/fancy_getopt.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/fancy_getopt.py
deleted file mode 100644
index 830f047e28aa3b25295174d44d735448a1a43098..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/fancy_getopt.py
+++ /dev/null
@@ -1,470 +0,0 @@
-"""distutils.fancy_getopt
-
-Wrapper around the standard getopt module that provides the following
-additional features:
- * short and long options are tied together
- * options have help strings, so fancy_getopt could potentially
- create a complete usage summary
- * options set attributes of a passed-in object
-"""
-
-import sys
-import string
-import re
-import getopt
-from distutils.errors import DistutilsGetoptError, DistutilsArgError
-
-# Much like command_re in distutils.core, this is close to but not quite
-# the same as a Python NAME -- except, in the spirit of most GNU
-# utilities, we use '-' in place of '_'. (The spirit of LISP lives on!)
-# The similarities to NAME are again not a coincidence...
-longopt_pat = r'[a-zA-Z](?:[a-zA-Z0-9-]*)'
-longopt_re = re.compile(r'^%s$' % longopt_pat)
-
-# For recognizing "negative alias" options, eg. "quiet=!verbose"
-neg_alias_re = re.compile("^({})=!({})$".format(longopt_pat, longopt_pat))
-
-# This is used to translate long options to legitimate Python identifiers
-# (for use as attributes of some object).
-longopt_xlate = str.maketrans('-', '_')
-
-
-class FancyGetopt:
- """Wrapper around the standard 'getopt()' module that provides some
- handy extra functionality:
- * short and long options are tied together
- * options have help strings, and help text can be assembled
- from them
- * options set attributes of a passed-in object
- * boolean options can have "negative aliases" -- eg. if
- --quiet is the "negative alias" of --verbose, then "--quiet"
- on the command line sets 'verbose' to false
- """
-
- def __init__(self, option_table=None):
- # The option table is (currently) a list of tuples. The
- # tuples may have three or four values:
- # (long_option, short_option, help_string [, repeatable])
- # if an option takes an argument, its long_option should have '='
- # appended; short_option should just be a single character, no ':'
- # in any case. If a long_option doesn't have a corresponding
- # short_option, short_option should be None. All option tuples
- # must have long options.
- self.option_table = option_table
-
- # 'option_index' maps long option names to entries in the option
- # table (ie. those 3-tuples).
- self.option_index = {}
- if self.option_table:
- self._build_index()
-
- # 'alias' records (duh) alias options; {'foo': 'bar'} means
- # --foo is an alias for --bar
- self.alias = {}
-
- # 'negative_alias' keeps track of options that are the boolean
- # opposite of some other option
- self.negative_alias = {}
-
- # These keep track of the information in the option table. We
- # don't actually populate these structures until we're ready to
- # parse the command-line, since the 'option_table' passed in here
- # isn't necessarily the final word.
- self.short_opts = []
- self.long_opts = []
- self.short2long = {}
- self.attr_name = {}
- self.takes_arg = {}
-
- # And 'option_order' is filled up in 'getopt()'; it records the
- # original order of options (and their values) on the command-line,
- # but expands short options, converts aliases, etc.
- self.option_order = []
-
- def _build_index(self):
- self.option_index.clear()
- for option in self.option_table:
- self.option_index[option[0]] = option
-
- def set_option_table(self, option_table):
- self.option_table = option_table
- self._build_index()
-
- def add_option(self, long_option, short_option=None, help_string=None):
- if long_option in self.option_index:
- raise DistutilsGetoptError(
- "option conflict: already an option '%s'" % long_option
- )
- else:
- option = (long_option, short_option, help_string)
- self.option_table.append(option)
- self.option_index[long_option] = option
-
- def has_option(self, long_option):
- """Return true if the option table for this parser has an
- option with long name 'long_option'."""
- return long_option in self.option_index
-
- def get_attr_name(self, long_option):
- """Translate long option name 'long_option' to the form it
- has as an attribute of some object: ie., translate hyphens
- to underscores."""
- return long_option.translate(longopt_xlate)
-
- def _check_alias_dict(self, aliases, what):
- assert isinstance(aliases, dict)
- for (alias, opt) in aliases.items():
- if alias not in self.option_index:
- raise DistutilsGetoptError(
- ("invalid %s '%s': " "option '%s' not defined")
- % (what, alias, alias)
- )
- if opt not in self.option_index:
- raise DistutilsGetoptError(
- ("invalid %s '%s': " "aliased option '%s' not defined")
- % (what, alias, opt)
- )
-
- def set_aliases(self, alias):
- """Set the aliases for this option parser."""
- self._check_alias_dict(alias, "alias")
- self.alias = alias
-
- def set_negative_aliases(self, negative_alias):
- """Set the negative aliases for this option parser.
- 'negative_alias' should be a dictionary mapping option names to
- option names, both the key and value must already be defined
- in the option table."""
- self._check_alias_dict(negative_alias, "negative alias")
- self.negative_alias = negative_alias
-
- def _grok_option_table(self): # noqa: C901
- """Populate the various data structures that keep tabs on the
- option table. Called by 'getopt()' before it can do anything
- worthwhile.
- """
- self.long_opts = []
- self.short_opts = []
- self.short2long.clear()
- self.repeat = {}
-
- for option in self.option_table:
- if len(option) == 3:
- long, short, help = option
- repeat = 0
- elif len(option) == 4:
- long, short, help, repeat = option
- else:
- # the option table is part of the code, so simply
- # assert that it is correct
- raise ValueError("invalid option tuple: {!r}".format(option))
-
- # Type- and value-check the option names
- if not isinstance(long, str) or len(long) < 2:
- raise DistutilsGetoptError(
- ("invalid long option '%s': " "must be a string of length >= 2")
- % long
- )
-
- if not ((short is None) or (isinstance(short, str) and len(short) == 1)):
- raise DistutilsGetoptError(
- "invalid short option '%s': "
- "must a single character or None" % short
- )
-
- self.repeat[long] = repeat
- self.long_opts.append(long)
-
- if long[-1] == '=': # option takes an argument?
- if short:
- short = short + ':'
- long = long[0:-1]
- self.takes_arg[long] = 1
- else:
- # Is this option a "negative alias" for some other option (eg.
- # "quiet" == "!verbose")?
- alias_to = self.negative_alias.get(long)
- if alias_to is not None:
- if self.takes_arg[alias_to]:
- raise DistutilsGetoptError(
- "invalid negative alias '%s': "
- "aliased option '%s' takes a value" % (long, alias_to)
- )
-
- self.long_opts[-1] = long # XXX redundant?!
- self.takes_arg[long] = 0
-
- # If this is an alias option, make sure its "takes arg" flag is
- # the same as the option it's aliased to.
- alias_to = self.alias.get(long)
- if alias_to is not None:
- if self.takes_arg[long] != self.takes_arg[alias_to]:
- raise DistutilsGetoptError(
- "invalid alias '%s': inconsistent with "
- "aliased option '%s' (one of them takes a value, "
- "the other doesn't" % (long, alias_to)
- )
-
- # Now enforce some bondage on the long option name, so we can
- # later translate it to an attribute name on some object. Have
- # to do this a bit late to make sure we've removed any trailing
- # '='.
- if not longopt_re.match(long):
- raise DistutilsGetoptError(
- "invalid long option name '%s' "
- "(must be letters, numbers, hyphens only" % long
- )
-
- self.attr_name[long] = self.get_attr_name(long)
- if short:
- self.short_opts.append(short)
- self.short2long[short[0]] = long
-
- def getopt(self, args=None, object=None): # noqa: C901
- """Parse command-line options in args. Store as attributes on object.
-
- If 'args' is None or not supplied, uses 'sys.argv[1:]'. If
- 'object' is None or not supplied, creates a new OptionDummy
- object, stores option values there, and returns a tuple (args,
- object). If 'object' is supplied, it is modified in place and
- 'getopt()' just returns 'args'; in both cases, the returned
- 'args' is a modified copy of the passed-in 'args' list, which
- is left untouched.
- """
- if args is None:
- args = sys.argv[1:]
- if object is None:
- object = OptionDummy()
- created_object = True
- else:
- created_object = False
-
- self._grok_option_table()
-
- short_opts = ' '.join(self.short_opts)
- try:
- opts, args = getopt.getopt(args, short_opts, self.long_opts)
- except getopt.error as msg:
- raise DistutilsArgError(msg)
-
- for opt, val in opts:
- if len(opt) == 2 and opt[0] == '-': # it's a short option
- opt = self.short2long[opt[1]]
- else:
- assert len(opt) > 2 and opt[:2] == '--'
- opt = opt[2:]
-
- alias = self.alias.get(opt)
- if alias:
- opt = alias
-
- if not self.takes_arg[opt]: # boolean option?
- assert val == '', "boolean option can't have value"
- alias = self.negative_alias.get(opt)
- if alias:
- opt = alias
- val = 0
- else:
- val = 1
-
- attr = self.attr_name[opt]
- # The only repeating option at the moment is 'verbose'.
- # It has a negative option -q quiet, which should set verbose = 0.
- if val and self.repeat.get(attr) is not None:
- val = getattr(object, attr, 0) + 1
- setattr(object, attr, val)
- self.option_order.append((opt, val))
-
- # for opts
- if created_object:
- return args, object
- else:
- return args
-
- def get_option_order(self):
- """Returns the list of (option, value) tuples processed by the
- previous run of 'getopt()'. Raises RuntimeError if
- 'getopt()' hasn't been called yet.
- """
- if self.option_order is None:
- raise RuntimeError("'getopt()' hasn't been called yet")
- else:
- return self.option_order
-
- def generate_help(self, header=None): # noqa: C901
- """Generate help text (a list of strings, one per suggested line of
- output) from the option table for this FancyGetopt object.
- """
- # Blithely assume the option table is good: probably wouldn't call
- # 'generate_help()' unless you've already called 'getopt()'.
-
- # First pass: determine maximum length of long option names
- max_opt = 0
- for option in self.option_table:
- long = option[0]
- short = option[1]
- ell = len(long)
- if long[-1] == '=':
- ell = ell - 1
- if short is not None:
- ell = ell + 5 # " (-x)" where short == 'x'
- if ell > max_opt:
- max_opt = ell
-
- opt_width = max_opt + 2 + 2 + 2 # room for indent + dashes + gutter
-
- # Typical help block looks like this:
- # --foo controls foonabulation
- # Help block for longest option looks like this:
- # --flimflam set the flim-flam level
- # and with wrapped text:
- # --flimflam set the flim-flam level (must be between
- # 0 and 100, except on Tuesdays)
- # Options with short names will have the short name shown (but
- # it doesn't contribute to max_opt):
- # --foo (-f) controls foonabulation
- # If adding the short option would make the left column too wide,
- # we push the explanation off to the next line
- # --flimflam (-l)
- # set the flim-flam level
- # Important parameters:
- # - 2 spaces before option block start lines
- # - 2 dashes for each long option name
- # - min. 2 spaces between option and explanation (gutter)
- # - 5 characters (incl. space) for short option name
-
- # Now generate lines of help text. (If 80 columns were good enough
- # for Jesus, then 78 columns are good enough for me!)
- line_width = 78
- text_width = line_width - opt_width
- big_indent = ' ' * opt_width
- if header:
- lines = [header]
- else:
- lines = ['Option summary:']
-
- for option in self.option_table:
- long, short, help = option[:3]
- text = wrap_text(help, text_width)
- if long[-1] == '=':
- long = long[0:-1]
-
- # Case 1: no short option at all (makes life easy)
- if short is None:
- if text:
- lines.append(" --%-*s %s" % (max_opt, long, text[0]))
- else:
- lines.append(" --%-*s " % (max_opt, long))
-
- # Case 2: we have a short option, so we have to include it
- # just after the long option
- else:
- opt_names = "{} (-{})".format(long, short)
- if text:
- lines.append(" --%-*s %s" % (max_opt, opt_names, text[0]))
- else:
- lines.append(" --%-*s" % opt_names)
-
- for ell in text[1:]:
- lines.append(big_indent + ell)
- return lines
-
- def print_help(self, header=None, file=None):
- if file is None:
- file = sys.stdout
- for line in self.generate_help(header):
- file.write(line + "\n")
-
-
-def fancy_getopt(options, negative_opt, object, args):
- parser = FancyGetopt(options)
- parser.set_negative_aliases(negative_opt)
- return parser.getopt(args, object)
-
-
-WS_TRANS = {ord(_wschar): ' ' for _wschar in string.whitespace}
-
-
-def wrap_text(text, width):
- """wrap_text(text : string, width : int) -> [string]
-
- Split 'text' into multiple lines of no more than 'width' characters
- each, and return the list of strings that results.
- """
- if text is None:
- return []
- if len(text) <= width:
- return [text]
-
- text = text.expandtabs()
- text = text.translate(WS_TRANS)
- chunks = re.split(r'( +|-+)', text)
- chunks = [ch for ch in chunks if ch] # ' - ' results in empty strings
- lines = []
-
- while chunks:
- cur_line = [] # list of chunks (to-be-joined)
- cur_len = 0 # length of current line
-
- while chunks:
- ell = len(chunks[0])
- if cur_len + ell <= width: # can squeeze (at least) this chunk in
- cur_line.append(chunks[0])
- del chunks[0]
- cur_len = cur_len + ell
- else: # this line is full
- # drop last chunk if all space
- if cur_line and cur_line[-1][0] == ' ':
- del cur_line[-1]
- break
-
- if chunks: # any chunks left to process?
- # if the current line is still empty, then we had a single
- # chunk that's too big to fit on a line -- so we break
- # down and break it up at the line width
- if cur_len == 0:
- cur_line.append(chunks[0][0:width])
- chunks[0] = chunks[0][width:]
-
- # all-whitespace chunks at the end of a line can be discarded
- # (and we know from the re.split above that if a chunk has
- # *any* whitespace, it is *all* whitespace)
- if chunks[0][0] == ' ':
- del chunks[0]
-
- # and store this line in the list-of-all-lines -- as a single
- # string, of course!
- lines.append(''.join(cur_line))
-
- return lines
-
-
-def translate_longopt(opt):
- """Convert a long option name to a valid Python identifier by
- changing "-" to "_".
- """
- return opt.translate(longopt_xlate)
-
-
-class OptionDummy:
- """Dummy class just used as a place to hold command-line option
- values as instance attributes."""
-
- def __init__(self, options=[]):
- """Create a new OptionDummy instance. The attributes listed in
- 'options' will be initialized to None."""
- for opt in options:
- setattr(self, opt, None)
-
-
-if __name__ == "__main__":
- text = """\
-Tra-la-la, supercalifragilisticexpialidocious.
-How *do* you spell that odd word, anyways?
-(Someone ask Mary -- she'll know [or she'll
-say, "How should I know?"].)"""
-
- for w in (10, 20, 30, 40):
- print("width: %d" % w)
- print("\n".join(wrap_text(text, w)))
- print()
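
For orientation, here is a short usage sketch of the FancyGetopt wrapper removed above, based only on the behaviour its own docstrings and option-table comments describe. It assumes the module is still importable as `distutils.fancy_getopt` (or the setuptools-vendored copy); the option table, aliases, and file name are made up for illustration.

```python
# Illustrative only: exercises the FancyGetopt class deleted above.
from distutils.fancy_getopt import FancyGetopt

# Option table format: (long_option, short_option, help_string).
# A trailing '=' on the long name means the option takes an argument.
options = [
    ('verbose', 'v', "run verbosely"),
    ('quiet', 'q', "run quietly (turns verbosity off)"),
    ('output=', 'o', "write results to the given file"),
]

parser = FancyGetopt(options)
# --quiet acts as the boolean opposite of --verbose.
parser.set_negative_aliases({'quiet': 'verbose'})

# No target object is supplied, so getopt() returns (remaining_args, OptionDummy).
args, opts = parser.getopt(['-v', '--output=out.txt', 'extra-arg'])

print(args)          # ['extra-arg']
print(opts.verbose)  # 1
print(opts.output)   # out.txt
parser.print_help("Supported options:")
```
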
diff --git a/spaces/BigChia/bird_classifier/app.py b/spaces/BigChia/bird_classifier/app.py
deleted file mode 100644
index 6bcaf9428f9c0edd3f755d647d09c1463bd7520f..0000000000000000000000000000000000000000
--- a/spaces/BigChia/bird_classifier/app.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from fastbook import *
-from fastai.vision.widgets import *
-import gradio as gr
-
-learn = load_learner('model/uk_model.pkl')
-labels = learn.dls.vocab
-
-def predict(img):
- img = PILImage.create(img)
- pred, pred_idx, probs = learn.predict(img)
- return {labels[i]: float(probs[i]) for i in range(len(labels))}
-
-if __name__ == "__main__":
- title = "Bird Breed Classifier"
- description = """A bird breed classifier trained on a dataset of over 200 UK birds with fastai.
- This makes it one of the most comprehensive UK bird classifiers available in the world."""
- interpretation='default'
- enable_queue=True
-
- gr.Interface(fn=predict,
- inputs=gr.inputs.Image(shape=(512, 512)),
- outputs=gr.outputs.Label(num_top_classes=3),
- title=title,
- description=description,
- interpretation=interpretation,
- enable_queue=enable_queue).launch()
\ No newline at end of file
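
The deleted app above targets the old Gradio API (`gr.inputs.Image`, `gr.outputs.Label`, `enable_queue`), which later Gradio releases removed. Purely as a hedged modernization sketch, not the Space's original code, an equivalent interface on a current Gradio would look roughly like the following; the model path and label handling mirror the file above, and the `interpretation`/`enable_queue` arguments are omitted because recent Gradio versions no longer accept them.

```python
# Rough sketch against the current Gradio API; assumes the same uk_model.pkl is available.
from fastai.vision.all import load_learner, PILImage
import gradio as gr

learn = load_learner('model/uk_model.pkl')
labels = learn.dls.vocab


def predict(img):
    # Convert the incoming image and return a {label: probability} mapping.
    img = PILImage.create(img)
    pred, pred_idx, probs = learn.predict(img)
    return {labels[i]: float(probs[i]) for i in range(len(labels))}


demo = gr.Interface(
    fn=predict,
    inputs=gr.Image(type="pil"),
    outputs=gr.Label(num_top_classes=3),
    title="Bird Breed Classifier",
)

if __name__ == "__main__":
    demo.launch()
```
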
diff --git a/spaces/BigSalmon/BackTranslation2/app.py b/spaces/BigSalmon/BackTranslation2/app.py
deleted file mode 100644
index cee442ba16e5d2422568a90298992781931c5179..0000000000000000000000000000000000000000
--- a/spaces/BigSalmon/BackTranslation2/app.py
+++ /dev/null
@@ -1,117 +0,0 @@
-from deep_translator import GoogleTranslator
-import streamlit as st
-
-st.set_page_config(page_title='Language Translator (Adaptation of https://github.com/Ompramod9921/Language_translator)')
-
-hide_streamlit_style = """
-
- """
-st.markdown(hide_streamlit_style, unsafe_allow_html=True)
-
-st.markdown("Language Translator (Adaptation of https://github.com/Ompramod9921/Language_translator) ", unsafe_allow_html=True)
-st.write("****")
-
-text = st.text_area("Enter text:",height=None,max_chars=None,key=None,help="Enter your text here -")
-st.write("****")
-
-option1 = st.selectbox('Input language',('english','hindi','afrikaans', 'albanian', 'amharic', 'arabic', 'armenian', 'azerbaijani', 'basque', 'belarusian', 'bengali', 'bosnian', 'bulgarian', 'catalan', 'cebuano', 'chichewa', 'chinese', 'chinese (simplified)', 'chinese (traditional)', 'corsican', 'croatian', 'czech', 'danish', 'dutch', 'esperanto', 'estonian', 'filipino', 'finnish', 'french', 'frisian', 'galician', 'georgian', 'german', 'greek', 'gujarati', 'haitian creole', 'hausa', 'hawaiian', 'hebrew', 'hmong', 'hungarian', 'icelandic', 'igbo', 'indonesian', 'irish', 'italian', 'japanese', 'javanese', 'kannada', 'kazakh', 'khmer', 'korean', 'kurdish (kurmanji)', 'kyrgyz', 'lao', 'latin', 'latvian', 'lithuanian', 'luxembourgish', 'macedonian', 'malagasy', 'malay', 'malayalam', 'maltese', 'maori', 'marathi', 'mongolian', 'myanmar (burmese)', 'nepali', 'norwegian', 'pashto', 'persian', 'polish', 'portuguese', 'punjabi', 'romanian', 'russian', 'samoan', 'scots gaelic', 'serbian', 'sesotho', 'shona', 'sindhi', 'sinhala', 'slovak', 'slovenian', 'somali', 'spanish', 'sundanese', 'swahili', 'swedish', 'tajik', 'tamil', 'telugu', 'thai', 'turkish', 'ukrainian', 'urdu', 'uzbek', 'vietnamese', 'welsh', 'xhosa', 'yiddish', 'yoruba', 'zulu', 'Filipino'))
-option2 = st.selectbox('Output language',('english','hindi','afrikaans', 'albanian', 'amharic', 'arabic', 'armenian', 'azerbaijani', 'basque', 'belarusian', 'bengali', 'bosnian', 'bulgarian', 'catalan', 'cebuano', 'chichewa', 'chinese', 'chinese (simplified)', 'chinese (traditional)', 'corsican', 'croatian', 'czech', 'danish', 'dutch', 'esperanto', 'estonian', 'filipino', 'finnish', 'french', 'frisian', 'galician', 'georgian', 'german', 'greek', 'gujarati', 'haitian creole', 'hausa', 'hawaiian', 'hebrew', 'hmong', 'hungarian', 'icelandic', 'igbo', 'indonesian', 'irish', 'italian', 'japanese', 'javanese', 'kannada', 'kazakh', 'khmer', 'korean', 'kurdish (kurmanji)', 'kyrgyz', 'lao', 'latin', 'latvian', 'lithuanian', 'luxembourgish', 'macedonian', 'malagasy', 'malay', 'malayalam', 'maltese', 'maori', 'marathi', 'mongolian', 'myanmar (burmese)', 'nepali', 'norwegian', 'pashto', 'persian', 'polish', 'portuguese', 'punjabi', 'romanian', 'russian', 'samoan', 'scots gaelic', 'serbian', 'sesotho', 'shona', 'sindhi', 'sinhala', 'slovak', 'slovenian', 'somali', 'spanish', 'sundanese', 'swahili', 'swedish', 'tajik', 'tamil', 'telugu', 'thai', 'turkish', 'ukrainian', 'urdu', 'uzbek', 'vietnamese', 'welsh', 'xhosa', 'yiddish', 'yoruba', 'zulu', 'Filipino'))
-st.write("****")
-
-if st.button('Translate Sentence'):
- st.write(" ")
- st.write(" ")
- if text == "":
- st.warning('Please **enter text** for translation')
-
- else:
- if option1 == option2 :
- st.error("source and target language can't be the same")
- else :
- translated = GoogleTranslator(source=option1,target=option2).translate(text=text)
- st.write("Translated text -")
- st.info(str(translated))
- translated_text = str(translated)
- back_translated = GoogleTranslator(source=option2,target=option1).translate(text=translated_text)
- st.write("Back Translated text -")
- st.info(str(back_translated))
-
-if st.button('Back Translate: Multiple Languages'):
- st.write(" ")
- st.write(" ")
- if text == "":
- st.warning('Please **enter text** for translation')
- else:
- if option1 == option2 :
- st.error("source and target language can't be the same")
- else:
- translated = GoogleTranslator(source=option1,target=option2).translate(text=text)
- st.write("Translated text -")
- st.info(str(translated))
- translated_text = str(translated)
- back_translated = GoogleTranslator(source=option2,target=option1).translate(text=translated_text)
- st.write("Back Translated text -")
- st.info(str(back_translated))
-
- translated = GoogleTranslator(source=option1,target="albanian").translate(text=text)
- st.write("Translated text -")
- st.info(str(translated))
- translated_text = str(translated)
- back_translated = GoogleTranslator(source="albanian",target=option1).translate(text=translated_text)
- st.write("Back Translated text -")
- st.info(str(back_translated))
-
- translated = GoogleTranslator(source=option1,target="greek").translate(text=text)
- st.write("Translated text -")
- st.info(str(translated))
- translated_text = str(translated)
- back_translated = GoogleTranslator(source="greek",target=option1).translate(text=translated_text)
- st.write("Back Translated text -")
- st.info(str(back_translated))
-
- translated = GoogleTranslator(source=option1,target="italian").translate(text=text)
- st.write("Translated text -")
- st.info(str(translated))
- translated_text = str(translated)
- back_translated = GoogleTranslator(source="italian",target=option1).translate(text=translated_text)
- st.write("Back Translated text -")
- st.info(str(back_translated))
-
- translated = GoogleTranslator(source=option1,target="polish").translate(text=text)
- st.write("Translated text -")
- st.info(str(translated))
- translated_text = str(translated)
- back_translated = GoogleTranslator(source="polish",target=option1).translate(text=translated_text)
- st.write("Back Translated text -")
- st.info(str(back_translated))
-
- translated = GoogleTranslator(source=option1,target="spanish").translate(text=text)
- st.write("Translated text -")
- st.info(str(translated))
- translated_text = str(translated)
- back_translated = GoogleTranslator(source="spanish",target=option1).translate(text=translated_text)
- st.write("Back Translated text -")
- st.info(str(back_translated))
-
- translated = GoogleTranslator(source=option1,target="galician").translate(text=text)
- st.write("Translated text -")
- st.info(str(translated))
- translated_text = str(translated)
- back_translated = GoogleTranslator(source="galician",target=option1).translate(text=translated_text)
- st.write("Back Translated text -")
- st.info(str(back_translated))
-
- translated = GoogleTranslator(source=option1,target="dutch").translate(text=text)
- st.write("Translated text -")
- st.info(str(translated))
- translated_text = str(translated)
- back_translated = GoogleTranslator(source="dutch",target=option1).translate(text=translated_text)
- st.write("Back Translated text -")
- st.info(str(back_translated))
\ No newline at end of file
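
The deleted script above repeats an identical translate/back-translate block once per pivot language. Purely as a refactoring sketch (not the Space's code), the same behaviour can be expressed as a loop over a pivot list using the deep_translator calls the app already makes; the example sentence is made up, and results are printed to stdout rather than rendered with st.info.

```python
# Sketch only: collapse the per-language blocks into one loop.
from deep_translator import GoogleTranslator


def back_translate(text, source, pivots):
    """Translate text into each pivot language and back to the source language."""
    results = {}
    for pivot in pivots:
        translated = GoogleTranslator(source=source, target=pivot).translate(text=text)
        restored = GoogleTranslator(source=pivot, target=source).translate(text=translated)
        results[pivot] = (translated, restored)
    return results


pivots = ["albanian", "greek", "italian", "polish", "spanish", "galician", "dutch"]
for lang, (fwd, back) in back_translate("How are you today?", "english", pivots).items():
    print(lang, "->", fwd, "->", back)
```
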
diff --git a/spaces/Brightmzb/test/README.md b/spaces/Brightmzb/test/README.md
deleted file mode 100644
index b68988167779f686fce67efb8f9c2dc3cb233e0a..0000000000000000000000000000000000000000
--- a/spaces/Brightmzb/test/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Test
-emoji: 🏢
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.40.1
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/LIVE/thrust/dependencies/cub/test/Makefile b/spaces/CVPR/LIVE/thrust/dependencies/cub/test/Makefile
deleted file mode 100644
index 958760a87c068922e6f1840f1cef0f254ea1a698..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/dependencies/cub/test/Makefile
+++ /dev/null
@@ -1,468 +0,0 @@
-#/******************************************************************************
-# * Copyright (c) 2011, Duane Merrill. All rights reserved.
-# * Copyright (c) 2011-2018, NVIDIA CORPORATION. All rights reserved.
-# *
-# * Redistribution and use in source and binary forms, with or without
-# * modification, are permitted provided that the following conditions are met:
-# * * Redistributions of source code must retain the above copyright
-# * notice, this list of conditions and the following disclaimer.
-# * * Redistributions in binary form must reproduce the above copyright
-# * notice, this list of conditions and the following disclaimer in the
-# * documentation and/or other materials provided with the distribution.
-# * * Neither the name of the NVIDIA CORPORATION nor the
-# * names of its contributors may be used to endorse or promote products
-# * derived from this software without specific prior written permission.
-# *
-# * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-# * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-# * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-# * DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
-# * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-# * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-# * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-# * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-# *
-#******************************************************************************/
-
-
-#-------------------------------------------------------------------------------
-#
-# Makefile usage
-#
-# make <target> [sm=<XXX,...>] [cdp=<0|1>] [force32=<0|1>] [abi=<0|1>] [open64=<0|1>] [verbose=<0|1>] [keep=<0|1>] [quicktest=<0|1>] [quickertest=<0|1>]
-#
-#-------------------------------------------------------------------------------
-
-include ../common.mk
-
-#-------------------------------------------------------------------------------
-# Commandline Options
-#-------------------------------------------------------------------------------
-
-# Testing mode option (quick/thorough)
-ifeq ($(quickertest), 1)
- NVCCFLAGS += -DQUICKER_TEST
- TEST_SUFFIX = quicker
-else ifeq ($(quicktest), 1)
- NVCCFLAGS += -DQUICK_TEST
- TEST_SUFFIX = quick
-else
- TEST_SUFFIX = thorough
- NPPI =
-endif
-
-
-# CUDA memcheck (enabled by default)
-ifeq ($(memcheck), 0)
- MEMCHECK =
-else
- MEMCHECK = cuda-memcheck
-endif
-
-
-#-------------------------------------------------------------------------------
-# Compiler and compilation platform
-#-------------------------------------------------------------------------------
-
-# Includes
-INC += -I$(CUB_DIR) -I$(CUB_DIR)test
-
-# Suffix to append to each binary
-SUFFIX = $(BIN_SUFFIX)_$(TEST_SUFFIX)
-
-# Define test arch
-DEFINES += -DTEST_ARCH=$(TEST_ARCH)
-
-
-#-------------------------------------------------------------------------------
-# Dependency Lists
-#-------------------------------------------------------------------------------
-
-rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d))
-
-DEPS = $(CUB_DEPS) \
- $(CUB_DIR)test/Makefile \
- $(CUB_DIR)test/test_util.h \
- $(CUB_DIR)test/mersenne.h \
-
-BLOCK_REDUCE = test_block_reduce_raking \
- test_block_reduce_warp_reductions
-
-
-BLOCK_SCAN = test_block_scan_raking \
- test_block_scan_raking_memoize \
- test_block_scan_warp_scans
-
-
-BLOCK_RADIX_SORT = test_block_radix_sort_keys \
- test_block_radix_sort_pairs
-
-DEVICE_RADIX_SORT = test_device_radix_sort \
- test_device_radix_sort_segmented
-
-ALL = link \
- test_iterator \
- test_allocator \
- test_warp_scan \
- test_warp_reduce \
- $(BLOCK_REDUCE) \
- $(BLOCK_SCAN) \
- $(BLOCK_RADIX_SORT) \
- test_block_load_store \
- test_block_histogram \
- test_device_reduce \
- test_device_histogram \
- test_device_scan \
- $(DEVICE_RADIX_SORT) \
- test_device_reduce_by_key\
- test_device_run_length_encode\
- test_device_select_unique \
- test_device_select_if
-
-# test_grid_barrier \ fails on sm110
-# test_device_seg_reduce
-
-
-
-#-------------------------------------------------------------------------------
-# make default
-#-------------------------------------------------------------------------------
-
-default:
-
-
-#-------------------------------------------------------------------------------
-# make clean
-#-------------------------------------------------------------------------------
-
-clean :
- rm -f bin/*$(CPU_ARCH_SUFFIX)*
- rm -f *.i* *.cubin *.cu.c *.cudafe* *.fatbin.c *.ptx *.hash *.cu.cpp *.o
-
-
-#-------------------------------------------------------------------------------
-# make all
-#-------------------------------------------------------------------------------
-
-all : $(ALL)
-
-
-#-------------------------------------------------------------------------------
-# make run
-#-------------------------------------------------------------------------------
-
-run :
- for i in $(ALL); do $(MEMCHECK) ./bin/$${i}_$(SUFFIX) --device=$(device) || exit 1; done
-
-run_block_reduce :
- for i in $(BLOCK_REDUCE); do $(MEMCHECK) ./bin/$${i}_$(SUFFIX) --device=$(device) || exit 1; done
-
-run_block_scan :
- for i in $(BLOCK_SCAN); do $(MEMCHECK) ./bin/$${i}_$(SUFFIX) --device=$(device) || exit 1; done
-
-run_block_radix_sort :
- for i in $(BLOCK_RADIX_SORT); do $(MEMCHECK) ./bin/$${i}_$(SUFFIX) --device=$(device) || exit 1; done
-
-run_device_radix_sort :
- for i in $(DEVICE_RADIX_SORT); do $(MEMCHECK) ./bin/$${i}_$(SUFFIX) --device=$(device) || exit 1; done
-
-
-#-------------------------------------------------------------------------------
-# make link
-#-------------------------------------------------------------------------------
-
-link : bin/link_$(SUFFIX)
-
-bin/link_$(SUFFIX) : link_a.cu link_b.cu link_main.cpp $(DEPS)
- mkdir -p bin
- $(NVCC) $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(DEFINES) $(SM_TARGETS) link_a.cu -c -o bin/link_a.obj
- $(NVCC) $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(DEFINES) $(SM_TARGETS) link_b.cu -c -o bin/link_b.obj
- $(NVCC) $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(DEFINES) $(SM_TARGETS) link_main.cpp bin/link_a.obj bin/link_b.obj -o bin/link_$(SUFFIX)
-
-
-#-------------------------------------------------------------------------------
-# make test_iterator
-#-------------------------------------------------------------------------------
-
-test_iterator: bin/test_iterator_$(SUFFIX)
-
-bin/test_iterator_$(SUFFIX) : test_iterator.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_iterator_$(SUFFIX) test_iterator.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_allocator
-#-------------------------------------------------------------------------------
-
-test_allocator: bin/test_allocator_$(SUFFIX)
-
-bin/test_allocator_$(SUFFIX) : test_allocator.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_allocator_$(SUFFIX) test_allocator.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_grid_barrier
-#-------------------------------------------------------------------------------
-
-test_grid_barrier: bin/test_grid_barrier_$(SUFFIX)
-
-bin/test_grid_barrier_$(SUFFIX) : test_grid_barrier.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_grid_barrier_$(SUFFIX) test_grid_barrier.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_warp_scan
-#-------------------------------------------------------------------------------
-
-test_warp_scan: bin/test_warp_scan_$(SUFFIX)
-
-bin/test_warp_scan_$(SUFFIX) : test_warp_scan.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_warp_scan_$(SUFFIX) test_warp_scan.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_warp_reduce
-#-------------------------------------------------------------------------------
-
-test_warp_reduce: bin/test_warp_reduce_$(SUFFIX)
-
-bin/test_warp_reduce_$(SUFFIX) : test_warp_reduce.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_warp_reduce_$(SUFFIX) test_warp_reduce.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_block_reduce_raking
-#-------------------------------------------------------------------------------
-
-test_block_reduce_raking: bin/test_block_reduce_raking_$(SUFFIX)
-
-bin/test_block_reduce_raking_$(SUFFIX) : test_block_reduce.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) -DTEST_RAKING $(SM_TARGETS) -o bin/test_block_reduce_raking_$(SUFFIX) test_block_reduce.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_block_reduce_warp_reductions
-#-------------------------------------------------------------------------------
-
-test_block_reduce_warp_reductions: bin/test_block_reduce_warp_reductions_$(SUFFIX)
-
-bin/test_block_reduce_warp_reductions_$(SUFFIX) : test_block_reduce.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) -DTEST_WARP_REDUCTIONS $(SM_TARGETS) -o bin/test_block_reduce_warp_reductions_$(SUFFIX) test_block_reduce.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_block_reduce
-#-------------------------------------------------------------------------------
-
-test_block_reduce: $(BLOCK_REDUCE)
-
-
-#-------------------------------------------------------------------------------
-# make test_block_scan_raking
-#-------------------------------------------------------------------------------
-
-test_block_scan_raking: bin/test_block_scan_raking_$(SUFFIX)
-
-bin/test_block_scan_raking_$(SUFFIX) : test_block_scan.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) -DTEST_RAKING $(SM_TARGETS) -o bin/test_block_scan_raking_$(SUFFIX) test_block_scan.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_block_scan_raking_memoize
-#-------------------------------------------------------------------------------
-
-test_block_scan_raking_memoize: bin/test_block_scan_raking_memoize_$(SUFFIX)
-
-bin/test_block_scan_raking_memoize_$(SUFFIX) : test_block_scan.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) -DTEST_RAKING_MEMOIZE $(SM_TARGETS) -o bin/test_block_scan_raking_memoize_$(SUFFIX) test_block_scan.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_block_scan_warp_scans
-#-------------------------------------------------------------------------------
-
-test_block_scan_warp_scans: bin/test_block_scan_warp_scans_$(SUFFIX)
-
-bin/test_block_scan_warp_scans_$(SUFFIX) : test_block_scan.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) -DTEST_WARP_SCANS $(SM_TARGETS) -o bin/test_block_scan_warp_scans_$(SUFFIX) test_block_scan.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_block_scan
-#-------------------------------------------------------------------------------
-
-test_block_scan: $(BLOCK_SCAN)
-
-
-#-------------------------------------------------------------------------------
-# make test_block_load_store
-#-------------------------------------------------------------------------------
-
-test_block_load_store: bin/test_block_load_store_$(SUFFIX)
-
-bin/test_block_load_store_$(SUFFIX) : test_block_load_store.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_block_load_store_$(SUFFIX) test_block_load_store.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_block_radix_sort_keys
-#-------------------------------------------------------------------------------
-
-test_block_radix_sort_keys: bin/test_block_radix_sort_keys_$(SUFFIX)
-
-bin/test_block_radix_sort_keys_$(SUFFIX) : test_block_radix_sort.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) -DTEST_KEYS_ONLY $(SM_TARGETS) -o bin/test_block_radix_sort_keys_$(SUFFIX) test_block_radix_sort.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-#-------------------------------------------------------------------------------
-# make test_block_radix_sort_pairs
-#-------------------------------------------------------------------------------
-
-test_block_radix_sort_pairs: bin/test_block_radix_sort_pairs_$(SUFFIX)
-
-bin/test_block_radix_sort_pairs_$(SUFFIX) : test_block_radix_sort.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_block_radix_sort_pairs_$(SUFFIX) test_block_radix_sort.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_block_radix_sort
-#-------------------------------------------------------------------------------
-
-test_block_radix_sort : $(BLOCK_RADIX_SORT)
-
-
-#-------------------------------------------------------------------------------
-# make test_block_histogram
-#-------------------------------------------------------------------------------
-
-test_block_histogram: bin/test_block_histogram_$(SUFFIX)
-
-bin/test_block_histogram_$(SUFFIX) : test_block_histogram.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_block_histogram_$(SUFFIX) test_block_histogram.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_device_reduce
-#-------------------------------------------------------------------------------
-
-test_device_reduce: bin/test_device_reduce_$(SUFFIX)
-
-bin/test_device_reduce_$(SUFFIX) : test_device_reduce.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_device_reduce_$(SUFFIX) test_device_reduce.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_device_histogram
-#-------------------------------------------------------------------------------
-
-test_device_histogram: bin/test_device_histogram_$(SUFFIX)
-
-bin/test_device_histogram_$(SUFFIX) : test_device_histogram.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_device_histogram_$(SUFFIX) test_device_histogram.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) $(NPPI) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_device_scan
-#-------------------------------------------------------------------------------
-
-test_device_scan: bin/test_device_scan_$(SUFFIX)
-
-bin/test_device_scan_$(SUFFIX) : test_device_scan.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_device_scan_$(SUFFIX) test_device_scan.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_device_radix_sort
-#-------------------------------------------------------------------------------
-
-test_device_radix_sort: bin/test_device_radix_sort_$(SUFFIX)
-
-bin/test_device_radix_sort_$(SUFFIX) : test_device_radix_sort.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_device_radix_sort_$(SUFFIX) test_device_radix_sort.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_device_radix_sort_segmented
-#-------------------------------------------------------------------------------
-
-test_device_radix_sort_segmented: bin/test_device_radix_sort_segmented_$(SUFFIX)
-
-bin/test_device_radix_sort_segmented_$(SUFFIX) : test_device_radix_sort.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) -DSEGMENTED_SORT $(SM_TARGETS) -o bin/test_device_radix_sort_segmented_$(SUFFIX) test_device_radix_sort.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_device_select_unique
-#-------------------------------------------------------------------------------
-
-test_device_select_unique: bin/test_device_select_unique_$(SUFFIX)
-
-bin/test_device_select_unique_$(SUFFIX) : test_device_select_unique.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_device_select_unique_$(SUFFIX) test_device_select_unique.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-#-------------------------------------------------------------------------------
-# make test_device_select_if
-#-------------------------------------------------------------------------------
-
-test_device_select_if: bin/test_device_select_if_$(SUFFIX)
-
-bin/test_device_select_if_$(SUFFIX) : test_device_select_if.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_device_select_if_$(SUFFIX) test_device_select_if.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-#-------------------------------------------------------------------------------
-# make test_device_reduce_by_key
-#-------------------------------------------------------------------------------
-
-test_device_reduce_by_key: bin/test_device_reduce_by_key_$(SUFFIX)
-
-bin/test_device_reduce_by_key_$(SUFFIX) : test_device_reduce_by_key.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_device_reduce_by_key_$(SUFFIX) test_device_reduce_by_key.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-#-------------------------------------------------------------------------------
-# make test_device_run_length_encode
-#-------------------------------------------------------------------------------
-
-test_device_run_length_encode: bin/test_device_run_length_encode_$(SUFFIX)
-
-bin/test_device_run_length_encode_$(SUFFIX) : test_device_run_length_encode.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_device_run_length_encode_$(SUFFIX) test_device_run_length_encode.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-
-
-#-------------------------------------------------------------------------------
-# make test_device_seg_reduce
-#-------------------------------------------------------------------------------
-#
-#test_device_seg_reduce: bin/test_device_seg_reduce_$(SUFFIX)
-#
-#bin/test_device_seg_reduce_$(SUFFIX) : test_device_seg_reduce.cu $(DEPS)
-# mkdir -p bin
-# $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/test_device_seg_reduce_$(SUFFIX) test_device_seg_reduce.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/cpp14_required.h b/spaces/CVPR/LIVE/thrust/thrust/detail/cpp14_required.h
deleted file mode 100644
index 083c8a1ad478f18e8bc5151385362e87ec933962..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/cpp14_required.h
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- * Copyright 2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-#ifndef THRUST_CPP14_REQUIRED_NO_ERROR
-# if THRUST_CPP_DIALECT < 2014
-# error C++14 is required for this Thrust feature; please upgrade your compiler or pass the appropriate -std=c++14 flag to it.
-# endif
-#endif
-
diff --git a/spaces/CVPR/drawings-to-human/static/_app/immutable/assets/pages/__layout.svelte-cc9dd261.css b/spaces/CVPR/drawings-to-human/static/_app/immutable/assets/pages/__layout.svelte-cc9dd261.css
deleted file mode 100644
index 3a869f8287fecd9ba73ae65defcb8db2559d5e4c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/drawings-to-human/static/_app/immutable/assets/pages/__layout.svelte-cc9dd261.css
+++ /dev/null
@@ -1 +0,0 @@
-@import"https://fonts.googleapis.com/css2?family=Open+Sans:wght@100;200;300;400;500;600;700;800&display=swap";*,:before,:after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}:before,:after{--tw-content: ""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,"Apple Color Emoji","Segoe UI Emoji",Segoe UI Symbol,"Noto Color Emoji"}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,samp,pre{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dl,dd,h1,h2,h3,h4,h5,h6,hr,figure,p,pre{margin:0}fieldset{margin:0;padding:0}legend{padding:0}ol,ul,menu{list-style:none;margin:0;padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}button,[role=button]{cursor:pointer}:disabled{cursor:default}img,svg,video,canvas,audio,iframe,embed,object{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}html{font-family:Open Sans,sans-serif}*,:before,:after{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::-webkit-backdrop{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: 
;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::backdrop{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }.prose{color:var(--tw-prose-body);max-width:65ch}.prose :where([class~="lead"]):not(:where([class~="not-prose"] *)){color:var(--tw-prose-lead);font-size:1.25em;line-height:1.6;margin-top:1.2em;margin-bottom:1.2em}.prose :where(a):not(:where([class~="not-prose"] *)){color:var(--tw-prose-links);text-decoration:underline;font-weight:500}.prose :where(strong):not(:where([class~="not-prose"] *)){color:var(--tw-prose-bold);font-weight:600}.prose :where(ol):not(:where([class~="not-prose"] *)){list-style-type:decimal;padding-left:1.625em}.prose :where(ol[type="A"]):not(:where([class~="not-prose"] *)){list-style-type:upper-alpha}.prose :where(ol[type="a"]):not(:where([class~="not-prose"] *)){list-style-type:lower-alpha}.prose :where(ol[type="A" s]):not(:where([class~="not-prose"] *)){list-style-type:upper-alpha}.prose :where(ol[type="a" s]):not(:where([class~="not-prose"] *)){list-style-type:lower-alpha}.prose :where(ol[type="I"]):not(:where([class~="not-prose"] *)){list-style-type:upper-roman}.prose :where(ol[type="i"]):not(:where([class~="not-prose"] *)){list-style-type:lower-roman}.prose :where(ol[type="I" s]):not(:where([class~="not-prose"] *)){list-style-type:upper-roman}.prose :where(ol[type="i" s]):not(:where([class~="not-prose"] *)){list-style-type:lower-roman}.prose :where(ol[type="1"]):not(:where([class~="not-prose"] *)){list-style-type:decimal}.prose :where(ul):not(:where([class~="not-prose"] *)){list-style-type:disc;padding-left:1.625em}.prose :where(ol > li):not(:where([class~="not-prose"] *))::marker{font-weight:400;color:var(--tw-prose-counters)}.prose :where(ul > li):not(:where([class~="not-prose"] *))::marker{color:var(--tw-prose-bullets)}.prose :where(hr):not(:where([class~="not-prose"] *)){border-color:var(--tw-prose-hr);border-top-width:1px;margin-top:3em;margin-bottom:3em}.prose 
:where(blockquote):not(:where([class~="not-prose"] *)){font-weight:500;font-style:italic;color:var(--tw-prose-quotes);border-left-width:.25rem;border-left-color:var(--tw-prose-quote-borders);quotes:"\201c""\201d""\2018""\2019";margin-top:1.6em;margin-bottom:1.6em;padding-left:1em}.prose :where(h1):not(:where([class~="not-prose"] *)){color:var(--tw-prose-headings);font-weight:800;font-size:2.25em;margin-top:0;margin-bottom:.8888889em;line-height:1.1111111}.prose :where(h1 strong):not(:where([class~="not-prose"] *)){font-weight:900}.prose :where(h2):not(:where([class~="not-prose"] *)){color:var(--tw-prose-headings);font-weight:700;font-size:1.5em;margin-top:2em;margin-bottom:1em;line-height:1.3333333}.prose :where(h2 strong):not(:where([class~="not-prose"] *)){font-weight:800}.prose :where(h3):not(:where([class~="not-prose"] *)){color:var(--tw-prose-headings);font-weight:600;font-size:1.25em;margin-top:1.6em;margin-bottom:.6em;line-height:1.6}.prose :where(h3 strong):not(:where([class~="not-prose"] *)){font-weight:700}.prose :where(h4):not(:where([class~="not-prose"] *)){color:var(--tw-prose-headings);font-weight:600;margin-top:1.5em;margin-bottom:.5em;line-height:1.5}.prose :where(h4 strong):not(:where([class~="not-prose"] *)){font-weight:700}.prose :where(figure > *):not(:where([class~="not-prose"] *)){margin-top:0;margin-bottom:0}.prose :where(figcaption):not(:where([class~="not-prose"] *)){color:var(--tw-prose-captions);font-size:.875em;line-height:1.4285714;margin-top:.8571429em}.prose :where(a code):not(:where([class~="not-prose"] *)){color:var(--tw-prose-links)}.prose :where(pre code):not(:where([class~="not-prose"] *)):before{content:none}.prose :where(pre code):not(:where([class~="not-prose"] *)):after{content:none}.prose :where(table):not(:where([class~="not-prose"] *)){width:100%;table-layout:auto;text-align:left;margin-top:2em;margin-bottom:2em;font-size:.875em;line-height:1.7142857}.prose :where(thead):not(:where([class~="not-prose"] *)){border-bottom-width:1px;border-bottom-color:var(--tw-prose-th-borders)}.prose :where(thead th):not(:where([class~="not-prose"] *)){color:var(--tw-prose-headings);font-weight:600;vertical-align:bottom;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.prose :where(tbody tr):not(:where([class~="not-prose"] *)){border-bottom-width:1px;border-bottom-color:var(--tw-prose-td-borders)}.prose :where(tbody tr:last-child):not(:where([class~="not-prose"] *)){border-bottom-width:0}.prose :where(tbody td):not(:where([class~="not-prose"] *)){vertical-align:baseline;padding:.5714286em}.prose{--tw-prose-body: #374151;--tw-prose-headings: #111827;--tw-prose-lead: #4b5563;--tw-prose-links: #111827;--tw-prose-bold: #111827;--tw-prose-counters: #6b7280;--tw-prose-bullets: #d1d5db;--tw-prose-hr: #e5e7eb;--tw-prose-quotes: #111827;--tw-prose-quote-borders: #e5e7eb;--tw-prose-captions: #6b7280;--tw-prose-code: #111827;--tw-prose-pre-code: #e5e7eb;--tw-prose-pre-bg: #1f2937;--tw-prose-th-borders: #d1d5db;--tw-prose-td-borders: #e5e7eb;--tw-prose-invert-body: #d1d5db;--tw-prose-invert-headings: #fff;--tw-prose-invert-lead: #9ca3af;--tw-prose-invert-links: #fff;--tw-prose-invert-bold: #fff;--tw-prose-invert-counters: #9ca3af;--tw-prose-invert-bullets: #4b5563;--tw-prose-invert-hr: #374151;--tw-prose-invert-quotes: #f3f4f6;--tw-prose-invert-quote-borders: #374151;--tw-prose-invert-captions: #9ca3af;--tw-prose-invert-code: #fff;--tw-prose-invert-pre-code: #d1d5db;--tw-prose-invert-pre-bg: rgb(0 0 0 / 50%);--tw-prose-invert-th-borders: 
#4b5563;--tw-prose-invert-td-borders: #374151;font-size:1rem;line-height:1.75}.prose :where(p):not(:where([class~="not-prose"] *)){margin-top:1.25em;margin-bottom:1.25em}.prose :where(img):not(:where([class~="not-prose"] *)){margin-top:2em;margin-bottom:2em}.prose :where(video):not(:where([class~="not-prose"] *)){margin-top:2em;margin-bottom:2em}.prose :where(figure):not(:where([class~="not-prose"] *)){margin-top:2em;margin-bottom:2em}.prose :where(h2 code):not(:where([class~="not-prose"] *)){font-size:.875em}.prose :where(h3 code):not(:where([class~="not-prose"] *)){font-size:.9em}.prose :where(li):not(:where([class~="not-prose"] *)){margin-top:.5em;margin-bottom:.5em}.prose :where(ol > li):not(:where([class~="not-prose"] *)){padding-left:.375em}.prose :where(ul > li):not(:where([class~="not-prose"] *)){padding-left:.375em}.prose>:where(ul > li p):not(:where([class~="not-prose"] *)){margin-top:.75em;margin-bottom:.75em}.prose>:where(ul > li > *:first-child):not(:where([class~="not-prose"] *)){margin-top:1.25em}.prose>:where(ul > li > *:last-child):not(:where([class~="not-prose"] *)){margin-bottom:1.25em}.prose>:where(ol > li > *:first-child):not(:where([class~="not-prose"] *)){margin-top:1.25em}.prose>:where(ol > li > *:last-child):not(:where([class~="not-prose"] *)){margin-bottom:1.25em}.prose :where(ul ul,ul ol,ol ul,ol ol):not(:where([class~="not-prose"] *)){margin-top:.75em;margin-bottom:.75em}.prose :where(hr + *):not(:where([class~="not-prose"] *)){margin-top:0}.prose :where(h2 + *):not(:where([class~="not-prose"] *)){margin-top:0}.prose :where(h3 + *):not(:where([class~="not-prose"] *)){margin-top:0}.prose :where(h4 + *):not(:where([class~="not-prose"] *)){margin-top:0}.prose :where(thead th:first-child):not(:where([class~="not-prose"] *)){padding-left:0}.prose :where(thead th:last-child):not(:where([class~="not-prose"] *)){padding-right:0}.prose :where(tbody td:first-child):not(:where([class~="not-prose"] *)){padding-left:0}.prose :where(tbody td:last-child):not(:where([class~="not-prose"] *)){padding-right:0}.prose>:where(:first-child):not(:where([class~="not-prose"] *)){margin-top:0}.prose>:where(:last-child):not(:where([class~="not-prose"] *)){margin-bottom:0}.pointer-events-none{pointer-events:none}.absolute{position:absolute}.relative{position:relative}.bottom-0{bottom:0px}.left-0{left:0px}.top-0{top:0px}.right-0{right:0px}.z-0{z-index:0}.z-10{z-index:10}.z-20{z-index:20}.my-3{margin-top:.75rem;margin-bottom:.75rem}.my-6{margin-top:1.5rem;margin-bottom:1.5rem}.mx-auto{margin-left:auto;margin-right:auto}.-mx-3{margin-left:-.75rem;margin-right:-.75rem}.mt-6{margin-top:1.5rem}.mb-2{margin-bottom:.5rem}.box-border{box-sizing:border-box}.block{display:block}.flex{display:flex}.grid{display:grid}.hidden{display:none}.aspect-\[256\/512\]{aspect-ratio:256/512}.h-0{height:0px}.h-full{height:100%}.max-h-\[9rem\]{max-height:9rem}.max-h-24{max-height:6rem}.w-0{width:0px}.w-full{width:100%}.max-w-full{max-width:100%}.max-w-\[3rem\]{max-width:3rem}.max-w-screen-md{max-width:768px}.-translate-x-1\/2{--tw-translate-x: -50%;transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skew(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}@-webkit-keyframes spin{to{transform:rotate(360deg)}}@keyframes spin{to{transform:rotate(360deg)}}.animate-spin{-webkit-animation:spin 1s linear infinite;animation:spin 1s linear infinite}.cursor-pointer{cursor:pointer}.snap-x{scroll-snap-type:x 
var(--tw-scroll-snap-strictness)}.snap-y{scroll-snap-type:y var(--tw-scroll-snap-strictness)}.snap-mandatory{--tw-scroll-snap-strictness: mandatory}.snap-start{scroll-snap-align:start}.snap-always{scroll-snap-stop:always}.grid-cols-2{grid-template-columns:repeat(2,minmax(0,1fr))}.grid-cols-\[2fr_1\.5fr\]{grid-template-columns:2fr 1.5fr}.flex-col{flex-direction:column}.flex-nowrap{flex-wrap:nowrap}.items-center{align-items:center}.justify-center{justify-content:center}.gap-2{gap:.5rem}.gap-1{gap:.25rem}.overflow-hidden{overflow:hidden}.overflow-clip{overflow:clip}.overflow-scroll{overflow:scroll}.overflow-x-scroll{overflow-x:scroll}.whitespace-nowrap{white-space:nowrap}.whitespace-pre{white-space:pre}.rounded-lg{border-radius:.5rem}.border{border-width:1px}.border-gray-500{--tw-border-opacity: 1;border-color:rgb(107 114 128 / var(--tw-border-opacity))}.border-gray-300{--tw-border-opacity: 1;border-color:rgb(209 213 219 / var(--tw-border-opacity))}.bg-white{--tw-bg-opacity: 1;background-color:rgb(255 255 255 / var(--tw-bg-opacity))}.bg-gray-50{--tw-bg-opacity: 1;background-color:rgb(249 250 251 / var(--tw-bg-opacity))}.p-3{padding:.75rem}.p-1{padding:.25rem}.px-2{padding-left:.5rem;padding-right:.5rem}.px-3{padding-left:.75rem;padding-right:.75rem}.py-5{padding-top:1.25rem;padding-bottom:1.25rem}.py-3{padding-top:.75rem;padding-bottom:.75rem}.pl-2{padding-left:.5rem}.text-base{font-size:1rem;line-height:1.5rem}.text-sm{font-size:.875rem;line-height:1.25rem}.text-xs{font-size:.75rem;line-height:1rem}.font-bold{font-weight:700}.leading-6{line-height:1.5rem}.text-black{--tw-text-opacity: 1;color:rgb(0 0 0 / var(--tw-text-opacity))}.text-white{--tw-text-opacity: 1;color:rgb(255 255 255 / var(--tw-text-opacity))}.text-gray-900{--tw-text-opacity: 1;color:rgb(17 24 39 / var(--tw-text-opacity))}.opacity-0{opacity:0}.opacity-30{opacity:.3}.outline{outline-style:solid}.outline-2{outline-width:2px}.outline-offset-\[-2px\]{outline-offset:-2px}.ring{--tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 #0000)}.transition-all{transition-property:all;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.duration-200{transition-duration:.2s}.ease-in-out{transition-timing-function:cubic-bezier(.4,0,.2,1)}.hover\:outline:hover{outline-style:solid}.focus\:border-blue-500:focus{--tw-border-opacity: 1;border-color:rgb(59 130 246 / var(--tw-border-opacity))}.focus\:ring-blue-500:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(59 130 246 / var(--tw-ring-opacity))}.disabled\:opacity-50:disabled{opacity:.5}@media (prefers-color-scheme: dark){.dark\:prose-invert{--tw-prose-body: var(--tw-prose-invert-body);--tw-prose-headings: var(--tw-prose-invert-headings);--tw-prose-lead: var(--tw-prose-invert-lead);--tw-prose-links: var(--tw-prose-invert-links);--tw-prose-bold: var(--tw-prose-invert-bold);--tw-prose-counters: var(--tw-prose-invert-counters);--tw-prose-bullets: var(--tw-prose-invert-bullets);--tw-prose-hr: var(--tw-prose-invert-hr);--tw-prose-quotes: var(--tw-prose-invert-quotes);--tw-prose-quote-borders: var(--tw-prose-invert-quote-borders);--tw-prose-captions: var(--tw-prose-invert-captions);--tw-prose-code: var(--tw-prose-invert-code);--tw-prose-pre-code: var(--tw-prose-invert-pre-code);--tw-prose-pre-bg: 
var(--tw-prose-invert-pre-bg);--tw-prose-th-borders: var(--tw-prose-invert-th-borders);--tw-prose-td-borders: var(--tw-prose-invert-td-borders)}.dark\:border-gray-300{--tw-border-opacity: 1;border-color:rgb(209 213 219 / var(--tw-border-opacity))}.dark\:border-gray-600{--tw-border-opacity: 1;border-color:rgb(75 85 99 / var(--tw-border-opacity))}.dark\:bg-\[rgb\(11\,15\,25\)\]{--tw-bg-opacity: 1;background-color:rgb(11 15 25 / var(--tw-bg-opacity))}.dark\:bg-gray-700{--tw-bg-opacity: 1;background-color:rgb(55 65 81 / var(--tw-bg-opacity))}.dark\:text-white{--tw-text-opacity: 1;color:rgb(255 255 255 / var(--tw-text-opacity))}.dark\:placeholder-gray-400::-moz-placeholder{--tw-placeholder-opacity: 1;color:rgb(156 163 175 / var(--tw-placeholder-opacity))}.dark\:placeholder-gray-400::placeholder{--tw-placeholder-opacity: 1;color:rgb(156 163 175 / var(--tw-placeholder-opacity))}.dark\:focus\:ring-blue-500:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(59 130 246 / var(--tw-ring-opacity))}}@media (min-width: 530px){.sm\:max-h-\[none\]{max-height:none}.sm\:grid-cols-3{grid-template-columns:repeat(3,minmax(0,1fr))}.sm\:grid-cols-2{grid-template-columns:repeat(2,minmax(0,1fr))}.sm\:flex-row{flex-direction:row}}
diff --git a/spaces/CVPR/lama-example/saicinpainting/training/modules/pix2pixhd.py b/spaces/CVPR/lama-example/saicinpainting/training/modules/pix2pixhd.py
deleted file mode 100644
index 08c6afd777a88cd232592acbbf0ef25db8d43217..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/saicinpainting/training/modules/pix2pixhd.py
+++ /dev/null
@@ -1,669 +0,0 @@
-# original: https://github.com/NVIDIA/pix2pixHD/blob/master/models/networks.py
-import collections
-from functools import partial
-import functools
-import logging
-from collections import defaultdict
-
-import numpy as np
-import torch.nn as nn
-
-from saicinpainting.training.modules.base import BaseDiscriminator, deconv_factory, get_conv_block_ctor, get_norm_layer, get_activation
-from saicinpainting.training.modules.ffc import FFCResnetBlock
-from saicinpainting.training.modules.multidilated_conv import MultidilatedConv
-
-class DotDict(defaultdict):
- # https://stackoverflow.com/questions/2352181/how-to-use-a-dot-to-access-members-of-dictionary
- """dot.notation access to dictionary attributes"""
- __getattr__ = defaultdict.get
- __setattr__ = defaultdict.__setitem__
- __delattr__ = defaultdict.__delitem__
-
-class Identity(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, x):
- return x
-
-
-class ResnetBlock(nn.Module):
- def __init__(self, dim, padding_type, norm_layer, activation=nn.ReLU(True), use_dropout=False, conv_kind='default',
- dilation=1, in_dim=None, groups=1, second_dilation=None):
- super(ResnetBlock, self).__init__()
- self.in_dim = in_dim
- self.dim = dim
- if second_dilation is None:
- second_dilation = dilation
- self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, activation, use_dropout,
- conv_kind=conv_kind, dilation=dilation, in_dim=in_dim, groups=groups,
- second_dilation=second_dilation)
-
- if self.in_dim is not None:
- self.input_conv = nn.Conv2d(in_dim, dim, 1)
-
- self.out_channnels = dim
-
- def build_conv_block(self, dim, padding_type, norm_layer, activation, use_dropout, conv_kind='default',
- dilation=1, in_dim=None, groups=1, second_dilation=1):
- conv_layer = get_conv_block_ctor(conv_kind)
-
- conv_block = []
- p = 0
- if padding_type == 'reflect':
- conv_block += [nn.ReflectionPad2d(dilation)]
- elif padding_type == 'replicate':
- conv_block += [nn.ReplicationPad2d(dilation)]
- elif padding_type == 'zero':
- p = dilation
- else:
- raise NotImplementedError('padding [%s] is not implemented' % padding_type)
-
- if in_dim is None:
- in_dim = dim
-
- conv_block += [conv_layer(in_dim, dim, kernel_size=3, padding=p, dilation=dilation),
- norm_layer(dim),
- activation]
- if use_dropout:
- conv_block += [nn.Dropout(0.5)]
-
- p = 0
- if padding_type == 'reflect':
- conv_block += [nn.ReflectionPad2d(second_dilation)]
- elif padding_type == 'replicate':
- conv_block += [nn.ReplicationPad2d(second_dilation)]
- elif padding_type == 'zero':
- p = second_dilation
- else:
- raise NotImplementedError('padding [%s] is not implemented' % padding_type)
- conv_block += [conv_layer(dim, dim, kernel_size=3, padding=p, dilation=second_dilation, groups=groups),
- norm_layer(dim)]
-
- return nn.Sequential(*conv_block)
-
- def forward(self, x):
- x_before = x
- if self.in_dim is not None:
- x = self.input_conv(x)
- out = x + self.conv_block(x_before)
- return out
-
-class ResnetBlock5x5(nn.Module):
- def __init__(self, dim, padding_type, norm_layer, activation=nn.ReLU(True), use_dropout=False, conv_kind='default',
- dilation=1, in_dim=None, groups=1, second_dilation=None):
- super(ResnetBlock5x5, self).__init__()
- self.in_dim = in_dim
- self.dim = dim
- if second_dilation is None:
- second_dilation = dilation
- self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, activation, use_dropout,
- conv_kind=conv_kind, dilation=dilation, in_dim=in_dim, groups=groups,
- second_dilation=second_dilation)
-
- if self.in_dim is not None:
- self.input_conv = nn.Conv2d(in_dim, dim, 1)
-
- self.out_channnels = dim
-
- def build_conv_block(self, dim, padding_type, norm_layer, activation, use_dropout, conv_kind='default',
- dilation=1, in_dim=None, groups=1, second_dilation=1):
- conv_layer = get_conv_block_ctor(conv_kind)
-
- conv_block = []
- p = 0
- if padding_type == 'reflect':
- conv_block += [nn.ReflectionPad2d(dilation * 2)]
- elif padding_type == 'replicate':
- conv_block += [nn.ReplicationPad2d(dilation * 2)]
- elif padding_type == 'zero':
- p = dilation * 2
- else:
- raise NotImplementedError('padding [%s] is not implemented' % padding_type)
-
- if in_dim is None:
- in_dim = dim
-
- conv_block += [conv_layer(in_dim, dim, kernel_size=5, padding=p, dilation=dilation),
- norm_layer(dim),
- activation]
- if use_dropout:
- conv_block += [nn.Dropout(0.5)]
-
- p = 0
- if padding_type == 'reflect':
- conv_block += [nn.ReflectionPad2d(second_dilation * 2)]
- elif padding_type == 'replicate':
- conv_block += [nn.ReplicationPad2d(second_dilation * 2)]
- elif padding_type == 'zero':
- p = second_dilation * 2
- else:
- raise NotImplementedError('padding [%s] is not implemented' % padding_type)
- conv_block += [conv_layer(dim, dim, kernel_size=5, padding=p, dilation=second_dilation, groups=groups),
- norm_layer(dim)]
-
- return nn.Sequential(*conv_block)
-
- def forward(self, x):
- x_before = x
- if self.in_dim is not None:
- x = self.input_conv(x)
- out = x + self.conv_block(x_before)
- return out
-
-
-class MultidilatedResnetBlock(nn.Module):
- def __init__(self, dim, padding_type, conv_layer, norm_layer, activation=nn.ReLU(True), use_dropout=False):
- super().__init__()
- self.conv_block = self.build_conv_block(dim, padding_type, conv_layer, norm_layer, activation, use_dropout)
-
- def build_conv_block(self, dim, padding_type, conv_layer, norm_layer, activation, use_dropout, dilation=1):
- conv_block = []
- conv_block += [conv_layer(dim, dim, kernel_size=3, padding_mode=padding_type),
- norm_layer(dim),
- activation]
- if use_dropout:
- conv_block += [nn.Dropout(0.5)]
-
- conv_block += [conv_layer(dim, dim, kernel_size=3, padding_mode=padding_type),
- norm_layer(dim)]
-
- return nn.Sequential(*conv_block)
-
- def forward(self, x):
- out = x + self.conv_block(x)
- return out
-
-
-class MultiDilatedGlobalGenerator(nn.Module):
- def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3,
- n_blocks=3, norm_layer=nn.BatchNorm2d,
- padding_type='reflect', conv_kind='default',
- deconv_kind='convtranspose', activation=nn.ReLU(True),
- up_norm_layer=nn.BatchNorm2d, affine=None, up_activation=nn.ReLU(True),
- add_out_act=True, max_features=1024, multidilation_kwargs={},
- ffc_positions=None, ffc_kwargs={}):
- assert (n_blocks >= 0)
- super().__init__()
-
- conv_layer = get_conv_block_ctor(conv_kind)
- resnet_conv_layer = functools.partial(get_conv_block_ctor('multidilated'), **multidilation_kwargs)
- norm_layer = get_norm_layer(norm_layer)
- if affine is not None:
- norm_layer = partial(norm_layer, affine=affine)
- up_norm_layer = get_norm_layer(up_norm_layer)
- if affine is not None:
- up_norm_layer = partial(up_norm_layer, affine=affine)
-
- model = [nn.ReflectionPad2d(3),
- conv_layer(input_nc, ngf, kernel_size=7, padding=0),
- norm_layer(ngf),
- activation]
-
- identity = Identity()
- ### downsample
- for i in range(n_downsampling):
- mult = 2 ** i
-
- model += [conv_layer(min(max_features, ngf * mult),
- min(max_features, ngf * mult * 2),
- kernel_size=3, stride=2, padding=1),
- norm_layer(min(max_features, ngf * mult * 2)),
- activation]
-
- mult = 2 ** n_downsampling
- feats_num_bottleneck = min(max_features, ngf * mult)
-
- ### resnet blocks
- for i in range(n_blocks):
- if ffc_positions is not None and i in ffc_positions:
- model += [FFCResnetBlock(feats_num_bottleneck, padding_type, norm_layer, activation_layer=nn.ReLU,
- inline=True, **ffc_kwargs)]
- model += [MultidilatedResnetBlock(feats_num_bottleneck, padding_type=padding_type,
- conv_layer=resnet_conv_layer, activation=activation,
- norm_layer=norm_layer)]
-
- ### upsample
- for i in range(n_downsampling):
- mult = 2 ** (n_downsampling - i)
- model += deconv_factory(deconv_kind, ngf, mult, up_norm_layer, up_activation, max_features)
- model += [nn.ReflectionPad2d(3),
- nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
- if add_out_act:
- model.append(get_activation('tanh' if add_out_act is True else add_out_act))
- self.model = nn.Sequential(*model)
-
- def forward(self, input):
- return self.model(input)
-
-class ConfigGlobalGenerator(nn.Module):
- def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3,
- n_blocks=3, norm_layer=nn.BatchNorm2d,
- padding_type='reflect', conv_kind='default',
- deconv_kind='convtranspose', activation=nn.ReLU(True),
- up_norm_layer=nn.BatchNorm2d, affine=None, up_activation=nn.ReLU(True),
- add_out_act=True, max_features=1024,
- manual_block_spec=[],
- resnet_block_kind='multidilatedresnetblock',
- resnet_conv_kind='multidilated',
- resnet_dilation=1,
- multidilation_kwargs={}):
- assert (n_blocks >= 0)
- super().__init__()
-
- conv_layer = get_conv_block_ctor(conv_kind)
- resnet_conv_layer = functools.partial(get_conv_block_ctor(resnet_conv_kind), **multidilation_kwargs)
- norm_layer = get_norm_layer(norm_layer)
- if affine is not None:
- norm_layer = partial(norm_layer, affine=affine)
- up_norm_layer = get_norm_layer(up_norm_layer)
- if affine is not None:
- up_norm_layer = partial(up_norm_layer, affine=affine)
-
- model = [nn.ReflectionPad2d(3),
- conv_layer(input_nc, ngf, kernel_size=7, padding=0),
- norm_layer(ngf),
- activation]
-
- identity = Identity()
-
- ### downsample
- for i in range(n_downsampling):
- mult = 2 ** i
- model += [conv_layer(min(max_features, ngf * mult),
- min(max_features, ngf * mult * 2),
- kernel_size=3, stride=2, padding=1),
- norm_layer(min(max_features, ngf * mult * 2)),
- activation]
-
- mult = 2 ** n_downsampling
- feats_num_bottleneck = min(max_features, ngf * mult)
-
- if len(manual_block_spec) == 0:
- manual_block_spec = [
- DotDict(lambda : None, {
- 'n_blocks': n_blocks,
- 'use_default': True})
- ]
-
- ### resnet blocks
- for block_spec in manual_block_spec:
- def make_and_add_blocks(model, block_spec):
- block_spec = DotDict(lambda : None, block_spec)
- if not block_spec.use_default:
- resnet_conv_layer = functools.partial(get_conv_block_ctor(block_spec.resnet_conv_kind), **block_spec.multidilation_kwargs)
- resnet_conv_kind = block_spec.resnet_conv_kind
- resnet_block_kind = block_spec.resnet_block_kind
- if block_spec.resnet_dilation is not None:
- resnet_dilation = block_spec.resnet_dilation
- for i in range(block_spec.n_blocks):
- if resnet_block_kind == "multidilatedresnetblock":
- model += [MultidilatedResnetBlock(feats_num_bottleneck, padding_type=padding_type,
- conv_layer=resnet_conv_layer, activation=activation,
- norm_layer=norm_layer)]
- if resnet_block_kind == "resnetblock":
- model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer,
- conv_kind=resnet_conv_kind)]
- if resnet_block_kind == "resnetblock5x5":
- model += [ResnetBlock5x5(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer,
- conv_kind=resnet_conv_kind)]
- if resnet_block_kind == "resnetblockdwdil":
- model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer,
- conv_kind=resnet_conv_kind, dilation=resnet_dilation, second_dilation=resnet_dilation)]
- make_and_add_blocks(model, block_spec)
-
- ### upsample
- for i in range(n_downsampling):
- mult = 2 ** (n_downsampling - i)
- model += deconv_factory(deconv_kind, ngf, mult, up_norm_layer, up_activation, max_features)
- model += [nn.ReflectionPad2d(3),
- nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
- if add_out_act:
- model.append(get_activation('tanh' if add_out_act is True else add_out_act))
- self.model = nn.Sequential(*model)
-
- def forward(self, input):
- return self.model(input)
-
-
-def make_dil_blocks(dilated_blocks_n, dilation_block_kind, dilated_block_kwargs):
- blocks = []
- for i in range(dilated_blocks_n):
- if dilation_block_kind == 'simple':
- blocks.append(ResnetBlock(**dilated_block_kwargs, dilation=2 ** (i + 1)))
- elif dilation_block_kind == 'multi':
- blocks.append(MultidilatedResnetBlock(**dilated_block_kwargs))
- else:
- raise ValueError(f'Unknown dilation_block_kind "{dilation_block_kind}"')
- return blocks
-
-
-class GlobalGenerator(nn.Module):
- def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d,
- padding_type='reflect', conv_kind='default', activation=nn.ReLU(True),
- up_norm_layer=nn.BatchNorm2d, affine=None,
- up_activation=nn.ReLU(True), dilated_blocks_n=0, dilated_blocks_n_start=0,
- dilated_blocks_n_middle=0,
- add_out_act=True,
- max_features=1024, is_resblock_depthwise=False,
- ffc_positions=None, ffc_kwargs={}, dilation=1, second_dilation=None,
- dilation_block_kind='simple', multidilation_kwargs={}):
- assert (n_blocks >= 0)
- super().__init__()
-
- conv_layer = get_conv_block_ctor(conv_kind)
- norm_layer = get_norm_layer(norm_layer)
- if affine is not None:
- norm_layer = partial(norm_layer, affine=affine)
- up_norm_layer = get_norm_layer(up_norm_layer)
- if affine is not None:
- up_norm_layer = partial(up_norm_layer, affine=affine)
-
- if ffc_positions is not None:
- ffc_positions = collections.Counter(ffc_positions)
-
- model = [nn.ReflectionPad2d(3),
- conv_layer(input_nc, ngf, kernel_size=7, padding=0),
- norm_layer(ngf),
- activation]
-
- identity = Identity()
- ### downsample
- for i in range(n_downsampling):
- mult = 2 ** i
-
- model += [conv_layer(min(max_features, ngf * mult),
- min(max_features, ngf * mult * 2),
- kernel_size=3, stride=2, padding=1),
- norm_layer(min(max_features, ngf * mult * 2)),
- activation]
-
- mult = 2 ** n_downsampling
- feats_num_bottleneck = min(max_features, ngf * mult)
-
- dilated_block_kwargs = dict(dim=feats_num_bottleneck, padding_type=padding_type,
- activation=activation, norm_layer=norm_layer)
- if dilation_block_kind == 'simple':
- dilated_block_kwargs['conv_kind'] = conv_kind
- elif dilation_block_kind == 'multi':
- dilated_block_kwargs['conv_layer'] = functools.partial(
- get_conv_block_ctor('multidilated'), **multidilation_kwargs)
-
- # dilated blocks at the start of the bottleneck sausage
- if dilated_blocks_n_start is not None and dilated_blocks_n_start > 0:
- model += make_dil_blocks(dilated_blocks_n_start, dilation_block_kind, dilated_block_kwargs)
-
- # resnet blocks
- for i in range(n_blocks):
- # dilated blocks at the middle of the bottleneck sausage
- if i == n_blocks // 2 and dilated_blocks_n_middle is not None and dilated_blocks_n_middle > 0:
- model += make_dil_blocks(dilated_blocks_n_middle, dilation_block_kind, dilated_block_kwargs)
-
- if ffc_positions is not None and i in ffc_positions:
- for _ in range(ffc_positions[i]): # same position can occur more than once
- model += [FFCResnetBlock(feats_num_bottleneck, padding_type, norm_layer, activation_layer=nn.ReLU,
- inline=True, **ffc_kwargs)]
-
- if is_resblock_depthwise:
- resblock_groups = feats_num_bottleneck
- else:
- resblock_groups = 1
-
- model += [ResnetBlock(feats_num_bottleneck, padding_type=padding_type, activation=activation,
- norm_layer=norm_layer, conv_kind=conv_kind, groups=resblock_groups,
- dilation=dilation, second_dilation=second_dilation)]
-
-
- # dilated blocks at the end of the bottleneck sausage
- if dilated_blocks_n is not None and dilated_blocks_n > 0:
- model += make_dil_blocks(dilated_blocks_n, dilation_block_kind, dilated_block_kwargs)
-
- # upsample
- for i in range(n_downsampling):
- mult = 2 ** (n_downsampling - i)
- model += [nn.ConvTranspose2d(min(max_features, ngf * mult),
- min(max_features, int(ngf * mult / 2)),
- kernel_size=3, stride=2, padding=1, output_padding=1),
- up_norm_layer(min(max_features, int(ngf * mult / 2))),
- up_activation]
- model += [nn.ReflectionPad2d(3),
- nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
- if add_out_act:
- model.append(get_activation('tanh' if add_out_act is True else add_out_act))
- self.model = nn.Sequential(*model)
-
- def forward(self, input):
- return self.model(input)
-
-
-class GlobalGeneratorGated(GlobalGenerator):
- def __init__(self, *args, **kwargs):
- real_kwargs=dict(
- conv_kind='gated_bn_relu',
- activation=nn.Identity(),
- norm_layer=nn.Identity
- )
- real_kwargs.update(kwargs)
- super().__init__(*args, **real_kwargs)
-
-
-class GlobalGeneratorFromSuperChannels(nn.Module):
- def __init__(self, input_nc, output_nc, n_downsampling, n_blocks, super_channels, norm_layer="bn", padding_type='reflect', add_out_act=True):
- super().__init__()
- self.n_downsampling = n_downsampling
- norm_layer = get_norm_layer(norm_layer)
- if type(norm_layer) == functools.partial:
- use_bias = (norm_layer.func == nn.InstanceNorm2d)
- else:
- use_bias = (norm_layer == nn.InstanceNorm2d)
-
- channels = self.convert_super_channels(super_channels)
- self.channels = channels
-
- model = [nn.ReflectionPad2d(3),
- nn.Conv2d(input_nc, channels[0], kernel_size=7, padding=0, bias=use_bias),
- norm_layer(channels[0]),
- nn.ReLU(True)]
-
- for i in range(n_downsampling): # add downsampling layers
- mult = 2 ** i
- model += [nn.Conv2d(channels[0+i], channels[1+i], kernel_size=3, stride=2, padding=1, bias=use_bias),
- norm_layer(channels[1+i]),
- nn.ReLU(True)]
-
- mult = 2 ** n_downsampling
-
- n_blocks1 = n_blocks // 3
- n_blocks2 = n_blocks1
- n_blocks3 = n_blocks - n_blocks1 - n_blocks2
-
- for i in range(n_blocks1):
- c = n_downsampling
- dim = channels[c]
- model += [ResnetBlock(dim, padding_type=padding_type, norm_layer=norm_layer)]
-
- for i in range(n_blocks2):
- c = n_downsampling+1
- dim = channels[c]
- kwargs = {}
- if i == 0:
- kwargs = {"in_dim": channels[c-1]}
- model += [ResnetBlock(dim, padding_type=padding_type, norm_layer=norm_layer, **kwargs)]
-
- for i in range(n_blocks3):
- c = n_downsampling+2
- dim = channels[c]
- kwargs = {}
- if i == 0:
- kwargs = {"in_dim": channels[c-1]}
- model += [ResnetBlock(dim, padding_type=padding_type, norm_layer=norm_layer, **kwargs)]
-
- for i in range(n_downsampling): # add upsampling layers
- mult = 2 ** (n_downsampling - i)
- model += [nn.ConvTranspose2d(channels[n_downsampling+3+i],
- channels[n_downsampling+3+i+1],
- kernel_size=3, stride=2,
- padding=1, output_padding=1,
- bias=use_bias),
- norm_layer(channels[n_downsampling+3+i+1]),
- nn.ReLU(True)]
- model += [nn.ReflectionPad2d(3)]
- model += [nn.Conv2d(channels[2*n_downsampling+3], output_nc, kernel_size=7, padding=0)]
-
- if add_out_act:
- model.append(get_activation('tanh' if add_out_act is True else add_out_act))
- self.model = nn.Sequential(*model)
-
- def convert_super_channels(self, super_channels):
- n_downsampling = self.n_downsampling
- result = []
- cnt = 0
-
- if n_downsampling == 2:
- N1 = 10
- elif n_downsampling == 3:
- N1 = 13
- else:
- raise NotImplementedError
-
- for i in range(0, N1):
- if i in [1,4,7,10]:
- channel = super_channels[cnt] * (2 ** cnt)
- config = {'channel': channel}
- result.append(channel)
- logging.info(f"Downsample channels {result[-1]}")
- cnt += 1
-
- for i in range(3):
- for counter, j in enumerate(range(N1 + i * 3, N1 + 3 + i * 3)):
- if len(super_channels) == 6:
- channel = super_channels[3] * 4
- else:
- channel = super_channels[i + 3] * 4
- config = {'channel': channel}
- if counter == 0:
- result.append(channel)
- logging.info(f"Bottleneck channels {result[-1]}")
- cnt = 2
-
- for i in range(N1+9, N1+21):
- if i in [22, 25,28]:
- cnt -= 1
- if len(super_channels) == 6:
- channel = super_channels[5 - cnt] * (2 ** cnt)
- else:
- channel = super_channels[7 - cnt] * (2 ** cnt)
- result.append(int(channel))
- logging.info(f"Upsample channels {result[-1]}")
- return result
-
- def forward(self, input):
- return self.model(input)
-
-
-# Defines the PatchGAN discriminator with the specified arguments.
-class NLayerDiscriminator(BaseDiscriminator):
- def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d,):
- super().__init__()
- self.n_layers = n_layers
-
- kw = 4
- padw = int(np.ceil((kw-1.0)/2))
- sequence = [[nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw),
- nn.LeakyReLU(0.2, True)]]
-
- nf = ndf
- for n in range(1, n_layers):
- nf_prev = nf
- nf = min(nf * 2, 512)
-
- cur_model = []
- cur_model += [
- nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=2, padding=padw),
- norm_layer(nf),
- nn.LeakyReLU(0.2, True)
- ]
- sequence.append(cur_model)
-
- nf_prev = nf
- nf = min(nf * 2, 512)
-
- cur_model = []
- cur_model += [
- nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=1, padding=padw),
- norm_layer(nf),
- nn.LeakyReLU(0.2, True)
- ]
- sequence.append(cur_model)
-
- sequence += [[nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw)]]
-
- for n in range(len(sequence)):
- setattr(self, 'model'+str(n), nn.Sequential(*sequence[n]))
-
- def get_all_activations(self, x):
- res = [x]
- for n in range(self.n_layers + 2):
- model = getattr(self, 'model' + str(n))
- res.append(model(res[-1]))
- return res[1:]
-
- def forward(self, x):
- act = self.get_all_activations(x)
- return act[-1], act[:-1]
-
-
-class MultidilatedNLayerDiscriminator(BaseDiscriminator):
- def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, multidilation_kwargs={}):
- super().__init__()
- self.n_layers = n_layers
-
- kw = 4
- padw = int(np.ceil((kw-1.0)/2))
- sequence = [[nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw),
- nn.LeakyReLU(0.2, True)]]
-
- nf = ndf
- for n in range(1, n_layers):
- nf_prev = nf
- nf = min(nf * 2, 512)
-
- cur_model = []
- cur_model += [
- MultidilatedConv(nf_prev, nf, kernel_size=kw, stride=2, padding=[2, 3], **multidilation_kwargs),
- norm_layer(nf),
- nn.LeakyReLU(0.2, True)
- ]
- sequence.append(cur_model)
-
- nf_prev = nf
- nf = min(nf * 2, 512)
-
- cur_model = []
- cur_model += [
- nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=1, padding=padw),
- norm_layer(nf),
- nn.LeakyReLU(0.2, True)
- ]
- sequence.append(cur_model)
-
- sequence += [[nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw)]]
-
- for n in range(len(sequence)):
- setattr(self, 'model'+str(n), nn.Sequential(*sequence[n]))
-
- def get_all_activations(self, x):
- res = [x]
- for n in range(self.n_layers + 2):
- model = getattr(self, 'model' + str(n))
- res.append(model(res[-1]))
- return res[1:]
-
- def forward(self, x):
- act = self.get_all_activations(x)
- return act[-1], act[:-1]
-
-
-class NLayerDiscriminatorAsGen(NLayerDiscriminator):
- def forward(self, x):
- return super().forward(x)[0]
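
The `DotDict` helper near the top of this file is its one non-obvious utility: it subclasses `defaultdict` so that `ConfigGlobalGenerator` can read block-spec fields with attribute syntax, with missing keys resolving to `None`. A minimal standalone sketch of that behavior (illustrative only; the spec values below are made up):

```python
from collections import defaultdict

class DotDict(defaultdict):
    """dot.notation access to dictionary attributes (as in pix2pixhd.py above)."""
    __getattr__ = defaultdict.get
    __setattr__ = defaultdict.__setitem__
    __delattr__ = defaultdict.__delitem__

# Attribute reads go through dict.get, so keys that were never set return None
# instead of raising AttributeError; keys that were set return their value.
spec = DotDict(lambda: None, {"n_blocks": 3, "use_default": True})
print(spec.n_blocks)         # 3
print(spec.resnet_dilation)  # None -- absent key, treated as "use the default"
```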
diff --git a/spaces/CVPR/regionclip-demo/datasets/prepare_panoptic_fpn.py b/spaces/CVPR/regionclip-demo/datasets/prepare_panoptic_fpn.py
deleted file mode 100644
index 597d791afab1bcc0013203a66c7fba225065eebe..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/datasets/prepare_panoptic_fpn.py
+++ /dev/null
@@ -1,116 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import functools
-import json
-import multiprocessing as mp
-import numpy as np
-import os
-import time
-from fvcore.common.download import download
-from panopticapi.utils import rgb2id
-from PIL import Image
-
-from detectron2.data.datasets.builtin_meta import COCO_CATEGORIES
-
-
-def _process_panoptic_to_semantic(input_panoptic, output_semantic, segments, id_map):
- panoptic = np.asarray(Image.open(input_panoptic), dtype=np.uint32)
- panoptic = rgb2id(panoptic)
- output = np.zeros_like(panoptic, dtype=np.uint8) + 255
- for seg in segments:
- cat_id = seg["category_id"]
- new_cat_id = id_map[cat_id]
- output[panoptic == seg["id"]] = new_cat_id
- Image.fromarray(output).save(output_semantic)
-
-
-def separate_coco_semantic_from_panoptic(panoptic_json, panoptic_root, sem_seg_root, categories):
- """
- Create semantic segmentation annotations from panoptic segmentation
- annotations, to be used by PanopticFPN.
-
- It maps all thing categories to class 0, and maps all unlabeled pixels to class 255.
- It maps all stuff categories to contiguous ids starting from 1.
-
- Args:
- panoptic_json (str): path to the panoptic json file, in COCO's format.
- panoptic_root (str): a directory with panoptic annotation files, in COCO's format.
- sem_seg_root (str): a directory to output semantic annotation files
- categories (list[dict]): category metadata. Each dict needs to have:
- "id": corresponds to the "category_id" in the json annotations
- "isthing": 0 or 1
- """
- os.makedirs(sem_seg_root, exist_ok=True)
-
- stuff_ids = [k["id"] for k in categories if k["isthing"] == 0]
- thing_ids = [k["id"] for k in categories if k["isthing"] == 1]
- id_map = {} # map from category id to id in the output semantic annotation
- assert len(stuff_ids) <= 254
- for i, stuff_id in enumerate(stuff_ids):
- id_map[stuff_id] = i + 1
- for thing_id in thing_ids:
- id_map[thing_id] = 0
- id_map[0] = 255
-
- with open(panoptic_json) as f:
- obj = json.load(f)
-
- pool = mp.Pool(processes=max(mp.cpu_count() // 2, 4))
-
- def iter_annotations():
- for anno in obj["annotations"]:
- file_name = anno["file_name"]
- segments = anno["segments_info"]
- input = os.path.join(panoptic_root, file_name)
- output = os.path.join(sem_seg_root, file_name)
- yield input, output, segments
-
- print("Start writing to {} ...".format(sem_seg_root))
- start = time.time()
- pool.starmap(
- functools.partial(_process_panoptic_to_semantic, id_map=id_map),
- iter_annotations(),
- chunksize=100,
- )
- print("Finished. time: {:.2f}s".format(time.time() - start))
-
-
-if __name__ == "__main__":
- dataset_dir = os.path.join(os.getenv("DETECTRON2_DATASETS", "datasets"), "coco")
- for s in ["val2017", "train2017"]:
- separate_coco_semantic_from_panoptic(
- os.path.join(dataset_dir, "annotations/panoptic_{}.json".format(s)),
- os.path.join(dataset_dir, "panoptic_{}".format(s)),
- os.path.join(dataset_dir, "panoptic_stuff_{}".format(s)),
- COCO_CATEGORIES,
- )
-
- # Prepare val2017_100 for quick testing:
-
- dest_dir = os.path.join(dataset_dir, "annotations/")
- URL_PREFIX = "https://dl.fbaipublicfiles.com/detectron2/"
- download(URL_PREFIX + "annotations/coco/panoptic_val2017_100.json", dest_dir)
- with open(os.path.join(dest_dir, "panoptic_val2017_100.json")) as f:
- obj = json.load(f)
-
- def link_val100(dir_full, dir_100):
- print("Creating " + dir_100 + " ...")
- os.makedirs(dir_100, exist_ok=True)
- for img in obj["images"]:
- basename = os.path.splitext(img["file_name"])[0]
- src = os.path.join(dir_full, basename + ".png")
- dst = os.path.join(dir_100, basename + ".png")
- src = os.path.relpath(src, start=dir_100)
- os.symlink(src, dst)
-
- link_val100(
- os.path.join(dataset_dir, "panoptic_val2017"),
- os.path.join(dataset_dir, "panoptic_val2017_100"),
- )
-
- link_val100(
- os.path.join(dataset_dir, "panoptic_stuff_val2017"),
- os.path.join(dataset_dir, "panoptic_stuff_val2017_100"),
- )
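
The docstring of `separate_coco_semantic_from_panoptic` above describes the remapping in words: stuff categories get contiguous ids starting at 1, every thing category collapses to 0, and unlabeled pixels (panoptic id 0) become 255. A toy sketch of that `id_map` construction, using made-up category ids purely for illustration:

```python
# Tiny stand-in for COCO_CATEGORIES; the ids and isthing flags are invented.
categories = [
    {"id": 1,  "isthing": 1},   # a "thing" category
    {"id": 84, "isthing": 0},   # a "stuff" category
    {"id": 93, "isthing": 0},   # another "stuff" category
]

stuff_ids = [k["id"] for k in categories if k["isthing"] == 0]
thing_ids = [k["id"] for k in categories if k["isthing"] == 1]

id_map = {stuff_id: i + 1 for i, stuff_id in enumerate(stuff_ids)}  # stuff -> 1..N
id_map.update({thing_id: 0 for thing_id in thing_ids})              # things -> 0
id_map[0] = 255                                                     # unlabeled -> 255

print(id_map)  # {84: 1, 93: 2, 1: 0, 0: 255}
```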
diff --git a/spaces/CVPR/regionclip-demo/detectron2/config/__init__.py b/spaces/CVPR/regionclip-demo/detectron2/config/__init__.py
deleted file mode 100644
index 4e648e632d55c70f160d49630378d202fbde4e45..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/config/__init__.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .compat import downgrade_config, upgrade_config
-from .config import CfgNode, get_cfg, global_cfg, set_global_cfg, configurable
-from .instantiate import instantiate
-from .lazy import LazyCall, LazyConfig
-
-__all__ = [
- "CfgNode",
- "get_cfg",
- "global_cfg",
- "set_global_cfg",
- "downgrade_config",
- "upgrade_config",
- "configurable",
- "instantiate",
- "LazyCall",
- "LazyConfig",
-]
-
-
-from detectron2.utils.env import fixup_module_metadata
-
-fixup_module_metadata(__name__, globals(), __all__)
-del fixup_module_metadata
diff --git a/spaces/ChevyWithAI/rvc-aicover/infer_pack/modules.py b/spaces/ChevyWithAI/rvc-aicover/infer_pack/modules.py
deleted file mode 100644
index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000
--- a/spaces/ChevyWithAI/rvc-aicover/infer_pack/modules.py
+++ /dev/null
@@ -1,522 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from infer_pack.transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
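A hedged usage sketch for the flow layers defined above, assuming the Space's `infer_pack` package is importable: `ResidualCouplingLayer` transforms half the channels conditioned on the other half, returns a log-determinant on the forward pass, and inverts exactly with `reverse=True`.

```python
# Sketch: forward/inverse round-trip through ResidualCouplingLayer.
# Assumes the infer_pack package from the deleted Space is on the Python path.
import torch
from infer_pack.modules import ResidualCouplingLayer

flow = ResidualCouplingLayer(
    channels=192, hidden_channels=256, kernel_size=5, dilation_rate=1, n_layers=3
)
x = torch.randn(2, 192, 40)    # [batch, channels, frames]
x_mask = torch.ones(2, 1, 40)  # all frames valid

z, logdet = flow(x, x_mask)            # forward: transformed tensor + log|det J| per sample
x_rec = flow(z, x_mask, reverse=True)  # inverse: recovers the input
print(torch.allclose(x, x_rec, atol=1e-4), logdet.shape)  # True torch.Size([2])
```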
diff --git a/spaces/Cvandi/remake/realesrgan/utils.py b/spaces/Cvandi/remake/realesrgan/utils.py
deleted file mode 100644
index 10e7c23d04f777c250160e74470fdfacb16eab88..0000000000000000000000000000000000000000
--- a/spaces/Cvandi/remake/realesrgan/utils.py
+++ /dev/null
@@ -1,280 +0,0 @@
-import cv2
-import math
-import numpy as np
-import os
-import queue
-import threading
-import torch
-from basicsr.utils.download_util import load_file_from_url
-from torch.nn import functional as F
-
-ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-
-
-class RealESRGANer():
- """A helper class for upsampling images with RealESRGAN.
-
- Args:
- scale (int): Upsampling scale factor used in the networks. It is usually 2 or 4.
- model_path (str): Path to the pretrained model. It can also be a URL (the weights are downloaded automatically).
- model (nn.Module): The defined network. Default: None.
- tile (int): Because very large inputs can exhaust GPU memory, this option first crops the
- input image into tiles, processes each tile, and then merges the results back into one image.
- 0 means tiling is disabled. Default: 0.
- tile_pad (int): The pad size for each tile, used to remove border artifacts. Default: 10.
- pre_pad (int): Pad the input images to avoid border artifacts. Default: 10.
- half (bool): Whether to use half precision during inference. Default: False.
- """
-
- def __init__(self, scale, model_path, model=None, tile=0, tile_pad=10, pre_pad=10, half=False):
- self.scale = scale
- self.tile_size = tile
- self.tile_pad = tile_pad
- self.pre_pad = pre_pad
- self.mod_scale = None
- self.half = half
-
- # initialize model
- self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
- # if the model_path starts with https, it will first download models to the folder: realesrgan/weights
- if model_path.startswith('https://'):
- model_path = load_file_from_url(
- url=model_path, model_dir=os.path.join(ROOT_DIR, 'realesrgan/weights'), progress=True, file_name=None)
- loadnet = torch.load(model_path, map_location=torch.device('cpu'))
- # prefer to use params_ema
- if 'params_ema' in loadnet:
- keyname = 'params_ema'
- else:
- keyname = 'params'
- model.load_state_dict(loadnet[keyname], strict=True)
- model.eval()
- self.model = model.to(self.device)
- if self.half:
- self.model = self.model.half()
-
- def pre_process(self, img):
- """Pre-process, such as pre-pad and mod pad, so that the images can be divisible
- """
- img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float()
- self.img = img.unsqueeze(0).to(self.device)
- if self.half:
- self.img = self.img.half()
-
- # pre_pad
- if self.pre_pad != 0:
- self.img = F.pad(self.img, (0, self.pre_pad, 0, self.pre_pad), 'reflect')
- # mod pad for divisible borders
- if self.scale == 2:
- self.mod_scale = 2
- elif self.scale == 1:
- self.mod_scale = 4
- if self.mod_scale is not None:
- self.mod_pad_h, self.mod_pad_w = 0, 0
- _, _, h, w = self.img.size()
- if (h % self.mod_scale != 0):
- self.mod_pad_h = (self.mod_scale - h % self.mod_scale)
- if (w % self.mod_scale != 0):
- self.mod_pad_w = (self.mod_scale - w % self.mod_scale)
- self.img = F.pad(self.img, (0, self.mod_pad_w, 0, self.mod_pad_h), 'reflect')
-
- def process(self):
- # model inference
- self.output = self.model(self.img)
-
- def tile_process(self):
- """It will first crop the input image into tiles and process each tile separately.
- Finally, all the processed tiles are merged into one image.
-
- Modified from: https://github.com/ata4/esrgan-launcher
- """
- batch, channel, height, width = self.img.shape
- output_height = height * self.scale
- output_width = width * self.scale
- output_shape = (batch, channel, output_height, output_width)
-
- # start with black image
- self.output = self.img.new_zeros(output_shape)
- tiles_x = math.ceil(width / self.tile_size)
- tiles_y = math.ceil(height / self.tile_size)
-
- # loop over all tiles
- for y in range(tiles_y):
- for x in range(tiles_x):
- # extract tile from input image
- ofs_x = x * self.tile_size
- ofs_y = y * self.tile_size
- # input tile area on total image
- input_start_x = ofs_x
- input_end_x = min(ofs_x + self.tile_size, width)
- input_start_y = ofs_y
- input_end_y = min(ofs_y + self.tile_size, height)
-
- # input tile area on total image with padding
- input_start_x_pad = max(input_start_x - self.tile_pad, 0)
- input_end_x_pad = min(input_end_x + self.tile_pad, width)
- input_start_y_pad = max(input_start_y - self.tile_pad, 0)
- input_end_y_pad = min(input_end_y + self.tile_pad, height)
-
- # input tile dimensions
- input_tile_width = input_end_x - input_start_x
- input_tile_height = input_end_y - input_start_y
- tile_idx = y * tiles_x + x + 1
- input_tile = self.img[:, :, input_start_y_pad:input_end_y_pad, input_start_x_pad:input_end_x_pad]
-
- # upscale tile
- try:
- with torch.no_grad():
- output_tile = self.model(input_tile)
- except RuntimeError as error:
- print('Error', error)
- print(f'\tTile {tile_idx}/{tiles_x * tiles_y}')
-
- # output tile area on total image
- output_start_x = input_start_x * self.scale
- output_end_x = input_end_x * self.scale
- output_start_y = input_start_y * self.scale
- output_end_y = input_end_y * self.scale
-
- # output tile area without padding
- output_start_x_tile = (input_start_x - input_start_x_pad) * self.scale
- output_end_x_tile = output_start_x_tile + input_tile_width * self.scale
- output_start_y_tile = (input_start_y - input_start_y_pad) * self.scale
- output_end_y_tile = output_start_y_tile + input_tile_height * self.scale
-
- # put tile into output image
- self.output[:, :, output_start_y:output_end_y,
- output_start_x:output_end_x] = output_tile[:, :, output_start_y_tile:output_end_y_tile,
- output_start_x_tile:output_end_x_tile]
-
- def post_process(self):
- # remove extra pad
- if self.mod_scale is not None:
- _, _, h, w = self.output.size()
- self.output = self.output[:, :, 0:h - self.mod_pad_h * self.scale, 0:w - self.mod_pad_w * self.scale]
- # remove prepad
- if self.pre_pad != 0:
- _, _, h, w = self.output.size()
- self.output = self.output[:, :, 0:h - self.pre_pad * self.scale, 0:w - self.pre_pad * self.scale]
- return self.output
-
- @torch.no_grad()
- def enhance(self, img, outscale=None, alpha_upsampler='realesrgan'):
- h_input, w_input = img.shape[0:2]
- # img: numpy
- img = img.astype(np.float32)
- if np.max(img) > 256: # 16-bit image
- max_range = 65535
- print('\tInput is a 16-bit image')
- else:
- max_range = 255
- img = img / max_range
- if len(img.shape) == 2: # gray image
- img_mode = 'L'
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
- elif img.shape[2] == 4: # RGBA image with alpha channel
- img_mode = 'RGBA'
- alpha = img[:, :, 3]
- img = img[:, :, 0:3]
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
- if alpha_upsampler == 'realesrgan':
- alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2RGB)
- else:
- img_mode = 'RGB'
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
-
- # ------------------- process image (without the alpha channel) ------------------- #
- self.pre_process(img)
- if self.tile_size > 0:
- self.tile_process()
- else:
- self.process()
- output_img = self.post_process()
- output_img = output_img.data.squeeze().float().cpu().clamp_(0, 1).numpy()
- output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0))
- if img_mode == 'L':
- output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2GRAY)
-
- # ------------------- process the alpha channel if necessary ------------------- #
- if img_mode == 'RGBA':
- if alpha_upsampler == 'realesrgan':
- self.pre_process(alpha)
- if self.tile_size > 0:
- self.tile_process()
- else:
- self.process()
- output_alpha = self.post_process()
- output_alpha = output_alpha.data.squeeze().float().cpu().clamp_(0, 1).numpy()
- output_alpha = np.transpose(output_alpha[[2, 1, 0], :, :], (1, 2, 0))
- output_alpha = cv2.cvtColor(output_alpha, cv2.COLOR_BGR2GRAY)
- else: # use the cv2 resize for alpha channel
- h, w = alpha.shape[0:2]
- output_alpha = cv2.resize(alpha, (w * self.scale, h * self.scale), interpolation=cv2.INTER_LINEAR)
-
- # merge the alpha channel
- output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2BGRA)
- output_img[:, :, 3] = output_alpha
-
- # ------------------------------ return ------------------------------ #
- if max_range == 65535: # 16-bit image
- output = (output_img * 65535.0).round().astype(np.uint16)
- else:
- output = (output_img * 255.0).round().astype(np.uint8)
-
- if outscale is not None and outscale != float(self.scale):
- output = cv2.resize(
- output, (
- int(w_input * outscale),
- int(h_input * outscale),
- ), interpolation=cv2.INTER_LANCZOS4)
-
- return output, img_mode
-
-
-class PrefetchReader(threading.Thread):
- """Prefetch images.
-
- Args:
- img_list (list[str]): A list of image paths to be read.
- num_prefetch_queue (int): Number of prefetch queue.
- """
-
- def __init__(self, img_list, num_prefetch_queue):
- super().__init__()
- self.que = queue.Queue(num_prefetch_queue)
- self.img_list = img_list
-
- def run(self):
- for img_path in self.img_list:
- img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED)
- self.que.put(img)
-
- self.que.put(None)
-
- def __next__(self):
- next_item = self.que.get()
- if next_item is None:
- raise StopIteration
- return next_item
-
- def __iter__(self):
- return self
-
-
-class IOConsumer(threading.Thread):
-
- def __init__(self, opt, que, qid):
- super().__init__()
- self._queue = que
- self.qid = qid
- self.opt = opt
-
- def run(self):
- while True:
- msg = self._queue.get()
- if isinstance(msg, str) and msg == 'quit':
- break
-
- output = msg['output']
- save_path = msg['save_path']
- cv2.imwrite(save_path, output)
- print(f'IO worker {self.qid} is done.')
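For context, a hedged end-to-end sketch of how the `RealESRGANer` class above is typically driven. The weight URL and the input filename are assumptions; the `RRDBNet` arguments mirror the x4plus configuration used elsewhere in this diff.

```python
# Sketch: 4x upscaling a single image with the helper class defined above.
# Assumes basicsr and realesrgan are installed and "input.jpg" exists.
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(
    scale=4,
    model_path='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth',
    model=model,
    tile=256,      # tile the input to bound GPU memory (0 disables tiling)
    tile_pad=10,
    pre_pad=0,
    half=False,
)

img = cv2.imread('input.jpg', cv2.IMREAD_UNCHANGED)  # BGR, as enhance() expects
output, img_mode = upsampler.enhance(img, outscale=4)
cv2.imwrite('output.png', output)
```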
diff --git a/spaces/DEBO-PROJECT/DEBO-V1/modules/setting_modules.py b/spaces/DEBO-PROJECT/DEBO-V1/modules/setting_modules.py
deleted file mode 100644
index f42d921137b6caef16dde4606eaee448fa483376..0000000000000000000000000000000000000000
--- a/spaces/DEBO-PROJECT/DEBO-V1/modules/setting_modules.py
+++ /dev/null
@@ -1,9 +0,0 @@
-import sys, os
-
-# Disable
-def blockPrint():
- sys.stdout = open(os.devnull, 'w')
-
-# Restore
-def enablePrint():
- sys.stdout = sys.__stdout__
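These two helpers simply swap `sys.stdout` for `os.devnull` and back; a tiny hedged sketch (inlining them so it runs standalone) shows the intended bracketed use. `contextlib.redirect_stdout` would be the more idiomatic stdlib alternative.

```python
# Sketch: silencing prints with the two helpers above (inlined for a standalone example).
import sys, os

def blockPrint():
    sys.stdout = open(os.devnull, 'w')

def enablePrint():
    sys.stdout = sys.__stdout__

blockPrint()
print("this line is swallowed")  # written to os.devnull
enablePrint()
print("printing is restored")    # visible again
```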
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/enum_util.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/enum_util.py
deleted file mode 100644
index 914d5d831802b330b1d627043cb20737cdeb764a..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/enum_util.py
+++ /dev/null
@@ -1,48 +0,0 @@
-from __future__ import annotations
-
-from contourpy._contourpy import FillType, LineType, ZInterp
-
-
-def as_fill_type(fill_type: FillType | str) -> FillType:
- """Coerce a FillType or string value to a FillType.
-
- Args:
- fill_type (FillType or str): Value to convert.
-
- Return:
- FillType: Converted value.
- """
- if isinstance(fill_type, str):
- return FillType.__members__[fill_type]
- else:
- return fill_type
-
-
-def as_line_type(line_type: LineType | str) -> LineType:
- """Coerce a LineType or string value to a LineType.
-
- Args:
- line_type (LineType or str): Value to convert.
-
- Return:
- LineType: Converted value.
- """
- if isinstance(line_type, str):
- return LineType.__members__[line_type]
- else:
- return line_type
-
-
-def as_z_interp(z_interp: ZInterp | str) -> ZInterp:
- """Coerce a ZInterp or string value to a ZInterp.
-
- Args:
- z_interp (ZInterp or str): Value to convert.
-
- Return:
- ZInterp: Converted value.
- """
- if isinstance(z_interp, str):
- return ZInterp.__members__[z_interp]
- else:
- return z_interp
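A hedged sketch of the coercion helpers above. Enum member names differ across contourpy versions, so the example discovers one via `__members__` instead of hard-coding it.

```python
# Sketch: string -> enum coercion with as_fill_type, assuming contourpy is installed.
from contourpy._contourpy import FillType
from contourpy.enum_util import as_fill_type

name = list(FillType.__members__)[0]   # first available fill-type name
member = FillType.__members__[name]
assert as_fill_type(name) == member    # a string is looked up in __members__
assert as_fill_type(member) == member  # an enum value passes straight through
print(name, as_fill_type(name))
```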
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/gallery.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/gallery.py
deleted file mode 100644
index 7e4278a1d0b5a00752053b155dfd946d1ba616ab..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/gallery.py
+++ /dev/null
@@ -1,238 +0,0 @@
-"""gr.Gallery() component."""
-
-from __future__ import annotations
-
-from pathlib import Path
-from typing import Any, Callable, Literal
-
-import numpy as np
-from gradio_client.documentation import document, set_documentation_group
-from gradio_client.serializing import GallerySerializable
-from PIL import Image as _Image # using _ to minimize namespace pollution
-
-from gradio import utils
-from gradio.components.base import IOComponent, _Keywords
-from gradio.deprecation import warn_deprecation, warn_style_method_deprecation
-from gradio.events import (
- EventListenerMethod,
- Selectable,
-)
-
-set_documentation_group("component")
-
-
-@document()
-class Gallery(IOComponent, GallerySerializable, Selectable):
- """
- Used to display a list of images as a gallery that can be scrolled through.
- Preprocessing: this component does *not* accept input.
- Postprocessing: expects a list of images in any format, {List[numpy.array | PIL.Image | str | pathlib.Path]}, or a {List} of (image, {str} caption) tuples and displays them.
-
- Demos: fake_gan
- """
-
- def __init__(
- self,
- value: list[np.ndarray | _Image.Image | str | Path | tuple]
- | Callable
- | None = None,
- *,
- label: str | None = None,
- every: float | None = None,
- show_label: bool | None = None,
- container: bool = True,
- scale: int | None = None,
- min_width: int = 160,
- visible: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- columns: int | tuple | None = 2,
- rows: int | tuple | None = None,
- height: str | None = None,
- preview: bool | None = None,
- object_fit: Literal["contain", "cover", "fill", "none", "scale-down"]
- | None = None,
- allow_preview: bool = True,
- show_share_button: bool | None = None,
- **kwargs,
- ):
- """
- Parameters:
- value: List of images to display in the gallery by default. If callable, the function will be called whenever the app loads to set the initial value of the component.
- label: component name in interface.
- every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
- show_label: if True, will display label.
- container: If True, will place the component in a container - providing some extra padding around the border.
- scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
- min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
- visible: If False, component will be hidden.
- elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
- elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
- columns: Represents the number of images that should be shown in one row, for each of the six standard screen sizes (<576px, <768px, <992px, <1200px, <1400px, >1400px). If fewer than 6 are given, the last value will be used for all subsequent breakpoints.
- rows: Represents the number of rows in the image grid, for each of the six standard screen sizes (<576px, <768px, <992px, <1200px, <1400px, >1400px). If fewer than 6 are given, the last value will be used for all subsequent breakpoints.
- height: Height of the gallery.
- preview: If True, will display the Gallery in preview mode, which shows all of the images as thumbnails and allows the user to click on them to view them in full size.
- object_fit: CSS object-fit property for the thumbnail images in the gallery. Can be "contain", "cover", "fill", "none", or "scale-down".
- allow_preview: If True, images in the gallery will be enlarged when they are clicked. Default is True.
- show_share_button: If True, will show a share icon in the corner of the component that allows user to share outputs to Hugging Face Spaces Discussions. If False, icon does not appear. If set to None (default behavior), then the icon appears if this Gradio app is launched on Spaces, but not otherwise.
- """
- self.grid_cols = columns
- self.grid_rows = rows
- self.height = height
- self.preview = preview
- self.object_fit = object_fit
- self.allow_preview = allow_preview
- self.select: EventListenerMethod
- """
- Event listener for when the user selects image within Gallery.
- Uses event data gradio.SelectData to carry `value` referring to caption of selected image, and `index` to refer to index.
- See EventData documentation on how to use this event data.
- """
- self.show_share_button = (
- (utils.get_space() is not None)
- if show_share_button is None
- else show_share_button
- )
- IOComponent.__init__(
- self,
- label=label,
- every=every,
- show_label=show_label,
- container=container,
- scale=scale,
- min_width=min_width,
- visible=visible,
- elem_id=elem_id,
- elem_classes=elem_classes,
- value=value,
- **kwargs,
- )
-
- @staticmethod
- def update(
- value: Any | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE,
- label: str | None = None,
- show_label: bool | None = None,
- container: bool | None = None,
- scale: int | None = None,
- min_width: int | None = None,
- visible: bool | None = None,
- columns: int | tuple | None = None,
- rows: int | tuple | None = None,
- height: str | None = None,
- preview: bool | None = None,
- object_fit: Literal["contain", "cover", "fill", "none", "scale-down"]
- | None = None,
- allow_preview: bool | None = None,
- show_share_button: bool | None = None,
- ):
- updated_config = {
- "label": label,
- "show_label": show_label,
- "container": container,
- "scale": scale,
- "min_width": min_width,
- "visible": visible,
- "value": value,
- "grid_cols": columns,
- "grid_rows": rows,
- "height": height,
- "preview": preview,
- "object_fit": object_fit,
- "allow_preview": allow_preview,
- "show_share_button": show_share_button,
- "__type__": "update",
- }
- return updated_config
-
- def get_config(self):
- return {
- "value": self.value,
- "grid_cols": self.grid_cols,
- "grid_rows": self.grid_rows,
- "height": self.height,
- "preview": self.preview,
- "object_fit": self.object_fit,
- "allow_preview": self.allow_preview,
- "show_share_button": self.show_share_button,
- **IOComponent.get_config(self),
- }
-
- def postprocess(
- self,
- y: list[np.ndarray | _Image.Image | str]
- | list[tuple[np.ndarray | _Image.Image | str, str]]
- | None,
- ) -> list[str]:
- """
- Parameters:
- y: list of images, or list of (image, caption) tuples
- Returns:
- list of string file paths to images in temp directory
- """
- if y is None:
- return []
- output = []
- for img in y:
- caption = None
- if isinstance(img, (tuple, list)):
- img, caption = img
- if isinstance(img, np.ndarray):
- file = self.img_array_to_temp_file(img, dir=self.DEFAULT_TEMP_DIR)
- file_path = str(utils.abspath(file))
- self.temp_files.add(file_path)
- elif isinstance(img, _Image.Image):
- file = self.pil_to_temp_file(img, dir=self.DEFAULT_TEMP_DIR)
- file_path = str(utils.abspath(file))
- self.temp_files.add(file_path)
- elif isinstance(img, (str, Path)):
- if utils.validate_url(img):
- file_path = img
- else:
- file_path = self.make_temp_copy_if_needed(img)
- else:
- raise ValueError(f"Cannot process type as image: {type(img)}")
-
- if caption is not None:
- output.append(
- [{"name": file_path, "data": None, "is_file": True}, caption]
- )
- else:
- output.append({"name": file_path, "data": None, "is_file": True})
-
- return output
-
- def style(
- self,
- *,
- grid: int | tuple | None = None,
- columns: int | tuple | None = None,
- rows: int | tuple | None = None,
- height: str | None = None,
- container: bool | None = None,
- preview: bool | None = None,
- object_fit: str | None = None,
- **kwargs,
- ):
- """
- This method is deprecated. Please set these arguments in the constructor instead.
- """
- warn_style_method_deprecation()
- if grid is not None:
- warn_deprecation(
- "The 'grid' parameter will be deprecated. Please use 'grid_cols' in the constructor instead.",
- )
- self.grid_cols = grid
- if columns is not None:
- self.grid_cols = columns
- if rows is not None:
- self.grid_rows = rows
- if height is not None:
- self.height = height
- if preview is not None:
- self.preview = preview
- if object_fit is not None:
- self.object_fit = object_fit
- if container is not None:
- self.container = container
- return self
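For reference, a hedged minimal demo of the component defined above, written against the gradio 3.x-era API this file ships with; the image paths are placeholders for local files.

```python
# Sketch: showing a few (image, caption) pairs in a Gallery inside a Blocks app.
import gradio as gr

with gr.Blocks() as demo:
    gr.Gallery(
        value=[("cat.png", "a cat"), ("dog.png", "a dog")],  # placeholder local images
        label="Examples",
        columns=2,           # thumbnails per row
        allow_preview=True,  # click a thumbnail to enlarge it
    )

demo.launch()
```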
diff --git a/spaces/Daniton/streaming_chat_with_gpt-3.5-turbo_using_langchain_sorta1234/README.md b/spaces/Daniton/streaming_chat_with_gpt-3.5-turbo_using_langchain_sorta1234/README.md
deleted file mode 100644
index ec070803cf2adac1529d494d85ea7131134dad2d..0000000000000000000000000000000000000000
--- a/spaces/Daniton/streaming_chat_with_gpt-3.5-turbo_using_langchain_sorta1234/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Streaming Chat With Gpt-3.5-turbo Using Langchain Sorta
-emoji: 📚
-colorFrom: purple
-colorTo: purple
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: lukestanley/streaming_chat_with_gpt-3.5-turbo_using_langchain_sorta
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DataRaptor/ActionNet/ModelClass.py b/spaces/DataRaptor/ActionNet/ModelClass.py
deleted file mode 100644
index 35f0b4b04399a779fa70e62764ec9d5a1e54b6f6..0000000000000000000000000000000000000000
--- a/spaces/DataRaptor/ActionNet/ModelClass.py
+++ /dev/null
@@ -1,103 +0,0 @@
-import torch
-from torch import nn
-from torchvision import transforms, models
-
-class ActionClassifier(nn.Module):
- def __init__(self, train_last_nlayer, hidden_size, dropout, ntargets):
- super().__init__()
- resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT, progress=True)
- modules = list(resnet.children())[:-1] # delete last layer
-
- self.resnet = nn.Sequential(*modules)
- for param in self.resnet[:-train_last_nlayer].parameters():
- param.requires_grad = False
-
- self.fc = nn.Sequential(
- nn.Flatten(),
- nn.BatchNorm1d(resnet.fc.in_features),
- nn.Dropout(dropout),
- nn.Linear(resnet.fc.in_features, hidden_size),
- nn.ReLU(),
- nn.BatchNorm1d(hidden_size),
- nn.Dropout(dropout),
- nn.Linear(hidden_size, ntargets),
- nn.Sigmoid()
- )
-
- def forward(self, x):
- x = self.resnet(x)
- x = self.fc(x)
- return x
-
-
-def get_transform():
- transform = transforms.Compose([
- transforms.Resize([224, 244]),
- models.ResNet50_Weights.DEFAULT.transforms()
- ])
- return transform
-
-# def get_transform():
-# transform = transforms.Compose([
-# transforms.Resize([224, 244]),
-# transforms.ToTensor(),
-# # std multiply by 255 to convert img of [0, 255]
-# # to img of [0, 1]
-# transforms.Normalize((0.485, 0.456, 0.406),
-# (0.229*255, 0.224*255, 0.225*255))]
-# )
-# return transform
-
-
-def get_model():
- model = ActionClassifier(0, 512, 0.2, 15)
- model.load_state_dict(torch.load('./model_weights.pth', map_location=torch.device('cpu')))
- return model
-
-
-def get_class(index):
- ind2cat = [
- 'calling',
- 'clapping',
- 'cycling',
- 'dancing',
- 'drinking',
- 'eating',
- 'fighting',
- 'hugging',
- 'laughing',
- 'listening_to_music',
- 'running',
- 'sitting',
- 'sleeping',
- 'texting',
- 'using_laptop'
- ]
- return ind2cat[index]
-
-
-
-
-# img = Image.open('./inputs/Image_102.jpg').convert('RGB')
-# #print(transform(img))
-# img = transform(img)
-# img = img.unsqueeze(dim=0)
-# print(img.shape)
-
-
-
-
-
-
-# model.eval()
-# with torch.no_grad():
-# out = model(img)
-# out = nn.Softmax()(out).squeeze()
-# print(out.shape)
-# res = torch.argmax(out)
-
-# print(ind2cat[res])
-
-
-
-
diff --git a/spaces/Datasculptor/DescriptionGPT/demo.py b/spaces/Datasculptor/DescriptionGPT/demo.py
deleted file mode 100644
index 183f6c44c4ca8cdf32d1c4b57bd75a11a07dcbde..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/DescriptionGPT/demo.py
+++ /dev/null
@@ -1,206 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import argparse
-import glob
-import multiprocessing as mp
-import numpy as np
-import os
-import tempfile
-import time
-import warnings
-import cv2
-import tqdm
-import sys
-
-from detectron2.config import get_cfg
-from detectron2.data.detection_utils import read_image
-from detectron2.utils.logger import setup_logger
-
-sys.path.insert(0, 'third_party/CenterNet2/projects/CenterNet2/')
-sys.path.insert(0, 'third_party/CenterNet2/')
-
-from centernet.config import add_centernet_config
-from detic.config import add_detic_config
-
-from detic.predictor import VisualizationDemo
-
-
-# constants
-WINDOW_NAME = "Detic"
-
-def setup_cfg(args):
- cfg = get_cfg()
- add_centernet_config(cfg)
- add_detic_config(cfg)
- cfg.merge_from_file(args.config_file)
- cfg.merge_from_list(args.opts)
- # Set score_threshold for builtin models
- cfg.MODEL.RETINANET.SCORE_THRESH_TEST = args.confidence_threshold
- cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = args.confidence_threshold
- cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = args.confidence_threshold
- cfg.MODEL.ROI_BOX_HEAD.ZEROSHOT_WEIGHT_PATH = 'rand' # load later
- if not args.pred_all_class:
- cfg.MODEL.ROI_HEADS.ONE_CLASS_PER_PROPOSAL = True
- cfg.freeze()
- return cfg
-
-
-def get_parser():
- parser = argparse.ArgumentParser(description="Detectron2 demo for builtin configs")
- parser.add_argument(
- "--config-file",
- default="configs/quick_schedules/mask_rcnn_R_50_FPN_inference_acc_test.yaml",
- metavar="FILE",
- help="path to config file",
- )
- parser.add_argument("--webcam", action="store_true", help="Take inputs from webcam.")
- parser.add_argument("--video-input", help="Path to video file.")
- parser.add_argument(
- "--input",
- nargs="+",
- help="A list of space separated input images; "
- "or a single glob pattern such as 'directory/*.jpg'",
- )
- parser.add_argument(
- "--output",
- help="A file or directory to save output visualizations. "
- "If not given, will show output in an OpenCV window.",
- )
- parser.add_argument(
- "--vocabulary",
- default="lvis",
- choices=['lvis', 'openimages', 'objects365', 'coco', 'custom'],
- help="",
- )
- parser.add_argument(
- "--custom_vocabulary",
- default="",
- help="",
- )
- parser.add_argument("--pred_all_class", action='store_true')
- parser.add_argument(
- "--confidence-threshold",
- type=float,
- default=0.5,
- help="Minimum score for instance predictions to be shown",
- )
- parser.add_argument(
- "--opts",
- help="Modify config options using the command-line 'KEY VALUE' pairs",
- default=[],
- nargs=argparse.REMAINDER,
- )
- return parser
-
-
-def test_opencv_video_format(codec, file_ext):
- with tempfile.TemporaryDirectory(prefix="video_format_test") as dir:
- filename = os.path.join(dir, "test_file" + file_ext)
- writer = cv2.VideoWriter(
- filename=filename,
- fourcc=cv2.VideoWriter_fourcc(*codec),
- fps=float(30),
- frameSize=(10, 10),
- isColor=True,
- )
- [writer.write(np.zeros((10, 10, 3), np.uint8)) for _ in range(30)]
- writer.release()
- if os.path.isfile(filename):
- return True
- return False
-
-
-if __name__ == "__main__":
- mp.set_start_method("spawn", force=True)
- args = get_parser().parse_args()
- setup_logger(name="fvcore")
- logger = setup_logger()
- logger.info("Arguments: " + str(args))
-
- cfg = setup_cfg(args)
-
- demo = VisualizationDemo(cfg, args)
-
- if args.input:
- if len(args.input) == 1:
- args.input = glob.glob(os.path.expanduser(args.input[0]))
- assert args.input, "The input path(s) was not found"
- for path in tqdm.tqdm(args.input, disable=not args.output):
- img = read_image(path, format="BGR")
- start_time = time.time()
- predictions, visualized_output = demo.run_on_image(img)
- logger.info(
- "{}: {} in {:.2f}s".format(
- path,
- "detected {} instances".format(len(predictions["instances"]))
- if "instances" in predictions
- else "finished",
- time.time() - start_time,
- )
- )
-
- if args.output:
- if os.path.isdir(args.output):
- assert os.path.isdir(args.output), args.output
- out_filename = os.path.join(args.output, os.path.basename(path))
- else:
- assert len(args.input) == 1, "Please specify a directory with args.output"
- out_filename = args.output
- visualized_output.save(out_filename)
- else:
- cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
- cv2.imshow(WINDOW_NAME, visualized_output.get_image()[:, :, ::-1])
- if cv2.waitKey(0) == 27:
- break # esc to quit
- elif args.webcam:
- assert args.input is None, "Cannot have both --input and --webcam!"
- assert args.output is None, "output not yet supported with --webcam!"
- cam = cv2.VideoCapture(0)
- for vis in tqdm.tqdm(demo.run_on_video(cam)):
- cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
- cv2.imshow(WINDOW_NAME, vis)
- if cv2.waitKey(1) == 27:
- break # esc to quit
- cam.release()
- cv2.destroyAllWindows()
- elif args.video_input:
- video = cv2.VideoCapture(args.video_input)
- width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
- height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
- frames_per_second = video.get(cv2.CAP_PROP_FPS)
- num_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
- basename = os.path.basename(args.video_input)
- codec, file_ext = (
- ("x264", ".mkv") if test_opencv_video_format("x264", ".mkv") else ("mp4v", ".mp4")
- )
- if codec == "mp4v":
- warnings.warn("x264 codec not available, switching to mp4v")
- if args.output:
- if os.path.isdir(args.output):
- output_fname = os.path.join(args.output, basename)
- output_fname = os.path.splitext(output_fname)[0] + file_ext
- else:
- output_fname = args.output
- assert not os.path.isfile(output_fname), output_fname
- output_file = cv2.VideoWriter(
- filename=output_fname,
- # some installation of opencv may not support x264 (due to its license),
- # you can try other format (e.g. MPEG)
- fourcc=cv2.VideoWriter_fourcc(*codec),
- fps=float(frames_per_second),
- frameSize=(width, height),
- isColor=True,
- )
- assert os.path.isfile(args.video_input)
- for vis_frame in tqdm.tqdm(demo.run_on_video(video), total=num_frames):
- if args.output:
- output_file.write(vis_frame)
- else:
- cv2.namedWindow(basename, cv2.WINDOW_NORMAL)
- cv2.imshow(basename, vis_frame)
- if cv2.waitKey(1) == 27:
- break # esc to quit
- video.release()
- if args.output:
- output_file.release()
- else:
- cv2.destroyAllWindows()
diff --git a/spaces/Datasculptor/DescriptionGPT/tools/fix_o365_path.py b/spaces/Datasculptor/DescriptionGPT/tools/fix_o365_path.py
deleted file mode 100644
index 38716e56c465fc1a2b904a39dd3b9660eafba398..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/DescriptionGPT/tools/fix_o365_path.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import argparse
-import json
-import path
-import os
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument("--ann", default='datasets/objects365/annotations/zhiyuan_objv2_train_fixname.json')
- parser.add_argument("--img_dir", default='datasets/objects365/train/')
- args = parser.parse_args()
-
- print('Loading', args.ann)
- data = json.load(open(args.ann, 'r'))
- images = []
- count = 0
- for x in data['images']:
- path = '{}/{}'.format(args.img_dir, x['file_name'])
- if os.path.exists(path):
- images.append(x)
- else:
- print(path)
- count = count + 1
- print('Missing', count, 'images')
- data['images'] = images
- out_name = args.ann[:-5] + '_fixmiss.json'
- print('Saving to', out_name)
- json.dump(data, open(out_name, 'w'))
diff --git a/spaces/Detomo/ai-comic-generation/src/app/page.tsx b/spaces/Detomo/ai-comic-generation/src/app/page.tsx
deleted file mode 100644
index e17e290690c1c924c960b7e9dc5bdca0bd6f7593..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/app/page.tsx
+++ /dev/null
@@ -1,39 +0,0 @@
-"use server"
-
-import Head from "next/head"
-
-import Main from "./main"
-import { TooltipProvider } from "@/components/ui/tooltip"
-import Script from "next/script"
-
-// https://nextjs.org/docs/pages/building-your-application/optimizing/fonts
-
-export default async function IndexPage({ params: { ownerId } }: { params: { ownerId: string }}) {
- return (
- <>
-
-
-
-
-
-
-
-
-
-
-
-
- >
- )
-}
\ No newline at end of file
diff --git a/spaces/DiffusionArtco/RealisticPhotoModels/README.md b/spaces/DiffusionArtco/RealisticPhotoModels/README.md
deleted file mode 100644
index 1d4ec7b44d0c5a42f3ff4978e37c0070c004449f..0000000000000000000000000000000000000000
--- a/spaces/DiffusionArtco/RealisticPhotoModels/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ImagineAI Imagine Generator
-emoji: 💩
-colorFrom: yellow
-colorTo: pink
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-duplicated_from: DiffusionArtco/Diffusion50
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/app.py b/spaces/EAraid12/LoRA-DreamBooth-Training-UI/app.py
deleted file mode 100644
index 1b47590d28504c5832a3fbb2fcd4f5ef121cf7d8..0000000000000000000000000000000000000000
--- a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/app.py
+++ /dev/null
@@ -1,76 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import os
-
-import gradio as gr
-import torch
-
-from app_inference import create_inference_demo
-from app_training import create_training_demo
-from app_upload import create_upload_demo
-from inference import InferencePipeline
-from trainer import Trainer
-
-TITLE = '# LoRA DreamBooth Training UI'
-
-ORIGINAL_SPACE_ID = 'lora-library/LoRA-DreamBooth-Training-UI'
-SPACE_ID = os.getenv('SPACE_ID', ORIGINAL_SPACE_ID)
-SHARED_UI_WARNING = f'''# Attention - This Space doesn't work in this shared UI. You can duplicate and use it with a paid private T4 GPU.
-
-
-'''
-
-if os.getenv('SYSTEM') == 'spaces' and SPACE_ID != ORIGINAL_SPACE_ID:
- SETTINGS = f'Settings '
-else:
- SETTINGS = 'Settings'
-CUDA_NOT_AVAILABLE_WARNING = f'''# Attention - Running on CPU.
-
-You can assign a GPU in the {SETTINGS} tab if you are running this on HF Spaces.
-"T4 small" is sufficient to run this demo.
-
-'''
-
-HF_TOKEN_NOT_SPECIFIED_WARNING = f'''# Attention - The environment variable `HF_TOKEN` is not specified. Please specify your Hugging Face token with write permission as the value of it.
-
-You can check and create your Hugging Face tokens here .
-You can specify environment variables in the "Repository secrets" section of the {SETTINGS} tab.
-
-'''
-
-HF_TOKEN = os.getenv('HF_TOKEN')
-
-
-def show_warning(warning_text: str) -> gr.Blocks:
- with gr.Blocks() as demo:
- with gr.Box():
- gr.Markdown(warning_text)
- return demo
-
-
-pipe = InferencePipeline(HF_TOKEN)
-trainer = Trainer(HF_TOKEN)
-
-with gr.Blocks(css='style.css') as demo:
- if os.getenv('IS_SHARED_UI'):
- show_warning(SHARED_UI_WARNING)
- if not torch.cuda.is_available():
- show_warning(CUDA_NOT_AVAILABLE_WARNING)
- if not HF_TOKEN:
- show_warning(HF_TOKEN_NOT_SPECIFIED_WARNING)
-
- gr.Markdown(TITLE)
- with gr.Tabs():
- with gr.TabItem('Train'):
- create_training_demo(trainer, pipe)
- with gr.TabItem('Test'):
- create_inference_demo(pipe, HF_TOKEN)
- with gr.TabItem('Upload'):
- gr.Markdown('''
- - You can use this tab to upload models later if you choose not to upload models in training time or if upload in training time failed.
- ''')
- create_upload_demo(HF_TOKEN)
-
-demo.queue(max_size=1).launch(share=False)
diff --git a/spaces/Eddycrack864/Applio-Inference/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/Eddycrack864/Applio-Inference/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
deleted file mode 100644
index ee3171bcb7c4a5066560723108b56e055f18be45..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import pyworld
-import numpy as np
-
-
-class DioF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
- Interpolate F0: fill unvoiced (zero) frames from neighboring voiced values and return a voiced/unvoiced mask.
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
- ip_data[i] = data[i]  # this assignment may be an unnecessary copy
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def resize_f0(self, x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * target_len, len(source)) / target_len,
- np.arange(0, len(source)),
- source,
- )
- res = np.nan_to_num(target)
- return res
-
- def compute_f0(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
- def compute_f0_uv(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))
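A hedged sanity check for the predictor above on a synthetic tone. It assumes `pyworld` is installed and the Space's `lib/` package is importable; the 220 Hz sine simply makes the expected output easy to eyeball.

```python
# Sketch: estimating the F0 of a synthetic 220 Hz tone with DioF0Predictor.
import numpy as np
from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor

sr = 16000
t = np.arange(sr) / sr                     # 1 second of samples
wav = 0.5 * np.sin(2 * np.pi * 220.0 * t)  # clean 220 Hz sine

predictor = DioF0Predictor(hop_length=160, sampling_rate=sr)
f0 = predictor.compute_f0(wav)             # one F0 value per hop
voiced = f0[f0 > 0]
print(len(f0), float(np.median(voiced)))   # 100 frames, median close to 220 Hz
```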
diff --git a/spaces/ElAnon/emsai/README.md b/spaces/ElAnon/emsai/README.md
deleted file mode 100644
index 6c76015c6e89663b787d52e6e5e34567d6c4c5f3..0000000000000000000000000000000000000000
--- a/spaces/ElAnon/emsai/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Emsai
-emoji: 📉
-colorFrom: indigo
-colorTo: gray
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/inference_realesrgan_video.py b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/inference_realesrgan_video.py
deleted file mode 100644
index 639b848e6578a2480ee0784e664c7751e325c477..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/inference_realesrgan_video.py
+++ /dev/null
@@ -1,199 +0,0 @@
-import argparse
-import glob
-import mimetypes
-import os
-import queue
-import shutil
-import torch
-from basicsr.archs.rrdbnet_arch import RRDBNet
-from basicsr.utils.logger import AvgTimer
-from tqdm import tqdm
-
-from realesrgan import IOConsumer, PrefetchReader, RealESRGANer
-from realesrgan.archs.srvgg_arch import SRVGGNetCompact
-
-
-def main():
- """Inference demo for Real-ESRGAN.
- It is mainly intended for restoring anime videos.
-
- """
- parser = argparse.ArgumentParser()
- parser.add_argument('-i', '--input', type=str, default='inputs', help='Input image or folder')
- parser.add_argument(
- '-n',
- '--model_name',
- type=str,
- default='RealESRGAN_x4plus',
- help=('Model names: RealESRGAN_x4plus | RealESRNet_x4plus | RealESRGAN_x4plus_anime_6B | RealESRGAN_x2plus'
- 'RealESRGANv2-anime-xsx2 | RealESRGANv2-animevideo-xsx2-nousm | RealESRGANv2-animevideo-xsx2'
- 'RealESRGANv2-anime-xsx4 | RealESRGANv2-animevideo-xsx4-nousm | RealESRGANv2-animevideo-xsx4'))
- parser.add_argument('-o', '--output', type=str, default='results', help='Output folder')
- parser.add_argument('-s', '--outscale', type=float, default=4, help='The final upsampling scale of the image')
- parser.add_argument('--suffix', type=str, default='out', help='Suffix of the restored video')
- parser.add_argument('-t', '--tile', type=int, default=0, help='Tile size, 0 for no tile during testing')
- parser.add_argument('--tile_pad', type=int, default=10, help='Tile padding')
- parser.add_argument('--pre_pad', type=int, default=0, help='Pre padding size at each border')
- parser.add_argument('--face_enhance', action='store_true', help='Use GFPGAN to enhance face')
- parser.add_argument('--half', action='store_true', help='Use half precision during inference')
- parser.add_argument('-v', '--video', action='store_true', help='Output a video using ffmpeg')
- parser.add_argument('-a', '--audio', action='store_true', help='Keep audio')
- parser.add_argument('--fps', type=float, default=None, help='FPS of the output video')
- parser.add_argument('--consumer', type=int, default=4, help='Number of IO consumers')
-
- parser.add_argument(
- '--alpha_upsampler',
- type=str,
- default='realesrgan',
- help='The upsampler for the alpha channels. Options: realesrgan | bicubic')
- parser.add_argument(
- '--ext',
- type=str,
- default='auto',
- help='Image extension. Options: auto | jpg | png, auto means using the same extension as inputs')
- args = parser.parse_args()
-
- # ---------------------- determine models according to model names ---------------------- #
- args.model_name = args.model_name.split('.')[0]
- if args.model_name in ['RealESRGAN_x4plus', 'RealESRNet_x4plus']: # x4 RRDBNet model
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
- netscale = 4
- elif args.model_name in ['RealESRGAN_x4plus_anime_6B']: # x4 RRDBNet model with 6 blocks
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
- netscale = 4
- elif args.model_name in ['RealESRGAN_x2plus']: # x2 RRDBNet model
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2)
- netscale = 2
- elif args.model_name in [
- 'RealESRGANv2-anime-xsx2', 'RealESRGANv2-animevideo-xsx2-nousm', 'RealESRGANv2-animevideo-xsx2'
- ]: # x2 VGG-style model (XS size)
- model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=2, act_type='prelu')
- netscale = 2
- elif args.model_name in [
- 'RealESRGANv2-anime-xsx4', 'RealESRGANv2-animevideo-xsx4-nousm', 'RealESRGANv2-animevideo-xsx4'
- ]: # x4 VGG-style model (XS size)
- model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu')
- netscale = 4
-
- # ---------------------- determine model paths ---------------------- #
- model_path = os.path.join('experiments/pretrained_models', args.model_name + '.pth')
- if not os.path.isfile(model_path):
- model_path = os.path.join('realesrgan/weights', args.model_name + '.pth')
- if not os.path.isfile(model_path):
- raise ValueError(f'Model {args.model_name} does not exist.')
-
- # restorer
- upsampler = RealESRGANer(
- scale=netscale,
- model_path=model_path,
- model=model,
- tile=args.tile,
- tile_pad=args.tile_pad,
- pre_pad=args.pre_pad,
- half=args.half)
-
- if args.face_enhance: # Use GFPGAN for face enhancement
- from gfpgan import GFPGANer
- face_enhancer = GFPGANer(
- model_path='https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth',
- upscale=args.outscale,
- arch='clean',
- channel_multiplier=2,
- bg_upsampler=upsampler)
- os.makedirs(args.output, exist_ok=True)
- # for saving restored frames
- save_frame_folder = os.path.join(args.output, 'frames_tmpout')
- os.makedirs(save_frame_folder, exist_ok=True)
-
- if mimetypes.guess_type(args.input)[0].startswith('video'): # is a video file
- video_name = os.path.splitext(os.path.basename(args.input))[0]
- frame_folder = os.path.join('tmp_frames', video_name)
- os.makedirs(frame_folder, exist_ok=True)
- # use ffmpeg to extract frames
- os.system(f'ffmpeg -i {args.input} -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 {frame_folder}/frame%08d.png')
- # get image path list
- paths = sorted(glob.glob(os.path.join(frame_folder, '*')))
- if args.video:
- if args.fps is None:
- # get input video fps
- import ffmpeg
- probe = ffmpeg.probe(args.input)
- video_streams = [stream for stream in probe['streams'] if stream['codec_type'] == 'video']
- args.fps = eval(video_streams[0]['avg_frame_rate'])
- elif mimetypes.guess_type(args.input)[0].startswith('image'): # is an image file
- paths = [args.input]
- video_name = 'video'
- else:
- paths = sorted(glob.glob(os.path.join(args.input, '*')))
- video_name = 'video'
-
- timer = AvgTimer()
- timer.start()
- pbar = tqdm(total=len(paths), unit='frame', desc='inference')
- # set up prefetch reader
- reader = PrefetchReader(paths, num_prefetch_queue=4)
- reader.start()
-
- que = queue.Queue()
- consumers = [IOConsumer(args, que, f'IO_{i}') for i in range(args.consumer)]
- for consumer in consumers:
- consumer.start()
-
- for idx, (path, img) in enumerate(zip(paths, reader)):
- imgname, extension = os.path.splitext(os.path.basename(path))
- if len(img.shape) == 3 and img.shape[2] == 4:
- img_mode = 'RGBA'
- else:
- img_mode = None
-
- try:
- if args.face_enhance:
- _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True)
- else:
- output, _ = upsampler.enhance(img, outscale=args.outscale)
- except RuntimeError as error:
- print('Error', error)
- print('If you encounter CUDA out of memory, try to set --tile with a smaller number.')
-
- else:
- if args.ext == 'auto':
- extension = extension[1:]
- else:
- extension = args.ext
- if img_mode == 'RGBA': # RGBA images should be saved in png format
- extension = 'png'
- save_path = os.path.join(save_frame_folder, f'{imgname}_out.{extension}')
-
- que.put({'output': output, 'save_path': save_path})
-
- pbar.update(1)
- torch.cuda.synchronize()
- timer.record()
- avg_fps = 1. / (timer.get_avg_time() + 1e-7)
- pbar.set_description(f'idx {idx}, fps {avg_fps:.2f}')
-
- for _ in range(args.consumer):
- que.put('quit')
- for consumer in consumers:
- consumer.join()
- pbar.close()
-
- # merge frames to video
- if args.video:
- video_save_path = os.path.join(args.output, f'{video_name}_{args.suffix}.mp4')
- if args.audio:
- os.system(
- f'ffmpeg -r {args.fps} -i {save_frame_folder}/frame%08d_out.{extension} -i {args.input}'
- f' -map 0:v:0 -map 1:a:0 -c:a copy -c:v libx264 -r {args.fps} -pix_fmt yuv420p {video_save_path}')
- else:
- os.system(f'ffmpeg -r {args.fps} -i {save_frame_folder}/frame%08d_out.{extension} '
- f'-c:v libx264 -r {args.fps} -pix_fmt yuv420p {video_save_path}')
-
- # delete tmp file
- shutil.rmtree(save_frame_folder)
- if os.path.isdir(frame_folder):
- shutil.rmtree(frame_folder)
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/utils.py b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/utils.py
deleted file mode 100644
index 10e7c23d04f777c250160e74470fdfacb16eab88..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/utils.py
+++ /dev/null
@@ -1,280 +0,0 @@
-import cv2
-import math
-import numpy as np
-import os
-import queue
-import threading
-import torch
-from basicsr.utils.download_util import load_file_from_url
-from torch.nn import functional as F
-
-ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-
-
-class RealESRGANer():
- """A helper class for upsampling images with RealESRGAN.
-
- Args:
- scale (int): Upsampling scale factor used in the networks. It is usually 2 or 4.
- model_path (str): The path to the pretrained model. It can also be a URL, in which case the model is downloaded automatically first.
- model (nn.Module): The defined network. Default: None.
- tile (int): Tile size used during inference. Very large inputs can cause out-of-GPU-memory errors, so this option
- first crops the input image into tiles, processes each tile separately, and finally merges the results back
- into one image. 0 means no tiling is used. Default: 0.
- tile_pad (int): The pad size for each tile, used to remove border artifacts. Default: 10.
- pre_pad (int): Pad the input images to avoid border artifacts. Default: 10.
- half (bool): Whether to use half precision during inference. Default: False.
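-
- A minimal usage sketch (illustrative only; it assumes the x4 RRDBNet weights were downloaded
- to a local path, and `img` stands for a BGR numpy array, e.g. loaded with `cv2.imread`):
-
- >>> from basicsr.archs.rrdbnet_arch import RRDBNet
- >>> model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
- >>> upsampler = RealESRGANer(scale=4, model_path='weights/RealESRGAN_x4plus.pth', model=model, tile=400) # path is illustrative
- >>> output, img_mode = upsampler.enhance(img, outscale=4)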
- """
-
- def __init__(self, scale, model_path, model=None, tile=0, tile_pad=10, pre_pad=10, half=False):
- self.scale = scale
- self.tile_size = tile
- self.tile_pad = tile_pad
- self.pre_pad = pre_pad
- self.mod_scale = None
- self.half = half
-
- # initialize model
- self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
- # if the model_path starts with https, it will first download models to the folder: realesrgan/weights
- if model_path.startswith('https://'):
- model_path = load_file_from_url(
- url=model_path, model_dir=os.path.join(ROOT_DIR, 'realesrgan/weights'), progress=True, file_name=None)
- loadnet = torch.load(model_path, map_location=torch.device('cpu'))
- # prefer to use params_ema
- if 'params_ema' in loadnet:
- keyname = 'params_ema'
- else:
- keyname = 'params'
- model.load_state_dict(loadnet[keyname], strict=True)
- model.eval()
- self.model = model.to(self.device)
- if self.half:
- self.model = self.model.half()
-
- def pre_process(self, img):
- """Pre-process, such as pre-pad and mod pad, so that the images can be divisible
- """
- img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float()
- self.img = img.unsqueeze(0).to(self.device)
- if self.half:
- self.img = self.img.half()
-
- # pre_pad
- if self.pre_pad != 0:
- self.img = F.pad(self.img, (0, self.pre_pad, 0, self.pre_pad), 'reflect')
- # mod pad for divisible borders
- if self.scale == 2:
- self.mod_scale = 2
- elif self.scale == 1:
- self.mod_scale = 4
- if self.mod_scale is not None:
- self.mod_pad_h, self.mod_pad_w = 0, 0
- _, _, h, w = self.img.size()
- if (h % self.mod_scale != 0):
- self.mod_pad_h = (self.mod_scale - h % self.mod_scale)
- if (w % self.mod_scale != 0):
- self.mod_pad_w = (self.mod_scale - w % self.mod_scale)
- self.img = F.pad(self.img, (0, self.mod_pad_w, 0, self.mod_pad_h), 'reflect')
-
- def process(self):
- # model inference
- self.output = self.model(self.img)
-
- def tile_process(self):
- """It will first crop input images to tiles, and then process each tile.
- Finally, all the processed tiles are merged into one images.
-
- Modified from: https://github.com/ata4/esrgan-launcher
- """
- batch, channel, height, width = self.img.shape
- output_height = height * self.scale
- output_width = width * self.scale
- output_shape = (batch, channel, output_height, output_width)
-
- # start with black image
- self.output = self.img.new_zeros(output_shape)
- tiles_x = math.ceil(width / self.tile_size)
- tiles_y = math.ceil(height / self.tile_size)
-
- # loop over all tiles
- for y in range(tiles_y):
- for x in range(tiles_x):
- # extract tile from input image
- ofs_x = x * self.tile_size
- ofs_y = y * self.tile_size
- # input tile area on total image
- input_start_x = ofs_x
- input_end_x = min(ofs_x + self.tile_size, width)
- input_start_y = ofs_y
- input_end_y = min(ofs_y + self.tile_size, height)
-
- # input tile area on total image with padding
- input_start_x_pad = max(input_start_x - self.tile_pad, 0)
- input_end_x_pad = min(input_end_x + self.tile_pad, width)
- input_start_y_pad = max(input_start_y - self.tile_pad, 0)
- input_end_y_pad = min(input_end_y + self.tile_pad, height)
-
- # input tile dimensions
- input_tile_width = input_end_x - input_start_x
- input_tile_height = input_end_y - input_start_y
- tile_idx = y * tiles_x + x + 1
- input_tile = self.img[:, :, input_start_y_pad:input_end_y_pad, input_start_x_pad:input_end_x_pad]
-
- # upscale tile
- try:
- with torch.no_grad():
- output_tile = self.model(input_tile)
- except RuntimeError as error:
- print('Error', error)
- print(f'\tTile {tile_idx}/{tiles_x * tiles_y}')
-
- # output tile area on total image
- output_start_x = input_start_x * self.scale
- output_end_x = input_end_x * self.scale
- output_start_y = input_start_y * self.scale
- output_end_y = input_end_y * self.scale
-
- # output tile area without padding
- output_start_x_tile = (input_start_x - input_start_x_pad) * self.scale
- output_end_x_tile = output_start_x_tile + input_tile_width * self.scale
- output_start_y_tile = (input_start_y - input_start_y_pad) * self.scale
- output_end_y_tile = output_start_y_tile + input_tile_height * self.scale
-
- # put tile into output image
- self.output[:, :, output_start_y:output_end_y,
- output_start_x:output_end_x] = output_tile[:, :, output_start_y_tile:output_end_y_tile,
- output_start_x_tile:output_end_x_tile]
-
- def post_process(self):
- # remove extra pad
- if self.mod_scale is not None:
- _, _, h, w = self.output.size()
- self.output = self.output[:, :, 0:h - self.mod_pad_h * self.scale, 0:w - self.mod_pad_w * self.scale]
- # remove prepad
- if self.pre_pad != 0:
- _, _, h, w = self.output.size()
- self.output = self.output[:, :, 0:h - self.pre_pad * self.scale, 0:w - self.pre_pad * self.scale]
- return self.output
-
- @torch.no_grad()
- def enhance(self, img, outscale=None, alpha_upsampler='realesrgan'):
- h_input, w_input = img.shape[0:2]
- # img: numpy
- img = img.astype(np.float32)
- if np.max(img) > 256: # 16-bit image
- max_range = 65535
- print('\tInput is a 16-bit image')
- else:
- max_range = 255
- img = img / max_range
- if len(img.shape) == 2: # gray image
- img_mode = 'L'
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
- elif img.shape[2] == 4: # RGBA image with alpha channel
- img_mode = 'RGBA'
- alpha = img[:, :, 3]
- img = img[:, :, 0:3]
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
- if alpha_upsampler == 'realesrgan':
- alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2RGB)
- else:
- img_mode = 'RGB'
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
-
- # ------------------- process image (without the alpha channel) ------------------- #
- self.pre_process(img)
- if self.tile_size > 0:
- self.tile_process()
- else:
- self.process()
- output_img = self.post_process()
- output_img = output_img.data.squeeze().float().cpu().clamp_(0, 1).numpy()
- output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0))
- if img_mode == 'L':
- output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2GRAY)
-
- # ------------------- process the alpha channel if necessary ------------------- #
- if img_mode == 'RGBA':
- if alpha_upsampler == 'realesrgan':
- self.pre_process(alpha)
- if self.tile_size > 0:
- self.tile_process()
- else:
- self.process()
- output_alpha = self.post_process()
- output_alpha = output_alpha.data.squeeze().float().cpu().clamp_(0, 1).numpy()
- output_alpha = np.transpose(output_alpha[[2, 1, 0], :, :], (1, 2, 0))
- output_alpha = cv2.cvtColor(output_alpha, cv2.COLOR_BGR2GRAY)
- else: # use the cv2 resize for alpha channel
- h, w = alpha.shape[0:2]
- output_alpha = cv2.resize(alpha, (w * self.scale, h * self.scale), interpolation=cv2.INTER_LINEAR)
-
- # merge the alpha channel
- output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2BGRA)
- output_img[:, :, 3] = output_alpha
-
- # ------------------------------ return ------------------------------ #
- if max_range == 65535: # 16-bit image
- output = (output_img * 65535.0).round().astype(np.uint16)
- else:
- output = (output_img * 255.0).round().astype(np.uint8)
-
- if outscale is not None and outscale != float(self.scale):
- output = cv2.resize(
- output, (
- int(w_input * outscale),
- int(h_input * outscale),
- ), interpolation=cv2.INTER_LANCZOS4)
-
- return output, img_mode
-
-
-class PrefetchReader(threading.Thread):
- """Prefetch images.
-
- Args:
- img_list (list[str]): A list of image paths to be read.
- num_prefetch_queue (int): Maximum number of prefetched images held in the queue.
- """
-
- def __init__(self, img_list, num_prefetch_queue):
- super().__init__()
- self.que = queue.Queue(num_prefetch_queue)
- self.img_list = img_list
-
- def run(self):
- for img_path in self.img_list:
- img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED)
- self.que.put(img)
-
- self.que.put(None)
-
- def __next__(self):
- next_item = self.que.get()
- if next_item is None:
- raise StopIteration
- return next_item
-
- def __iter__(self):
- return self
-
-
-class IOConsumer(threading.Thread):
-
- def __init__(self, opt, que, qid):
- super().__init__()
- self._queue = que
- self.qid = qid
- self.opt = opt
-
- def run(self):
- while True:
- msg = self._queue.get()
- if isinstance(msg, str) and msg == 'quit':
- break
-
- output = msg['output']
- save_path = msg['save_path']
- cv2.imwrite(save_path, output)
- print(f'IO worker {self.qid} is done.')
diff --git a/spaces/Epitech/IOT_temperature/README.md b/spaces/Epitech/IOT_temperature/README.md
deleted file mode 100644
index ac4acec70a1d8cc7bcbde45c4860bf4985471985..0000000000000000000000000000000000000000
--- a/spaces/Epitech/IOT_temperature/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: IOT
-emoji: 🐢
-colorFrom: pink
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.2.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/FlippFuzz/whisper-webui/src/whisper/whisperFactory.py b/spaces/FlippFuzz/whisper-webui/src/whisper/whisperFactory.py
deleted file mode 100644
index 58fc840b7e60947fec4a98b2833ff03e7ad7b7de..0000000000000000000000000000000000000000
--- a/spaces/FlippFuzz/whisper-webui/src/whisper/whisperFactory.py
+++ /dev/null
@@ -1,19 +0,0 @@
-from typing import List
-from src import modelCache
-from src.config import ModelConfig
-from src.whisper.abstractWhisperContainer import AbstractWhisperContainer
-
-def create_whisper_container(whisper_implementation: str,
- model_name: str, device: str = None, compute_type: str = "float16",
- download_root: str = None,
- cache: modelCache = None, models: List[ModelConfig] = []) -> AbstractWhisperContainer:
- print("Creating whisper container for " + whisper_implementation)
-
- if (whisper_implementation == "whisper"):
- from src.whisper.whisperContainer import WhisperContainer
- return WhisperContainer(model_name=model_name, device=device, compute_type=compute_type, download_root=download_root, cache=cache, models=models)
- elif (whisper_implementation == "faster-whisper" or whisper_implementation == "faster_whisper"):
- from src.whisper.fasterWhisperContainer import FasterWhisperContainer
- return FasterWhisperContainer(model_name=model_name, device=device, compute_type=compute_type, download_root=download_root, cache=cache, models=models)
- else:
- raise ValueError("Unknown Whisper implementation: " + whisper_implementation)
\ No newline at end of file
diff --git a/spaces/Foti/webui/README.md b/spaces/Foti/webui/README.md
deleted file mode 100644
index 013d12c9f3a56698056ae1bdbbfb0ec009805237..0000000000000000000000000000000000000000
--- a/spaces/Foti/webui/README.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: Stable Diffusion Web UI
-emoji: 🚧
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.9
-app_file: app.py
-pinned: false
-duplicated_from: camenduru/webui
----
-
-## Stable Diffusion Web UI
-[https://github.com/AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
-
-## Documentation
-[https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki)
-
-## Models License
-https://huggingface.co/spaces/CompVis/stable-diffusion-license
\ No newline at end of file
diff --git a/spaces/GeorgeOrville/bingo/src/lib/hooks/use-enter-submit.tsx b/spaces/GeorgeOrville/bingo/src/lib/hooks/use-enter-submit.tsx
deleted file mode 100644
index d66b2d3253baff164235d4ca791aae6d84721835..0000000000000000000000000000000000000000
--- a/spaces/GeorgeOrville/bingo/src/lib/hooks/use-enter-submit.tsx
+++ /dev/null
@@ -1,23 +0,0 @@
-import { useRef, type RefObject } from 'react'
-
-export function useEnterSubmit(): {
- formRef: RefObject
- onKeyDown: (event: React.KeyboardEvent) => void
-} {
- const formRef = useRef(null)
-
- const handleKeyDown = (
- event: React.KeyboardEvent
- ): void => {
- if (
- event.key === 'Enter' &&
- !event.shiftKey &&
- !event.nativeEvent.isComposing
- ) {
- formRef.current?.requestSubmit()
- event.preventDefault()
- }
- }
-
- return { formRef, onKeyDown: handleKeyDown }
-}
diff --git a/spaces/Gianpaolog/newbie-elixir/README.md b/spaces/Gianpaolog/newbie-elixir/README.md
deleted file mode 100644
index 69e3efb33215d67b5c429ec8c1bb4a8f567a2188..0000000000000000000000000000000000000000
--- a/spaces/Gianpaolog/newbie-elixir/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Livebook
-emoji: 📓
-colorFrom: pink
-colorTo: purple
-sdk: docker
-fullWidth: true
----
-
-You can install and run [Livebook](https://livebook.dev/) inside a Hugging Face Space. Here's [a tutorial](https://huggingface.co/docs/hub/spaces-sdks-docker-livebook) on how to do that.
\ No newline at end of file
diff --git a/spaces/Goutam982/RVC_V2_voice_clone/i18n.py b/spaces/Goutam982/RVC_V2_voice_clone/i18n.py
deleted file mode 100644
index 37f310fadd0b48b2f364877158fb2105d645fc03..0000000000000000000000000000000000000000
--- a/spaces/Goutam982/RVC_V2_voice_clone/i18n.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import locale
-import json
-import os
-
-
-def load_language_list(language):
- with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f:
- language_list = json.load(f)
- return language_list
-
-
-class I18nAuto:
- def __init__(self, language=None):
- if language in ["Auto", None]:
- language = locale.getdefaultlocale()[
- 0
- ] # getlocale can't identify the system's language ((None, None))
- if not os.path.exists(f"./i18n/{language}.json"):
- language = "en_US"
- self.language = language
- # print("Use Language:", language)
- self.language_map = load_language_list(language)
-
- def __call__(self, key):
- return self.language_map.get(key, key)
-
- def print(self):
- print("Use Language:", self.language)
diff --git a/spaces/Gradio-Blocks/multilingual-asr/app.py b/spaces/Gradio-Blocks/multilingual-asr/app.py
deleted file mode 100644
index d72b68c06d76eba14a30643802304f3c975fb39c..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/multilingual-asr/app.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import soundfile as sf
-import librosa
-import torch
-from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
-import gradio as gr
-import os
-
-
-api_token = os.getenv("API_TOKEN")
-model_name = "indonesian-nlp/wav2vec2-indonesian-javanese-sundanese"
-processor = Wav2Vec2Processor.from_pretrained(model_name, use_auth_token=api_token)
-model = Wav2Vec2ForCTC.from_pretrained(model_name, use_auth_token=api_token)
-device = "cuda" if torch.cuda.is_available() else "cpu"
-model = model.to(device)
-
-
-def convert(inputfile, outfile):
- target_sr = 16000
- data, sample_rate = librosa.load(inputfile)
- data = librosa.resample(data, orig_sr=sample_rate, target_sr=target_sr)
- sf.write(outfile, data, target_sr)
-
-
-def parse_transcription(wav_file):
- filename = wav_file.name.split('.')[0]
- convert(wav_file.name, filename + "16k.wav")
- speech, _ = sf.read(filename + "16k.wav")
- input_values = processor(speech, sampling_rate=16_000, return_tensors="pt").input_values
- input_values = input_values.to(device)
- logits = model(input_values).logits
- predicted_ids = torch.argmax(logits, dim=-1)
- transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
- return transcription
-
-
-output = gr.outputs.Textbox(label="The transcript")
-
-input_ = gr.inputs.Audio(source="microphone", type="file")
-
-gr.Interface(parse_transcription, inputs=input_, outputs=[output],
- analytics_enabled=False,
- title="Multilingual Speech Recognition for Indonesian Languages",
- description="Automatic Speech Recognition Live Demo for Indonesian, Javanese and Sundanese Language",
- article="This demo was built for the project "
- "Multilingual Speech Recognition for Indonesian Languages . "
- "It uses the indonesian-nlp/wav2vec2-indonesian-javanese-sundanese model "
- "which was fine-tuned on Indonesian Common Voice, Javanese and Sundanese OpenSLR speech datasets."
- ).launch( inline=False, server_name="0.0.0.0", show_tips=False, enable_queue=True)
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/quantization/core_vq.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/quantization/core_vq.py
deleted file mode 100644
index da02a6ce3a7de15353f0fba9e826052beb67c436..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/quantization/core_vq.py
+++ /dev/null
@@ -1,400 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-from einops import rearrange, repeat
-import flashy
-import torch
-from torch import nn, einsum
-import torch.nn.functional as F
-
-
-def exists(val: tp.Optional[tp.Any]) -> bool:
- return val is not None
-
-
-def default(val: tp.Any, d: tp.Any) -> tp.Any:
- return val if exists(val) else d
-
-
-def l2norm(t):
- return F.normalize(t, p=2, dim=-1)
-
-
-def ema_inplace(moving_avg, new, decay: float):
- moving_avg.data.mul_(decay).add_(new, alpha=(1 - decay))
-
-
-def laplace_smoothing(x, n_categories: int, epsilon: float = 1e-5):
- return (x + epsilon) / (x.sum() + n_categories * epsilon)
-
-
-def uniform_init(*shape: int):
- t = torch.empty(shape)
- nn.init.kaiming_uniform_(t)
- return t
-
-
-def sample_vectors(samples, num: int):
- num_samples, device = samples.shape[0], samples.device
-
- if num_samples >= num:
- indices = torch.randperm(num_samples, device=device)[:num]
- else:
- indices = torch.randint(0, num_samples, (num,), device=device)
-
- return samples[indices]
-
-
-def kmeans(samples, num_clusters: int, num_iters: int = 10):
- dim, dtype = samples.shape[-1], samples.dtype
-
- means = sample_vectors(samples, num_clusters)
-
- for _ in range(num_iters):
- diffs = rearrange(samples, "n d -> n () d") - rearrange(
- means, "c d -> () c d"
- )
- dists = -(diffs ** 2).sum(dim=-1)
-
- buckets = dists.max(dim=-1).indices
- bins = torch.bincount(buckets, minlength=num_clusters)
- zero_mask = bins == 0
- bins_min_clamped = bins.masked_fill(zero_mask, 1)
-
- new_means = buckets.new_zeros(num_clusters, dim, dtype=dtype)
- new_means.scatter_add_(0, repeat(buckets, "n -> n d", d=dim), samples)
- new_means = new_means / bins_min_clamped[..., None]
-
- means = torch.where(zero_mask[..., None], means, new_means)
-
- return means, bins
-
-
-def orthogonal_loss_fn(t):
- # eq (2) from https://arxiv.org/abs/2112.00384
- n = t.shape[0]
- normed_codes = l2norm(t)
- identity = torch.eye(n, device=t.device)
- cosine_sim = einsum("i d, j d -> i j", normed_codes, normed_codes)
- return ((cosine_sim - identity) ** 2).sum() / (n ** 2)
-
-
-class EuclideanCodebook(nn.Module):
- """Codebook with Euclidean distance.
-
- Args:
- dim (int): Dimension.
- codebook_size (int): Codebook size.
- kmeans_init (bool): Whether to use k-means to initialize the codebooks.
- If set to true, run the k-means algorithm on the first training batch and use
- the learned centroids as initialization.
- kmeans_iters (int): Number of iterations used for k-means algorithm at initialization.
- decay (float): Decay for exponential moving average over the codebooks.
- epsilon (float): Epsilon value for numerical stability.
- threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes
- that have an exponential moving average cluster size less than the specified threshold with
- randomly selected vector from the current batch.
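-
- A minimal shape sketch (illustrative only; sizes are arbitrary):
-
- >>> codebook = EuclideanCodebook(dim=128, codebook_size=1024)
- >>> x = torch.randn(8, 128)
- >>> codes = codebook.encode(x) # [8], one code index per input vector
- >>> vectors = codebook.decode(codes) # [8, 128], the corresponding codebook entries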
- """
- def __init__(
- self,
- dim: int,
- codebook_size: int,
- kmeans_init: int = False,
- kmeans_iters: int = 10,
- decay: float = 0.8,
- epsilon: float = 1e-5,
- threshold_ema_dead_code: int = 2,
- ):
- super().__init__()
- self.decay = decay
- init_fn: tp.Union[tp.Callable[..., torch.Tensor], tp.Any] = uniform_init if not kmeans_init else torch.zeros
- embed = init_fn(codebook_size, dim)
-
- self.codebook_size = codebook_size
-
- self.kmeans_iters = kmeans_iters
- self.epsilon = epsilon
- self.threshold_ema_dead_code = threshold_ema_dead_code
-
- self.register_buffer("inited", torch.Tensor([not kmeans_init]))
- self.register_buffer("cluster_size", torch.zeros(codebook_size))
- self.register_buffer("embed", embed)
- self.register_buffer("embed_avg", embed.clone())
-
- @torch.jit.ignore
- def init_embed_(self, data):
- if self.inited:
- return
-
- embed, cluster_size = kmeans(data, self.codebook_size, self.kmeans_iters)
- self.embed.data.copy_(embed)
- self.embed_avg.data.copy_(embed.clone())
- self.cluster_size.data.copy_(cluster_size)
- self.inited.data.copy_(torch.Tensor([True]))
- # Make sure all buffers across workers are in sync after initialization
- flashy.distrib.broadcast_tensors(self.buffers())
-
- def replace_(self, samples, mask):
- modified_codebook = torch.where(
- mask[..., None], sample_vectors(samples, self.codebook_size), self.embed
- )
- self.embed.data.copy_(modified_codebook)
-
- def expire_codes_(self, batch_samples):
- if self.threshold_ema_dead_code == 0:
- return
-
- expired_codes = self.cluster_size < self.threshold_ema_dead_code
- if not torch.any(expired_codes):
- return
-
- batch_samples = rearrange(batch_samples, "... d -> (...) d")
- self.replace_(batch_samples, mask=expired_codes)
- flashy.distrib.broadcast_tensors(self.buffers())
-
- def preprocess(self, x):
- x = rearrange(x, "... d -> (...) d")
- return x
-
- def quantize(self, x):
- embed = self.embed.t()
- dist = -(
- x.pow(2).sum(1, keepdim=True)
- - 2 * x @ embed
- + embed.pow(2).sum(0, keepdim=True)
- )
- embed_ind = dist.max(dim=-1).indices
- return embed_ind
-
- def postprocess_emb(self, embed_ind, shape):
- return embed_ind.view(*shape[:-1])
-
- def dequantize(self, embed_ind):
- quantize = F.embedding(embed_ind, self.embed)
- return quantize
-
- def encode(self, x):
- shape = x.shape
- # pre-process
- x = self.preprocess(x)
- # quantize
- embed_ind = self.quantize(x)
- # post-process
- embed_ind = self.postprocess_emb(embed_ind, shape)
- return embed_ind
-
- def decode(self, embed_ind):
- quantize = self.dequantize(embed_ind)
- return quantize
-
- def forward(self, x):
- shape, dtype = x.shape, x.dtype
- x = self.preprocess(x)
- self.init_embed_(x)
-
- embed_ind = self.quantize(x)
- embed_onehot = F.one_hot(embed_ind, self.codebook_size).type(dtype)
- embed_ind = self.postprocess_emb(embed_ind, shape)
- quantize = self.dequantize(embed_ind)
-
- if self.training:
- # We do the expiry of code at that point as buffers are in sync
- # and all the workers will take the same decision.
- self.expire_codes_(x)
- ema_inplace(self.cluster_size, embed_onehot.sum(0), self.decay)
- embed_sum = x.t() @ embed_onehot
- ema_inplace(self.embed_avg, embed_sum.t(), self.decay)
- cluster_size = (
- laplace_smoothing(self.cluster_size, self.codebook_size, self.epsilon)
- * self.cluster_size.sum()
- )
- embed_normalized = self.embed_avg / cluster_size.unsqueeze(1)
- self.embed.data.copy_(embed_normalized)
-
- return quantize, embed_ind
-
-
-class VectorQuantization(nn.Module):
- """Vector quantization implementation.
- Currently supports only euclidean distance.
-
- Args:
- dim (int): Dimension
- codebook_size (int): Codebook size
- codebook_dim (int): Codebook dimension. If not defined, uses the specified dimension in dim.
- decay (float): Decay for exponential moving average over the codebooks.
- epsilon (float): Epsilon value for numerical stability.
- kmeans_init (bool): Whether to use kmeans to initialize the codebooks.
- kmeans_iters (int): Number of iterations used for kmeans initialization.
- channels_last (bool): Channels are the last dimension in the input tensors.
- commitment_weight (float): Weight for commitment loss.
- orthogonal_reg_weight (float): Orthogonal regularization weights.
- orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes.
- orthogonal_reg_max_codes (optional int): Maximum number of codes to consider
- for orthogonal regularization.
- threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes
- that have an exponential moving average cluster size less than the specified threshold with
- randomly selected vector from the current batch.
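-
- A minimal shape sketch (illustrative only; the input layout assumes the default
- channels_last=False, i.e. tensors of shape [batch, dim, frames]):
-
- >>> vq = VectorQuantization(dim=128, codebook_size=1024).eval()
- >>> x = torch.randn(2, 128, 50)
- >>> quantized, codes, loss = vq(x) # quantized: [2, 128, 50], codes: [2, 50]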
- """
- def __init__(
- self,
- dim: int,
- codebook_size: int,
- codebook_dim: tp.Optional[int] = None,
- decay: float = 0.8,
- epsilon: float = 1e-5,
- kmeans_init: bool = False,
- kmeans_iters: int = 10,
- threshold_ema_dead_code: int = 2,
- channels_last: bool = False,
- commitment_weight: float = 1.,
- orthogonal_reg_weight: float = 0.0,
- orthogonal_reg_active_codes_only: bool = False,
- orthogonal_reg_max_codes: tp.Optional[int] = None,
- ):
- super().__init__()
- _codebook_dim: int = default(codebook_dim, dim)
-
- requires_projection = _codebook_dim != dim
- self.project_in = (nn.Linear(dim, _codebook_dim) if requires_projection else nn.Identity())
- self.project_out = (nn.Linear(_codebook_dim, dim) if requires_projection else nn.Identity())
-
- self.epsilon = epsilon
- self.commitment_weight = commitment_weight
-
- self.orthogonal_reg_weight = orthogonal_reg_weight
- self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only
- self.orthogonal_reg_max_codes = orthogonal_reg_max_codes
-
- self._codebook = EuclideanCodebook(dim=_codebook_dim, codebook_size=codebook_size,
- kmeans_init=kmeans_init, kmeans_iters=kmeans_iters,
- decay=decay, epsilon=epsilon,
- threshold_ema_dead_code=threshold_ema_dead_code)
- self.codebook_size = codebook_size
-
- self.channels_last = channels_last
-
- @property
- def codebook(self):
- return self._codebook.embed
-
- @property
- def inited(self):
- return self._codebook.inited
-
- def _preprocess(self, x):
- if not self.channels_last:
- x = rearrange(x, "b d n -> b n d")
- return x
-
- def _postprocess(self, quantize):
- if not self.channels_last:
- quantize = rearrange(quantize, "b n d -> b d n")
- return quantize
-
- def encode(self, x):
- x = self._preprocess(x)
- x = self.project_in(x)
- embed_in = self._codebook.encode(x)
- return embed_in
-
- def decode(self, embed_ind):
- quantize = self._codebook.decode(embed_ind)
- quantize = self.project_out(quantize)
- quantize = self._postprocess(quantize)
- return quantize
-
- def forward(self, x):
- device = x.device
- x = self._preprocess(x)
-
- x = self.project_in(x)
- quantize, embed_ind = self._codebook(x)
-
- if self.training:
- quantize = x + (quantize - x).detach()
-
- loss = torch.tensor([0.0], device=device, requires_grad=self.training)
-
- if self.training:
- if self.commitment_weight > 0:
- commit_loss = F.mse_loss(quantize.detach(), x)
- loss = loss + commit_loss * self.commitment_weight
-
- if self.orthogonal_reg_weight > 0:
- codebook = self.codebook
-
- if self.orthogonal_reg_active_codes_only:
- # only calculate orthogonal loss for the activated codes for this batch
- unique_code_ids = torch.unique(embed_ind)
- codebook = codebook[unique_code_ids]
-
- num_codes = codebook.shape[0]
- if exists(self.orthogonal_reg_max_codes) and num_codes > self.orthogonal_reg_max_codes:
- rand_ids = torch.randperm(num_codes, device=device)[:self.orthogonal_reg_max_codes]
- codebook = codebook[rand_ids]
-
- orthogonal_reg_loss = orthogonal_loss_fn(codebook)
- loss = loss + orthogonal_reg_loss * self.orthogonal_reg_weight
-
- quantize = self.project_out(quantize)
- quantize = self._postprocess(quantize)
-
- return quantize, embed_ind, loss
-
-
-class ResidualVectorQuantization(nn.Module):
- """Residual vector quantization implementation.
-
- Follows Algorithm 1. in https://arxiv.org/pdf/2107.03312.pdf
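-
- A minimal shape sketch (illustrative only; sizes are arbitrary):
-
- >>> rvq = ResidualVectorQuantization(num_quantizers=4, dim=128, codebook_size=1024).eval()
- >>> x = torch.randn(2, 128, 50)
- >>> codes = rvq.encode(x) # [4, 2, 50]: one code index per quantizer, batch item and frame
- >>> reconstruction = rvq.decode(codes) # [2, 128, 50]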
- """
- def __init__(self, *, num_quantizers, **kwargs):
- super().__init__()
- self.layers = nn.ModuleList(
- [VectorQuantization(**kwargs) for _ in range(num_quantizers)]
- )
-
- def forward(self, x, n_q: tp.Optional[int] = None):
- quantized_out = 0.0
- residual = x
-
- all_losses = []
- all_indices = []
-
- n_q = n_q or len(self.layers)
-
- for i, layer in enumerate(self.layers[:n_q]):
- quantized, indices, loss = layer(residual)
- residual = residual - quantized
- quantized_out = quantized_out + quantized
- all_indices.append(indices)
- all_losses.append(loss)
-
- out_losses, out_indices = map(torch.stack, (all_losses, all_indices))
- return quantized_out, out_indices, out_losses
-
- def encode(self, x: torch.Tensor, n_q: tp.Optional[int] = None) -> torch.Tensor:
- residual = x
- all_indices = []
- n_q = n_q or len(self.layers)
- for layer in self.layers[:n_q]:
- indices = layer.encode(residual)
- quantized = layer.decode(indices)
- residual = residual - quantized
- all_indices.append(indices)
- out_indices = torch.stack(all_indices)
- return out_indices
-
- def decode(self, q_indices: torch.Tensor) -> torch.Tensor:
- quantized_out = torch.tensor(0.0, device=q_indices.device)
- for i, indices in enumerate(q_indices):
- layer = self.layers[i]
- quantized = layer.decode(indices)
- quantized_out = quantized_out + quantized
- return quantized_out
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/docs/TRAINING.md b/spaces/GrandaddyShmax/AudioCraft_Plus/docs/TRAINING.md
deleted file mode 100644
index 148de295f2ddfed2e4e893576bf31e1485038b8e..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/docs/TRAINING.md
+++ /dev/null
@@ -1,312 +0,0 @@
-# AudioCraft training pipelines
-
-AudioCraft training pipelines are built on top of PyTorch as our core deep learning library
-and [Flashy](https://github.com/facebookresearch/flashy) as our training pipeline design library,
-and [Dora](https://github.com/facebookresearch/dora) as our experiment manager.
-AudioCraft training pipelines are designed to be research and experiment-friendly.
-
-
-## Environment setup
-
-For the base installation, follow the instructions from the [README.md](../README.md).
-Below are some additional instructions for setting up environment to train new models.
-
-### Team and cluster configuration
-
-In order to support multiple teams and clusters, AudioCraft uses an environment configuration.
-The team configuration allows you to specify cluster-specific configurations (e.g. SLURM configuration),
-or a convenient mapping of paths between the supported environments.
-
-Each team can have a yaml file under the [configuration folder](../config). To select a team set the
-`AUDIOCRAFT_TEAM` environment variable to a valid team name (e.g. `labs` or `default`):
-```shell
-conda env config vars set AUDIOCRAFT_TEAM=default
-```
-
-Alternatively, you can add it to your `.bashrc`:
-```shell
-export AUDIOCRAFT_TEAM=default
-```
-
-If not defined, the environment will default to the `default` team.
-
-The cluster is automatically detected, but it is also possible to override it by setting
-the `AUDIOCRAFT_CLUSTER` environment variable.
-
-Based on this team and cluster, the environment is then configured with:
-* The dora experiment outputs directory.
-* The available slurm partitions: categorized by global and team.
-* A shared reference directory: In order to facilitate sharing research models while remaining
-agnostic to the used compute cluster, we created the `//reference` symbol that can be used in
-YAML config to point to a defined reference folder containing shared checkpoints
-(e.g. baselines, models for evaluation...).
-
-**Important:** The default output dir for trained models and checkpoints is under `/tmp/`. This is suitable
-only for quick testing. If you are doing anything serious you MUST edit the file `default.yaml` and
-properly set the `dora_dir` entries.
-
-#### Overriding environment configurations
-
-You can set the following environment variables to bypass the team's environment configuration:
-* `AUDIOCRAFT_CONFIG`: absolute path to a team config yaml file.
-* `AUDIOCRAFT_DORA_DIR`: absolute path to a custom dora directory.
-* `AUDIOCRAFT_REFERENCE_DIR`: absolute path to the shared reference directory.
-
-## Training pipelines
-
-Each task supported in AudioCraft has its own training pipeline and dedicated solver.
-Learn more about solvers and key designs around AudioCraft training pipeline below.
-Please refer to the documentation of each task and model for specific information on a given task.
-
-
-### Solvers
-
-The core training component in AudioCraft is the solver. A solver holds the definition
-of how to solve a given task: It implements the training pipeline logic, combining the datasets,
-model, optimization criterion and components and the full training loop. We refer the reader
-to [Flashy](https://github.com/facebookresearch/flashy) for core principles around solvers.
-
-AudioCraft provides an initial solver, the `StandardSolver`, that is used as the base implementation
-for downstream solvers. This standard solver provides convenient handling of logging,
-checkpoint loading/saving, XP restoration, etc. on top of the base Flashy implementation.
-In AudioCraft, we made the assumption that all tasks are following the same set of stages:
-train, valid, evaluate and generation, each relying on a dedicated dataset.
-
-Each solver is responsible for defining the task to solve and the associated stages
-of the training loop in order to leave the full ownership of the training pipeline
-to the researchers. This includes loading the datasets, building the model and
-optimisation components, registering them and defining the execution of each stage.
-To create a new solver for a given task, one should extend the StandardSolver
-and define each stage of the training loop. One can also build a fully custom solver
-from scratch instead of inheriting from the standard solver.
-
-```python
-from . import base
-from .. import optim
-
-
-class MyNewSolver(base.StandardSolver):
-
- def __init__(self, cfg: omegaconf.DictConfig):
- super().__init__(cfg)
- # one can add custom attributes to the solver
- self.criterion = torch.nn.L1Loss()
-
- def best_metric(self):
- # here optionally specify which metric to use to keep track of best state
- return 'loss'
-
- def build_model(self):
- # here you can instantiate your models and optimization related objects
- # this method will be called by the StandardSolver init method
- self.model = ...
- # the self.cfg attribute contains the raw configuration
- self.optimizer = optim.build_optimizer(self.model.parameters(), self.cfg.optim)
- # don't forget to register the states you'd like to include in your checkpoints!
- self.register_stateful('model', 'optimizer')
- # keep the model best state based on the best value achieved at validation for the given best_metric
- self.register_best('model')
- # if you want to add EMA around the model
- self.register_ema('model')
-
- def build_dataloaders(self):
- # here you can instantiate your dataloaders
- # this method will be called by the StandardSolver init method
- self.dataloaders = ...
-
- ...
-
- # For both train and valid stages, the StandardSolver relies on
- # a shared common_train_valid implementation that is in charge of
- # accessing the appropriate loader, iterate over the data up to
- # the specified number of updates_per_epoch, run the ``run_step``
- # function that you need to implement to specify the behavior
- # and finally update the EMA and collect the metrics properly.
- @abstractmethod
- def run_step(self, idx: int, batch: tp.Any, metrics: dict):
- """Perform one training or valid step on a given batch.
- """
- ... # provide your implementation of the solver over a batch
-
- def train(self):
- """Train stage.
- """
- return self.common_train_valid('train')
-
- def valid(self):
- """Valid stage.
- """
- return self.common_train_valid('valid')
-
- @abstractmethod
- def evaluate(self):
- """Evaluate stage.
- """
- ... # provide your implementation here!
-
- @abstractmethod
- def generate(self):
- """Generate stage.
- """
- ... # provide your implementation here!
-```
-
-### About Epochs
-
-AudioCraft Solvers uses the concept of Epoch. One epoch doesn't necessarily mean one pass over the entire
-dataset, but instead represents the smallest amount of computation that we want to work with before checkpointing.
-Typically, we find that having an Epoch time around 30min is ideal both in terms of safety (checkpointing often enough)
-and getting updates often enough. One Epoch is at least a `train` stage that lasts for `optim.updates_per_epoch` (2000 by default),
-and a `valid` stage. You can control how long the valid stage takes with `dataset.valid.num_samples`.
-Other stages (`evaluate`, `generate`) will only happen every X epochs, as given by `evaluate.every` and `generate.every`.
-
-
-### Models
-
-In AudioCraft, a model is a container object that wraps one or more torch modules together
-with potential processing logic to use in a solver. For example, a model would wrap an encoder module,
-a quantisation bottleneck module, a decoder and some tensor processing logic. Each of the previous components
-can be considered as a small « model unit » on its own but the container model is a practical component
-to manipulate and train a set of modules together.
-
-### Datasets
-
-See the [dedicated documentation on datasets](./DATASETS.md).
-
-### Metrics
-
-See the [dedicated documentation on metrics](./METRICS.md).
-
-### Conditioners
-
-AudioCraft language models can be conditioned in various ways and the codebase offers a modular implementation
-of different conditioners that can be potentially combined together.
-Learn more in the [dedicated documentation on conditioning](./CONDITIONING.md).
-
-### Configuration
-
-AudioCraft's configuration is defined in yaml files and the framework relies on
-[hydra](https://hydra.cc/docs/intro/) and [omegaconf](https://omegaconf.readthedocs.io/) to parse
-and manipulate the configuration through Dora.
-
-##### :warning: Important considerations around configurations
-
-Our configuration management relies on Hydra and the concept of group configs to structure
-and compose configurations. Updating the root default configuration files will then have
-an impact on all solvers and tasks.
-**One should never change the default configuration files. Instead, use Hydra config groups to store custom configuration.**
-Once this configuration is created and used for running experiments, you should not edit it anymore.
-
-Note that as we are using Dora as our experiment manager, all our experiment tracking is based on
-signatures computed from deltas between configurations.
-**One must therefore ensure backward compatibility of the configuration at all times.**
-See [Dora's README](https://github.com/facebookresearch/dora) and the
-[section below introducing Dora](#running-experiments-with-dora).
-
-##### Configuration structure
-
-The configuration is organized in config groups:
-* `conditioner`: default values for conditioning modules.
-* `dset`: contains all data source related information (paths to manifest files
-and metadata for a given dataset).
-* `model`: contains configuration for each model defined in AudioCraft and configurations
-for different variants of models.
-* `solver`: contains the default configuration for each solver as well as configuration
-for each solver task, combining all the above components.
-* `teams`: contains the cluster configuration per teams. See environment setup for more details.
-
-The `config.yaml` file is the main configuration that composes the above groups
-and contains default configuration for AudioCraft.
-
-##### Solver's core configuration structure
-
-The core configuration structure shared across solver is available in `solvers/default.yaml`.
-
-##### Other configuration modules
-
-AudioCraft configuration contains the different setups we used for our research and publications.
-
-## Running experiments with Dora
-
-### Launching jobs
-
-Try launching jobs for different tasks locally with dora run:
-
-```shell
-# run compression task with lightweight encodec
-dora run solver=compression/debug
-```
-
-Most of the time, the jobs are launched through dora grids, for example:
-
-```shell
-# run compression task through debug grid
-dora grid compression.debug
-```
-
-Learn more about running experiments with Dora below.
-
-### A small introduction to Dora
-
-[Dora](https://github.com/facebookresearch/dora) is the experiment manager tool used in AudioCraft.
-Check out the README to learn how Dora works. Here is a quick summary of what to know:
-* An XP is a unique set of hyper-parameters with a given signature. The signature is a hash
-of those hyper-parameters. We always refer to an XP with its signature, e.g. 9357e12e. We will see
-later that one can retrieve the hyper-params and re-run it in a single command.
-* In fact, the hash is defined as a delta between the base config and the one obtained
-with the config overrides you passed from the command line. This means you must never change
-the `conf/**.yaml` files directly, except for editing things like paths. Changing the default values
-in the config files means the XP signature won't reflect that change, and wrong checkpoints might be reused.
-I know, this is annoying, but the reason is that otherwise, any change to the config file would mean
-that all XPs ran so far would see their signature change.
-
-#### Dora commands
-
-```shell
-dora info -f 81de367c # this will show the hyper-parameter used by a specific XP.
- # Be careful, some overrides might be present twice, and the right-most one
- # will give you the right value for it.
-
-dora run -d -f 81de367c # run an XP with the hyper-parameters from XP 81de367c.
- # `-d` is for distributed, it will use all available GPUs.
-
-dora run -d -f 81de367c dataset.batch_size=32 # start from the config of XP 81de367c but change some hyper-params.
- # This will give you a new XP with a new signature (e.g. 3fe9c332).
-
-dora info -f SIG -t # will tail the log (if the XP has scheduled).
-# if you need to access the logs of the process for rank > 0, in particular because a crash didn't happen in the main
-# process, then use `dora info -f SIG` to get the main log name (finished into something like `/5037674_0_0_log.out`)
-# and worker K can accessed as `/5037674_0_{K}_log.out`.
-# This is only for scheduled jobs, for local distributed runs with `-d`, then you should go into the XP folder,
-# and look for `worker_{K}.log` logs.
-```
-
-An XP runs from a specific folder based on its signature, under the
-`//experiments/audiocraft/outputs/` folder.
-You can safely interrupt a training and resume it, it will reuse any existing checkpoint,
-as it will reuse the same folder. If you made some change to the code and need to ignore
-a previous checkpoint you can use `dora run --clear [RUN ARGS]`.
-
-If you have a Slurm cluster, you can also use the dora grid command, e.g.
-
-```shell
-# run a dummy grid located at `audiocraft/grids/my_grid_folder/my_grid_name.py`
-dora grid my_grid_folder.my_grid_name
-# Running the following will simply display the grid and also initialize the Dora experiments database.
-# You can then simply refer to a config using its signature (e.g. as `dora run -f SIG`).
-dora grid my_grid_folder.my_grid_name --dry_run --init
-```
-
-Please refer to the [Dora documentation](https://github.com/facebookresearch/dora) for more information.
-
-
-#### Clearing up past experiments
-
-```shell
-# This will cancel all the XPs and delete their folder and checkpoints.
-# It will then reschedule them starting from scratch.
-dora grid my_grid_folder.my_grid_name --clear
-# The following will delete the folder and checkpoint for a single XP,
-# and then run it afresh.
-dora run [-f BASE_SIG] [ARGS] --clear
-```
diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/segmodel/__init__.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/segmodel/__init__.py
deleted file mode 100644
index 76b40a0a36bc2976f185dbdc344c5a7c09b65920..0000000000000000000000000000000000000000
--- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/segmodel/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .models import ModelBuilder, SegmentationModule
diff --git a/spaces/HaoFeng2019/DocTr/seg.py b/spaces/HaoFeng2019/DocTr/seg.py
deleted file mode 100644
index f3bfb445f0092ae7645683708d82b2394b5e8b2a..0000000000000000000000000000000000000000
--- a/spaces/HaoFeng2019/DocTr/seg.py
+++ /dev/null
@@ -1,568 +0,0 @@
-import torch
-import torch.nn as nn
-from torchvision import models
-import torch.nn.functional as F
-import numpy as np
-
-
-class sobel_net(nn.Module):
- def __init__(self):
- super().__init__()
- self.conv_opx = nn.Conv2d(1, 1, 3, bias=False)
- self.conv_opy = nn.Conv2d(1, 1, 3, bias=False)
- sobel_kernelx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype='float32').reshape((1, 1, 3, 3))
- sobel_kernely = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype='float32').reshape((1, 1, 3, 3))
- self.conv_opx.weight.data = torch.from_numpy(sobel_kernelx)
- self.conv_opy.weight.data = torch.from_numpy(sobel_kernely)
-
- for p in self.parameters():
- p.requires_grad = False
-
- def forward(self, im): # input rgb
- x = (0.299 * im[:, 0, :, :] + 0.587 * im[:, 1, :, :] + 0.114 * im[:, 2, :, :]).unsqueeze(1) # rgb2gray
- gradx = self.conv_opx(x)
- grady = self.conv_opy(x)
-
- x = (gradx ** 2 + grady ** 2) ** 0.5
- x = (x - x.min()) / (x.max() - x.min())
- x = F.pad(x, (1, 1, 1, 1))
-
- x = torch.cat([im, x], dim=1)
- return x
-
-
-class REBNCONV(nn.Module):
- def __init__(self, in_ch=3, out_ch=3, dirate=1):
- super(REBNCONV, self).__init__()
-
- self.conv_s1 = nn.Conv2d(in_ch, out_ch, 3, padding=1 * dirate, dilation=1 * dirate)
- self.bn_s1 = nn.BatchNorm2d(out_ch)
- self.relu_s1 = nn.ReLU(inplace=True)
-
- def forward(self, x):
- #print(x.device)
- hx = x
- xout = self.relu_s1(self.bn_s1(self.conv_s1(hx)))
-
- return xout
-
-
-## upsample tensor 'src' to have the same spatial size as tensor 'tar'
-def _upsample_like(src, tar):
- src = F.interpolate(src, size=tar.shape[2:], mode='bilinear', align_corners=False)
-
- return src
-
-
-### RSU-7 ###
-class RSU7(nn.Module): # UNet07DRES(nn.Module):
-
- def __init__(self, in_ch=3, mid_ch=12, out_ch=3):
- super(RSU7, self).__init__()
-
- self.rebnconvin = REBNCONV(in_ch, out_ch, dirate=1)
-
- self.rebnconv1 = REBNCONV(out_ch, mid_ch, dirate=1)
- self.pool1 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.rebnconv2 = REBNCONV(mid_ch, mid_ch, dirate=1)
- self.pool2 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.rebnconv3 = REBNCONV(mid_ch, mid_ch, dirate=1)
- self.pool3 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.rebnconv4 = REBNCONV(mid_ch, mid_ch, dirate=1)
- self.pool4 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.rebnconv5 = REBNCONV(mid_ch, mid_ch, dirate=1)
- self.pool5 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.rebnconv6 = REBNCONV(mid_ch, mid_ch, dirate=1)
-
- self.rebnconv7 = REBNCONV(mid_ch, mid_ch, dirate=2)
-
- self.rebnconv6d = REBNCONV(mid_ch * 2, mid_ch, dirate=1)
- self.rebnconv5d = REBNCONV(mid_ch * 2, mid_ch, dirate=1)
- self.rebnconv4d = REBNCONV(mid_ch * 2, mid_ch, dirate=1)
- self.rebnconv3d = REBNCONV(mid_ch * 2, mid_ch, dirate=1)
- self.rebnconv2d = REBNCONV(mid_ch * 2, mid_ch, dirate=1)
- self.rebnconv1d = REBNCONV(mid_ch * 2, out_ch, dirate=1)
-
- def forward(self, x):
- hx = x
- hxin = self.rebnconvin(hx)
-
- hx1 = self.rebnconv1(hxin)
- hx = self.pool1(hx1)
-
- hx2 = self.rebnconv2(hx)
- hx = self.pool2(hx2)
-
- hx3 = self.rebnconv3(hx)
- hx = self.pool3(hx3)
-
- hx4 = self.rebnconv4(hx)
- hx = self.pool4(hx4)
-
- hx5 = self.rebnconv5(hx)
- hx = self.pool5(hx5)
-
- hx6 = self.rebnconv6(hx)
-
- hx7 = self.rebnconv7(hx6)
-
- hx6d = self.rebnconv6d(torch.cat((hx7, hx6), 1))
- hx6dup = _upsample_like(hx6d, hx5)
-
- hx5d = self.rebnconv5d(torch.cat((hx6dup, hx5), 1))
- hx5dup = _upsample_like(hx5d, hx4)
-
- hx4d = self.rebnconv4d(torch.cat((hx5dup, hx4), 1))
- hx4dup = _upsample_like(hx4d, hx3)
-
- hx3d = self.rebnconv3d(torch.cat((hx4dup, hx3), 1))
- hx3dup = _upsample_like(hx3d, hx2)
-
- hx2d = self.rebnconv2d(torch.cat((hx3dup, hx2), 1))
- hx2dup = _upsample_like(hx2d, hx1)
-
- hx1d = self.rebnconv1d(torch.cat((hx2dup, hx1), 1))
-
- return hx1d + hxin
-
-
-### RSU-6 ###
-class RSU6(nn.Module): # UNet06DRES(nn.Module):
-
- def __init__(self, in_ch=3, mid_ch=12, out_ch=3):
- super(RSU6, self).__init__()
-
- self.rebnconvin = REBNCONV(in_ch, out_ch, dirate=1)
-
- self.rebnconv1 = REBNCONV(out_ch, mid_ch, dirate=1)
- self.pool1 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.rebnconv2 = REBNCONV(mid_ch, mid_ch, dirate=1)
- self.pool2 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.rebnconv3 = REBNCONV(mid_ch, mid_ch, dirate=1)
- self.pool3 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.rebnconv4 = REBNCONV(mid_ch, mid_ch, dirate=1)
- self.pool4 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.rebnconv5 = REBNCONV(mid_ch, mid_ch, dirate=1)
-
- self.rebnconv6 = REBNCONV(mid_ch, mid_ch, dirate=2)
-
- self.rebnconv5d = REBNCONV(mid_ch * 2, mid_ch, dirate=1)
- self.rebnconv4d = REBNCONV(mid_ch * 2, mid_ch, dirate=1)
- self.rebnconv3d = REBNCONV(mid_ch * 2, mid_ch, dirate=1)
- self.rebnconv2d = REBNCONV(mid_ch * 2, mid_ch, dirate=1)
- self.rebnconv1d = REBNCONV(mid_ch * 2, out_ch, dirate=1)
-
- def forward(self, x):
- hx = x
-
- hxin = self.rebnconvin(hx)
-
- hx1 = self.rebnconv1(hxin)
- hx = self.pool1(hx1)
-
- hx2 = self.rebnconv2(hx)
- hx = self.pool2(hx2)
-
- hx3 = self.rebnconv3(hx)
- hx = self.pool3(hx3)
-
- hx4 = self.rebnconv4(hx)
- hx = self.pool4(hx4)
-
- hx5 = self.rebnconv5(hx)
-
- hx6 = self.rebnconv6(hx5)
-
- hx5d = self.rebnconv5d(torch.cat((hx6, hx5), 1))
- hx5dup = _upsample_like(hx5d, hx4)
-
- hx4d = self.rebnconv4d(torch.cat((hx5dup, hx4), 1))
- hx4dup = _upsample_like(hx4d, hx3)
-
- hx3d = self.rebnconv3d(torch.cat((hx4dup, hx3), 1))
- hx3dup = _upsample_like(hx3d, hx2)
-
- hx2d = self.rebnconv2d(torch.cat((hx3dup, hx2), 1))
- hx2dup = _upsample_like(hx2d, hx1)
-
- hx1d = self.rebnconv1d(torch.cat((hx2dup, hx1), 1))
-
- return hx1d + hxin
-
-
-### RSU-5 ###
-class RSU5(nn.Module): # UNet05DRES(nn.Module):
-
- def __init__(self, in_ch=3, mid_ch=12, out_ch=3):
- super(RSU5, self).__init__()
-
- self.rebnconvin = REBNCONV(in_ch, out_ch, dirate=1)
-
- self.rebnconv1 = REBNCONV(out_ch, mid_ch, dirate=1)
- self.pool1 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.rebnconv2 = REBNCONV(mid_ch, mid_ch, dirate=1)
- self.pool2 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.rebnconv3 = REBNCONV(mid_ch, mid_ch, dirate=1)
- self.pool3 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.rebnconv4 = REBNCONV(mid_ch, mid_ch, dirate=1)
-
- self.rebnconv5 = REBNCONV(mid_ch, mid_ch, dirate=2)
-
- self.rebnconv4d = REBNCONV(mid_ch * 2, mid_ch, dirate=1)
- self.rebnconv3d = REBNCONV(mid_ch * 2, mid_ch, dirate=1)
- self.rebnconv2d = REBNCONV(mid_ch * 2, mid_ch, dirate=1)
- self.rebnconv1d = REBNCONV(mid_ch * 2, out_ch, dirate=1)
-
- def forward(self, x):
- hx = x
-
- hxin = self.rebnconvin(hx)
-
- hx1 = self.rebnconv1(hxin)
- hx = self.pool1(hx1)
-
- hx2 = self.rebnconv2(hx)
- hx = self.pool2(hx2)
-
- hx3 = self.rebnconv3(hx)
- hx = self.pool3(hx3)
-
- hx4 = self.rebnconv4(hx)
-
- hx5 = self.rebnconv5(hx4)
-
- hx4d = self.rebnconv4d(torch.cat((hx5, hx4), 1))
- hx4dup = _upsample_like(hx4d, hx3)
-
- hx3d = self.rebnconv3d(torch.cat((hx4dup, hx3), 1))
- hx3dup = _upsample_like(hx3d, hx2)
-
- hx2d = self.rebnconv2d(torch.cat((hx3dup, hx2), 1))
- hx2dup = _upsample_like(hx2d, hx1)
-
- hx1d = self.rebnconv1d(torch.cat((hx2dup, hx1), 1))
-
- return hx1d + hxin
-
-
-### RSU-4 ###
-class RSU4(nn.Module): # UNet04DRES(nn.Module):
-
- def __init__(self, in_ch=3, mid_ch=12, out_ch=3):
- super(RSU4, self).__init__()
-
- self.rebnconvin = REBNCONV(in_ch, out_ch, dirate=1)
-
- self.rebnconv1 = REBNCONV(out_ch, mid_ch, dirate=1)
- self.pool1 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.rebnconv2 = REBNCONV(mid_ch, mid_ch, dirate=1)
- self.pool2 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.rebnconv3 = REBNCONV(mid_ch, mid_ch, dirate=1)
-
- self.rebnconv4 = REBNCONV(mid_ch, mid_ch, dirate=2)
-
- self.rebnconv3d = REBNCONV(mid_ch * 2, mid_ch, dirate=1)
- self.rebnconv2d = REBNCONV(mid_ch * 2, mid_ch, dirate=1)
- self.rebnconv1d = REBNCONV(mid_ch * 2, out_ch, dirate=1)
-
- def forward(self, x):
- hx = x
-
- hxin = self.rebnconvin(hx)
-
- hx1 = self.rebnconv1(hxin)
- hx = self.pool1(hx1)
-
- hx2 = self.rebnconv2(hx)
- hx = self.pool2(hx2)
-
- hx3 = self.rebnconv3(hx)
-
- hx4 = self.rebnconv4(hx3)
-
- hx3d = self.rebnconv3d(torch.cat((hx4, hx3), 1))
- hx3dup = _upsample_like(hx3d, hx2)
-
- hx2d = self.rebnconv2d(torch.cat((hx3dup, hx2), 1))
- hx2dup = _upsample_like(hx2d, hx1)
-
- hx1d = self.rebnconv1d(torch.cat((hx2dup, hx1), 1))
-
- return hx1d + hxin
-
-
-### RSU-4F ###
-class RSU4F(nn.Module): # UNet04FRES(nn.Module):
-
- def __init__(self, in_ch=3, mid_ch=12, out_ch=3):
- super(RSU4F, self).__init__()
-
- self.rebnconvin = REBNCONV(in_ch, out_ch, dirate=1)
-
- self.rebnconv1 = REBNCONV(out_ch, mid_ch, dirate=1)
- self.rebnconv2 = REBNCONV(mid_ch, mid_ch, dirate=2)
- self.rebnconv3 = REBNCONV(mid_ch, mid_ch, dirate=4)
-
- self.rebnconv4 = REBNCONV(mid_ch, mid_ch, dirate=8)
-
- self.rebnconv3d = REBNCONV(mid_ch * 2, mid_ch, dirate=4)
- self.rebnconv2d = REBNCONV(mid_ch * 2, mid_ch, dirate=2)
- self.rebnconv1d = REBNCONV(mid_ch * 2, out_ch, dirate=1)
-
- def forward(self, x):
- hx = x
-
- hxin = self.rebnconvin(hx)
-
- hx1 = self.rebnconv1(hxin)
- hx2 = self.rebnconv2(hx1)
- hx3 = self.rebnconv3(hx2)
-
- hx4 = self.rebnconv4(hx3)
-
- hx3d = self.rebnconv3d(torch.cat((hx4, hx3), 1))
- hx2d = self.rebnconv2d(torch.cat((hx3d, hx2), 1))
- hx1d = self.rebnconv1d(torch.cat((hx2d, hx1), 1))
-
- return hx1d + hxin
-
-
-##### U^2-Net ####
-class U2NET(nn.Module):
-
- def __init__(self, in_ch=3, out_ch=1):
- super(U2NET, self).__init__()
- self.edge = sobel_net()
-
- self.stage1 = RSU7(in_ch, 32, 64)
- self.pool12 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.stage2 = RSU6(64, 32, 128)
- self.pool23 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.stage3 = RSU5(128, 64, 256)
- self.pool34 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.stage4 = RSU4(256, 128, 512)
- self.pool45 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.stage5 = RSU4F(512, 256, 512)
- self.pool56 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.stage6 = RSU4F(512, 256, 512)
-
- # decoder
- self.stage5d = RSU4F(1024, 256, 512)
- self.stage4d = RSU4(1024, 128, 256)
- self.stage3d = RSU5(512, 64, 128)
- self.stage2d = RSU6(256, 32, 64)
- self.stage1d = RSU7(128, 16, 64)
-
- self.side1 = nn.Conv2d(64, out_ch, 3, padding=1)
- self.side2 = nn.Conv2d(64, out_ch, 3, padding=1)
- self.side3 = nn.Conv2d(128, out_ch, 3, padding=1)
- self.side4 = nn.Conv2d(256, out_ch, 3, padding=1)
- self.side5 = nn.Conv2d(512, out_ch, 3, padding=1)
- self.side6 = nn.Conv2d(512, out_ch, 3, padding=1)
-
- self.outconv = nn.Conv2d(6, out_ch, 1)
-
- def forward(self, x):
- x = self.edge(x)
- hx = x
-
- # stage 1
- hx1 = self.stage1(hx)
- hx = self.pool12(hx1)
-
- # stage 2
- hx2 = self.stage2(hx)
- hx = self.pool23(hx2)
-
- # stage 3
- hx3 = self.stage3(hx)
- hx = self.pool34(hx3)
-
- # stage 4
- hx4 = self.stage4(hx)
- hx = self.pool45(hx4)
-
- # stage 5
- hx5 = self.stage5(hx)
- hx = self.pool56(hx5)
-
- # stage 6
- hx6 = self.stage6(hx)
- hx6up = _upsample_like(hx6, hx5)
-
- # -------------------- decoder --------------------
- hx5d = self.stage5d(torch.cat((hx6up, hx5), 1))
- hx5dup = _upsample_like(hx5d, hx4)
-
- hx4d = self.stage4d(torch.cat((hx5dup, hx4), 1))
- hx4dup = _upsample_like(hx4d, hx3)
-
- hx3d = self.stage3d(torch.cat((hx4dup, hx3), 1))
- hx3dup = _upsample_like(hx3d, hx2)
-
- hx2d = self.stage2d(torch.cat((hx3dup, hx2), 1))
- hx2dup = _upsample_like(hx2d, hx1)
-
- hx1d = self.stage1d(torch.cat((hx2dup, hx1), 1))
-
- # side output
- d1 = self.side1(hx1d)
-
- d2 = self.side2(hx2d)
- d2 = _upsample_like(d2, d1)
-
- d3 = self.side3(hx3d)
- d3 = _upsample_like(d3, d1)
-
- d4 = self.side4(hx4d)
- d4 = _upsample_like(d4, d1)
-
- d5 = self.side5(hx5d)
- d5 = _upsample_like(d5, d1)
-
- d6 = self.side6(hx6)
- d6 = _upsample_like(d6, d1)
-
- d0 = self.outconv(torch.cat((d1, d2, d3, d4, d5, d6), 1))
-
- return torch.sigmoid(d0), torch.sigmoid(d1), torch.sigmoid(d2), torch.sigmoid(d3), torch.sigmoid(
- d4), torch.sigmoid(d5), torch.sigmoid(d6)
-
-
-### U^2-Net small ###
-class U2NETP(nn.Module):
-
- def __init__(self, in_ch=3, out_ch=1):
- super(U2NETP, self).__init__()
-
- self.stage1 = RSU7(in_ch, 16, 64)
- self.pool12 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.stage2 = RSU6(64, 16, 64)
- self.pool23 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.stage3 = RSU5(64, 16, 64)
- self.pool34 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.stage4 = RSU4(64, 16, 64)
- self.pool45 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.stage5 = RSU4F(64, 16, 64)
- self.pool56 = nn.MaxPool2d(2, stride=2, ceil_mode=True)
-
- self.stage6 = RSU4F(64, 16, 64)
-
- # decoder
- self.stage5d = RSU4F(128, 16, 64)
- self.stage4d = RSU4(128, 16, 64)
- self.stage3d = RSU5(128, 16, 64)
- self.stage2d = RSU6(128, 16, 64)
- self.stage1d = RSU7(128, 16, 64)
-
- self.side1 = nn.Conv2d(64, out_ch, 3, padding=1)
- self.side2 = nn.Conv2d(64, out_ch, 3, padding=1)
- self.side3 = nn.Conv2d(64, out_ch, 3, padding=1)
- self.side4 = nn.Conv2d(64, out_ch, 3, padding=1)
- self.side5 = nn.Conv2d(64, out_ch, 3, padding=1)
- self.side6 = nn.Conv2d(64, out_ch, 3, padding=1)
-
- self.outconv = nn.Conv2d(6, out_ch, 1)
-
- def forward(self, x):
- hx = x
-
- # stage 1
- hx1 = self.stage1(hx)
- hx = self.pool12(hx1)
-
- # stage 2
- hx2 = self.stage2(hx)
- hx = self.pool23(hx2)
-
- # stage 3
- hx3 = self.stage3(hx)
- hx = self.pool34(hx3)
-
- # stage 4
- hx4 = self.stage4(hx)
- hx = self.pool45(hx4)
-
- # stage 5
- hx5 = self.stage5(hx)
- hx = self.pool56(hx5)
-
- # stage 6
- hx6 = self.stage6(hx)
- hx6up = _upsample_like(hx6, hx5)
-
- # decoder
- hx5d = self.stage5d(torch.cat((hx6up, hx5), 1))
- hx5dup = _upsample_like(hx5d, hx4)
-
- hx4d = self.stage4d(torch.cat((hx5dup, hx4), 1))
- hx4dup = _upsample_like(hx4d, hx3)
-
- hx3d = self.stage3d(torch.cat((hx4dup, hx3), 1))
- hx3dup = _upsample_like(hx3d, hx2)
-
- hx2d = self.stage2d(torch.cat((hx3dup, hx2), 1))
- hx2dup = _upsample_like(hx2d, hx1)
-
- hx1d = self.stage1d(torch.cat((hx2dup, hx1), 1))
-
- # side output
- d1 = self.side1(hx1d)
-
- d2 = self.side2(hx2d)
- d2 = _upsample_like(d2, d1)
-
- d3 = self.side3(hx3d)
- d3 = _upsample_like(d3, d1)
-
- d4 = self.side4(hx4d)
- d4 = _upsample_like(d4, d1)
-
- d5 = self.side5(hx5d)
- d5 = _upsample_like(d5, d1)
-
- d6 = self.side6(hx6)
- d6 = _upsample_like(d6, d1)
-
- d0 = self.outconv(torch.cat((d1, d2, d3, d4, d5, d6), 1))
-
- return torch.sigmoid(d0), torch.sigmoid(d1), torch.sigmoid(d2), torch.sigmoid(d3), torch.sigmoid(
- d4), torch.sigmoid(d5), torch.sigmoid(d6)
-
-
-def get_parameter_number(net):
- total_num = sum(p.numel() for p in net.parameters())
- trainable_num = sum(p.numel() for p in net.parameters() if p.requires_grad)
- return {'Total': total_num, 'Trainable': trainable_num}
-
-
-if __name__ == '__main__':
- net = U2NET(4, 1)#.cuda()
- print(get_parameter_number(net)) # 69,090,500 parameters (69,442,032 after adding attention)
- with torch.no_grad():
- inputs = torch.zeros(1, 3, 256, 256)#.cuda()
- outs = net(inputs)
- print(outs[0].shape) # torch.Size([1, 1, 256, 256])
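As a hedged illustration of how the fused output `d0` returned above is typically consumed, the sketch below runs the lighter `U2NETP` variant on a dummy image. The module name `u2net` and the 320×320 input size are assumptions for illustration only.

```python
import torch
from u2net import U2NETP  # hypothetical module name for the file above

net = U2NETP(in_ch=3, out_ch=1).eval()
with torch.no_grad():
    image = torch.rand(1, 3, 320, 320)   # dummy RGB batch
    d0, *side_maps = net(image)          # fused sigmoid map first, then six side maps
    mask = (d0 > 0.5).float()            # simple binarization of the saliency map
print(d0.shape, len(side_maps))          # torch.Size([1, 1, 320, 320]) 6
```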
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/pointer_generator/pointer_generator_src/transformer_pg.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/pointer_generator/pointer_generator_src/transformer_pg.py
deleted file mode 100644
index 4ccf30f4eb154f8fab1e285934fb973a2d1166cb..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/pointer_generator/pointer_generator_src/transformer_pg.py
+++ /dev/null
@@ -1,518 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from typing import Any, Dict, Optional, List, Tuple
-
-import torch
-import torch.nn as nn
-from fairseq import utils
-from fairseq.models import register_model, register_model_architecture
-from fairseq.models.transformer import (
- DEFAULT_MAX_SOURCE_POSITIONS,
- DEFAULT_MAX_TARGET_POSITIONS,
- TransformerDecoder,
- TransformerEncoder,
- TransformerModel,
- base_architecture,
-)
-from torch import Tensor
-
-
-logger = logging.getLogger(__name__)
-
-
-@register_model("transformer_pointer_generator")
-class TransformerPointerGeneratorModel(TransformerModel):
- """
- Transformer model from `"Attention Is All You Need" (Vaswani et al, 2017)
- <https://arxiv.org/abs/1706.03762>`_, augmented with a pointer-generator
- network from `"Get To The Point: Summarization with Pointer-Generator
- Networks" (See et al, 2017) <https://arxiv.org/abs/1704.04368>`_.
-
- Args:
- encoder (TransformerPointerGeneratorEncoder): the encoder
- decoder (TransformerPointerGeneratorDecoder): the decoder
-
- The Transformer pointer-generator model provides the following named
- architectures and command-line arguments:
-
- .. argparse::
- :ref: fairseq.models.transformer_pointer_generator_parser
- :prog:
- """
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- # fmt: off
- TransformerModel.add_args(parser)
- parser.add_argument('--alignment-heads', type=int, metavar='N',
- help='number of attention heads to be used for '
- 'pointing')
- parser.add_argument('--alignment-layer', type=int, metavar='I',
- help='layer number to be used for pointing (0 '
- 'corresponding to the bottommost layer)')
- parser.add_argument('--source-position-markers', type=int, metavar='N',
- help='dictionary includes N additional items that '
- 'represent an OOV token at a particular input '
- 'position')
- parser.add_argument('--force-generation', type=float, metavar='P',
- default=None,
- help='set the vocabulary distribution weight to P, '
- 'instead of predicting it from the input (1.0 '
- 'corresponding to generation, 0.0 to pointing)')
- # fmt: on
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
-
- # make sure all arguments are present in older models
- base_architecture(args)
-
- if args.encoder_layers_to_keep:
- args.encoder_layers = len(args.encoder_layers_to_keep.split(","))
- if args.decoder_layers_to_keep:
- args.decoder_layers = len(args.decoder_layers_to_keep.split(","))
-
- if getattr(args, "max_source_positions", None) is None:
- args.max_source_positions = DEFAULT_MAX_SOURCE_POSITIONS
- if getattr(args, "max_target_positions", None) is None:
- args.max_target_positions = DEFAULT_MAX_TARGET_POSITIONS
- if getattr(args, "source_position_markers", None) is None:
- args.source_position_markers = args.max_source_positions
-
- src_dict, tgt_dict = task.source_dictionary, task.target_dictionary
- if src_dict != tgt_dict:
- raise ValueError("Pointer-generator requires a joined dictionary")
-
- def build_embedding(dictionary, embed_dim, path=None):
- # The dictionary may include additional items that can be used in
- # place of the normal OOV token and that all map to the same
- # embedding. Using a different token for each input position allows
- # one to restore the word identities from the original source text.
- num_embeddings = len(dictionary) - args.source_position_markers
- padding_idx = dictionary.pad()
- unk_idx = dictionary.unk()
- logger.info(
- "dictionary indices from {0} to {1} will be mapped to {2}".format(
- num_embeddings, len(dictionary) - 1, unk_idx
- )
- )
- emb = Embedding(num_embeddings, embed_dim, padding_idx, unk_idx)
- # if provided, load from preloaded dictionaries
- if path:
- embed_dict = utils.parse_embedding(path)
- utils.load_embedding(embed_dict, dictionary, emb)
- return emb
-
- if args.share_all_embeddings:
- if args.encoder_embed_dim != args.decoder_embed_dim:
- raise ValueError(
- "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim"
- )
- if args.decoder_embed_path and (
- args.decoder_embed_path != args.encoder_embed_path
- ):
- raise ValueError(
- "--share-all-embeddings not compatible with --decoder-embed-path"
- )
- encoder_embed_tokens = build_embedding(
- src_dict, args.encoder_embed_dim, args.encoder_embed_path
- )
- decoder_embed_tokens = encoder_embed_tokens
- args.share_decoder_input_output_embed = True
- else:
- encoder_embed_tokens = build_embedding(
- src_dict, args.encoder_embed_dim, args.encoder_embed_path
- )
- decoder_embed_tokens = build_embedding(
- tgt_dict, args.decoder_embed_dim, args.decoder_embed_path
- )
-
- encoder = cls.build_encoder(args, src_dict, encoder_embed_tokens)
- decoder = cls.build_decoder(args, tgt_dict, decoder_embed_tokens)
- return cls(args, encoder, decoder)
-
- @classmethod
- def build_encoder(cls, args, src_dict, embed_tokens):
- return TransformerPointerGeneratorEncoder(args, src_dict, embed_tokens)
-
- @classmethod
- def build_decoder(cls, args, tgt_dict, embed_tokens):
- return TransformerPointerGeneratorDecoder(args, tgt_dict, embed_tokens)
-
-
-class TransformerPointerGeneratorEncoder(TransformerEncoder):
- """
- Transformer encoder consisting of *args.encoder_layers* layers. Each layer
- is a :class:`TransformerEncoderLayer`. The pointer-generator variant adds
- the source tokens to the encoder output as these are otherwise not passed
- to the decoder.
- """
-
- def forward(
- self,
- src_tokens,
- src_lengths: Optional[Tensor] = None,
- return_all_hiddens: bool = False,
- token_embeddings: Optional[Tensor] = None
- ):
- """
- Runs the `forward()` method of the parent Transformer class. Then adds
- the source tokens into the encoder output tuple.
-
- While it might be more elegant if the model passed the source tokens to
- the `forward()` method of the decoder too, this would require changes to
- `SequenceGenerator`.
-
- Args:
- src_tokens (torch.LongTensor): tokens in the source language of
- shape `(batch, src_len)`
- src_lengths (torch.LongTensor): lengths of each source sentence of
- shape `(batch)`
- return_all_hiddens (bool, optional): also return all of the
- intermediate hidden states (default: False).
- token_embeddings (torch.Tensor, optional): precomputed embeddings
- default `None` will recompute embeddings
-
- Returns:
- namedtuple:
- - **encoder_out** (Tensor): the last encoder layer's output of
- shape `(src_len, batch, embed_dim)`
- - **encoder_padding_mask** (ByteTensor): the positions of
- padding elements of shape `(batch, src_len)`
- - **encoder_embedding** (Tensor): the (scaled) embedding lookup
- of shape `(batch, src_len, embed_dim)`
- - **encoder_states** (List[Tensor]): all intermediate
- hidden states of shape `(src_len, batch, embed_dim)`.
- Only populated if *return_all_hiddens* is True.
- - **src_tokens** (Tensor): input token ids of shape
- `(batch, src_len)`
- """
- encoder_out = self.forward_scriptable(src_tokens,
- src_lengths,
- return_all_hiddens,
- token_embeddings)
-
- # The PyTorch Mobile lite interpreter does not support returning a NamedTuple
- # from `forward`, so we use a dictionary instead.
- # TorchScript does not support mixed values so the values are all lists.
- # The empty list is equivalent to None.
- return {
- "encoder_out": encoder_out["encoder_out"], # T x B x C
- "encoder_padding_mask": encoder_out["encoder_padding_mask"], # B x T
- "encoder_embedding": encoder_out["encoder_embedding"], # B x T x C
- "encoder_states": encoder_out["encoder_states"], # List[T x B x C]
- "src_tokens": [src_tokens], # B x T
- "src_lengths": [],
- }
-
-
-class TransformerPointerGeneratorDecoder(TransformerDecoder):
- """
- Transformer decoder consisting of *args.decoder_layers* layers. Each layer
- is a :class:`TransformerDecoderLayer`. The pointer-generator variant mixes
- the output probabilities with an attention distribution in the output layer.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- dictionary (~fairseq.data.Dictionary): decoding dictionary
- embed_tokens (torch.nn.Embedding): output embedding
- """
-
- def __init__(self, args, dictionary, embed_tokens):
- super().__init__(args, dictionary, embed_tokens, no_encoder_attn=False)
-
- # In the pointer-generator model these arguments define the decoder
- # layer and the number of attention heads that will be averaged to
- # create the alignment for pointing.
- self.alignment_heads = args.alignment_heads
- self.alignment_layer = args.alignment_layer
-
- input_embed_dim = embed_tokens.embedding_dim
-
- # Generation probabilities / interpolation coefficients are predicted
- # from the current decoder input embedding and the decoder output, which
- # is the size of output_embed_dim.
- p_gen_input_size = input_embed_dim + self.output_embed_dim
- self.project_p_gens = nn.Linear(p_gen_input_size, 1)
- nn.init.zeros_(self.project_p_gens.bias)
-
- # The dictionary may include a separate entry for an OOV token in each
- # input position, so that their identity can be restored from the
- # original source text.
- self.num_types = len(dictionary)
- self.num_oov_types = args.source_position_markers
- self.num_embeddings = self.num_types - self.num_oov_types
- self.force_p_gen = args.force_generation
-
- def forward(
- self,
- prev_output_tokens,
- encoder_out: Optional[Dict[str, List[Tensor]]] = None,
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None,
- features_only: bool = False,
- alignment_layer: Optional[int] = 0,
- alignment_heads: Optional[int] = 1,
- src_lengths: Optional[Any] = None,
- return_all_hiddens: bool = False,
- ):
- """
- Args:
- prev_output_tokens (LongTensor): previous decoder outputs of shape
- `(batch, tgt_len)`, for teacher forcing
- encoder_out (optional): output from the encoder, used for
- encoder-side attention
- incremental_state (dict, optional): dictionary used for storing
- state during :ref:`Incremental decoding`
- features_only (bool, optional): only return features without
- applying output layer (default: False)
- alignment_layer (int, optional): 0-based index of the layer to be
- used for pointing (default: 0)
- alignment_heads (int, optional): number of attention heads to be
- used for pointing (default: 1)
-
- Returns:
- tuple:
- - the decoder's output of shape `(batch, tgt_len, vocab)`
- - a dictionary with any model-specific outputs
- """
- # The normal Transformer model doesn't pass the alignment_layer and
- # alignment_heads parameters correctly. We use our local variables.
- x, extra = self.extract_features(
- prev_output_tokens,
- encoder_out=encoder_out,
- incremental_state=incremental_state,
- alignment_layer=self.alignment_layer,
- alignment_heads=self.alignment_heads,
- )
- if not features_only:
- # Embedding the tokens again for generation probability prediction,
- # so that we don't have to reimplement the whole extract_features()
- # method.
- if incremental_state is not None:
- prev_output_tokens = prev_output_tokens[:, -1:]
- prev_output_embed = self.embed_tokens(prev_output_tokens)
- prev_output_embed *= self.embed_scale
- predictors = torch.cat((prev_output_embed, x), 2)
- p_gens = self.project_p_gens(predictors)
- p_gens = torch.sigmoid(p_gens.float())
- # Torchscript complains if encoder_out or attn are None because
- # `output_layer()` signature expects tensors instead
- attn: Optional[Tensor] = extra["attn"][0]
- assert encoder_out is not None
- assert attn is not None
- x = self.output_layer(x, attn, encoder_out["src_tokens"][0], p_gens)
- return x, extra
-
- def output_layer(
- self,
- features: Tensor,
- attn: Tensor,
- src_tokens: Tensor,
- p_gens: Tensor
- ) -> Tensor:
- """
- Project features to the vocabulary size and mix with the attention
- distributions.
- """
- if self.force_p_gen is not None:
- p_gens = self.force_p_gen
-
- # project back to size of vocabulary
- if self.adaptive_softmax is None:
- logits = self.output_projection(features)
- else:
- logits = features
-
- batch_size = logits.shape[0]
- output_length = logits.shape[1]
- assert logits.shape[2] == self.num_embeddings
- assert src_tokens.shape[0] == batch_size
- src_length = src_tokens.shape[1]
-
- # The final output distribution will be a mixture of the normal output
- # distribution (softmax of logits) and attention weights.
- gen_dists = self.get_normalized_probs_scriptable(
- (logits, None), log_probs=False, sample=None
- )
- gen_dists = torch.mul(gen_dists, p_gens)
- padding_size = (batch_size, output_length, self.num_oov_types)
- padding = gen_dists.new_zeros(padding_size)
- gen_dists = torch.cat((gen_dists, padding), 2)
- assert gen_dists.shape[2] == self.num_types
-
- # Scatter attention distributions to distributions over the extended
- # vocabulary in a tensor of shape [batch_size, output_length,
- # vocab_size]. Each attention weight is added at the position given by its
- # source token ID along the third (vocabulary) dimension, with the batch and
- # output-length dimensions unchanged.
- attn = torch.mul(attn.float(), 1 - p_gens)
- index = src_tokens[:, None, :]
- index = index.expand(batch_size, output_length, src_length)
- attn_dists_size = (batch_size, output_length, self.num_types)
- attn_dists = attn.new_zeros(attn_dists_size)
- attn_dists.scatter_add_(2, index, attn.float())
-
- # Final distributions, [batch_size, output_length, num_types].
- return gen_dists + attn_dists
-
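As a toy check of the mixing described in the comments above (vocabulary size 6, source length 3, and all probabilities made up for illustration), the pointing mass is scattered onto the source token ids and added to the scaled generation distribution:

```python
import torch

p_gen = torch.tensor([[[0.7]]])                         # (batch=1, tgt_len=1, 1)
vocab_dist = torch.tensor([[[0.5, 0.5, 0.0, 0.0, 0.0, 0.0]]])
attn = torch.tensor([[[0.2, 0.3, 0.5]]])                # attention over 3 source tokens
src_tokens = torch.tensor([[2, 4, 2]])                  # their vocabulary ids

gen_dists = vocab_dist * p_gen                          # generation part
copy_dists = torch.zeros(1, 1, 6).scatter_add_(
    2, src_tokens[:, None, :], attn * (1 - p_gen)       # pointing part, keyed by token id
)
final = gen_dists + copy_dists
print(final.sum())                                      # ≈ 1.0 – still a valid distribution
```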
- def get_normalized_probs(
- self,
- net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]],
- log_probs: bool,
- sample: Optional[Dict[str, Tensor]] = None,
- ):
- """
- Get normalized probabilities (or log probs) from a net's output.
- Pointer-generator network output is already normalized.
- """
- probs = net_output[0]
- # Make sure the probabilities are greater than zero when returning log
- # probabilities.
- return probs.clamp(1e-10, 1.0).log() if log_probs else probs
-
-
-class Embedding(nn.Embedding):
- r"""A simple lookup table that stores embeddings of a fixed dictionary and size.
- This module is often used to store word embeddings and retrieve them using indices.
- The input to the module is a list of indices, and the output is the corresponding
- word embeddings. This subclass differs from the standard PyTorch Embedding class by
- allowing additional vocabulary entries that will be mapped to the unknown token
- embedding.
- Args:
- num_embeddings (int): size of the dictionary of embeddings
- embedding_dim (int): the size of each embedding vector
- padding_idx (int): Pads the output with the embedding vector at :attr:`padding_idx`
- (initialized to zeros) whenever it encounters the index.
- unk_idx (int): Maps all token indices that are greater than or equal to
- num_embeddings to this index.
- Attributes:
- weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim)
- initialized from :math:`\mathcal{N}(0, 1)`
- Shape:
- - Input: :math:`(*)`, LongTensor of arbitrary shape containing the indices to extract
- - Output: :math:`(*, H)`, where `*` is the input shape and :math:`H=\text{embedding\_dim}`
- .. note::
- Keep in mind that only a limited number of optimizers support
- sparse gradients: currently it's :class:`optim.SGD` (`CUDA` and `CPU`),
- :class:`optim.SparseAdam` (`CUDA` and `CPU`) and :class:`optim.Adagrad` (`CPU`)
- .. note::
- With :attr:`padding_idx` set, the embedding vector at
- :attr:`padding_idx` is initialized to all zeros. However, note that this
- vector can be modified afterwards, e.g., using a customized
- initialization method, and thus changing the vector used to pad the
- output. The gradient for this vector from :class:`~torch.nn.Embedding`
- is always zero.
- """
- __constants__ = ["unk_idx"]
-
- # Torchscript: Inheriting from Embedding class produces an error when exporting to Torchscript
- # -> RuntimeError: Unable to cast Python instance to C++ type (compile in debug mode for details).
- # This happens because the max_norm attribute of nn.Embedding is None by default and it
- # cannot be cast to a C++ type.
- def __init__(
- self,
- num_embeddings: int,
- embedding_dim: int,
- padding_idx: Optional[int],
- unk_idx: int,
- max_norm: Optional[float] = float("inf"),
- ):
- super().__init__(num_embeddings, embedding_dim, padding_idx=padding_idx, max_norm=max_norm)
- self.unk_idx = unk_idx
- nn.init.normal_(self.weight, mean=0, std=embedding_dim ** -0.5)
- nn.init.constant_(self.weight[padding_idx], 0)
-
- def forward(self, input):
- input = torch.where(
- input >= self.num_embeddings, torch.ones_like(input) * self.unk_idx, input
- )
- return nn.functional.embedding(
- input, self.weight, self.padding_idx, self.max_norm,
- self.norm_type, self.scale_grad_by_freq, self.sparse
- )
-
-
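A minimal sketch of the OOV-marker fallback this class provides; the module name `transformer_pg` and all sizes below are illustrative assumptions, not part of fairseq's installed API:

```python
import torch
from transformer_pg import Embedding  # hypothetical module name for the file above

# 100 real vocabulary entries; any id >= 100 acts as a per-position OOV marker.
emb = Embedding(num_embeddings=100, embedding_dim=8, padding_idx=1, unk_idx=3)
tokens = torch.tensor([[5, 99, 100, 104]])  # 100 and 104 are position markers
out = emb(tokens)
# Both markers are looked up as the unk embedding, so they share the same vector.
assert torch.equal(out[0, 2], out[0, 3])
```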
-@register_model_architecture(
- "transformer_pointer_generator", "transformer_pointer_generator"
-)
-def transformer_pointer_generator(args):
- args.alignment_heads = getattr(args, "alignment_heads", 1)
- args.alignment_layer = getattr(args, "alignment_layer", -1)
- base_architecture(args)
- if args.alignment_layer < 0:
- args.alignment_layer = args.decoder_layers + args.alignment_layer
-
-
-@register_model_architecture(
- "transformer_pointer_generator", "transformer_pointer_generator_iwslt_de_en"
-)
-def transformer_pointer_generator_iwslt_de_en(args):
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4)
- args.encoder_layers = getattr(args, "encoder_layers", 6)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
- args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4)
- args.decoder_layers = getattr(args, "decoder_layers", 6)
- transformer_pointer_generator(args)
-
-
-@register_model_architecture(
- "transformer_pointer_generator", "transformer_pointer_generator_wmt_en_de"
-)
-def transformer_pointer_generator_wmt_en_de(args):
- transformer_pointer_generator(args)
-
-
-# Transformer pointer-generator with the base Transformer parameters as used in
-# the "Attention Is All You Need" paper (Vaswani et al., 2017)
-@register_model_architecture(
- "transformer_pointer_generator",
- "transformer_pointer_generator_vaswani_wmt_en_de_big",
-)
-def transformer_pointer_generator_vaswani_wmt_en_de_big(args):
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16)
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024)
- args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16)
- args.dropout = getattr(args, "dropout", 0.3)
- transformer_pointer_generator(args)
-
-
-@register_model_architecture(
- "transformer_pointer_generator",
- "transformer_pointer_generator_vaswani_wmt_en_fr_big",
-)
-def transformer_pointer_generator_vaswani_wmt_en_fr_big(args):
- args.dropout = getattr(args, "dropout", 0.1)
- transformer_pointer_generator_vaswani_wmt_en_de_big(args)
-
-
-@register_model_architecture(
- "transformer_pointer_generator", "transformer_pointer_generator_wmt_en_de_big"
-)
-def transformer_pointer_generator_wmt_en_de_big(args):
- args.attention_dropout = getattr(args, "attention_dropout", 0.1)
- transformer_pointer_generator_vaswani_wmt_en_de_big(args)
-
-
-# default parameters used in tensor2tensor implementation
-@register_model_architecture(
- "transformer_pointer_generator", "transformer_pointer_generator_wmt_en_de_big_t2t"
-)
-def transformer_pointer_generator_wmt_en_de_big_t2t(args):
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True)
- args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True)
- args.attention_dropout = getattr(args, "attention_dropout", 0.1)
- args.activation_dropout = getattr(args, "activation_dropout", 0.1)
- transformer_pointer_generator_vaswani_wmt_en_de_big(args)
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/preprocessing/denoiser/demucs.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/preprocessing/denoiser/demucs.py
deleted file mode 100644
index 3f70e73d6a37d32e05b6cf0e87f42e13c467cd52..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/preprocessing/denoiser/demucs.py
+++ /dev/null
@@ -1,473 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-# author: adefossez
-
-import math
-import time
-
-import torch as th
-from torch import nn
-from torch.nn import functional as F
-
-from .resample import downsample2, upsample2
-from .utils import capture_init
-
-
-class BLSTM(nn.Module):
- def __init__(self, dim, layers=2, bi=True):
- super().__init__()
- klass = nn.LSTM
- self.lstm = klass(
- bidirectional=bi, num_layers=layers, hidden_size=dim, input_size=dim
- )
- self.linear = None
- if bi:
- self.linear = nn.Linear(2 * dim, dim)
-
- def forward(self, x, hidden=None):
- x, hidden = self.lstm(x, hidden)
- if self.linear:
- x = self.linear(x)
- return x, hidden
-
-
-def rescale_conv(conv, reference):
- std = conv.weight.std().detach()
- scale = (std / reference)**0.5
- conv.weight.data /= scale
- if conv.bias is not None:
- conv.bias.data /= scale
-
-
-def rescale_module(module, reference):
- for sub in module.modules():
- if isinstance(sub, (nn.Conv1d, nn.ConvTranspose1d)):
- rescale_conv(sub, reference)
-
-
-class Demucs(nn.Module):
- """
- Demucs speech enhancement model.
- Args:
- - chin (int): number of input channels.
- - chout (int): number of output channels.
- - hidden (int): number of initial hidden channels.
- - depth (int): number of layers.
- - kernel_size (int): kernel size for each layer.
- - stride (int): stride for each layer.
- - causal (bool): if false, uses BiLSTM instead of LSTM.
- - resample (int): amount of resampling to apply to the input/output.
- Can be one of 1, 2 or 4.
- - growth (float): number of channels is multiplied by this for every layer.
- - max_hidden (int): maximum number of channels. Can be useful to
- control the size/speed of the model.
- - normalize (bool): if true, normalize the input.
- - glu (bool): if true uses GLU instead of ReLU in 1x1 convolutions.
- - rescale (float): controls custom weight initialization.
- See https://arxiv.org/abs/1911.13254.
- - floor (float): stability flooring when normalizing.
-
- """
- @capture_init
- def __init__(self,
- chin=1,
- chout=1,
- hidden=48,
- depth=5,
- kernel_size=8,
- stride=4,
- causal=True,
- resample=4,
- growth=2,
- max_hidden=10_000,
- normalize=True,
- glu=True,
- rescale=0.1,
- floor=1e-3):
-
- super().__init__()
- if resample not in [1, 2, 4]:
- raise ValueError("Resample should be 1, 2 or 4.")
-
- self.chin = chin
- self.chout = chout
- self.hidden = hidden
- self.depth = depth
- self.kernel_size = kernel_size
- self.stride = stride
- self.causal = causal
- self.floor = floor
- self.resample = resample
- self.normalize = normalize
-
- self.encoder = nn.ModuleList()
- self.decoder = nn.ModuleList()
- activation = nn.GLU(1) if glu else nn.ReLU()
- ch_scale = 2 if glu else 1
-
- for index in range(depth):
- encode = []
- encode += [
- nn.Conv1d(chin, hidden, kernel_size, stride),
- nn.ReLU(),
- nn.Conv1d(hidden, hidden * ch_scale, 1), activation,
- ]
- self.encoder.append(nn.Sequential(*encode))
-
- decode = []
- decode += [
- nn.Conv1d(hidden, ch_scale * hidden, 1), activation,
- nn.ConvTranspose1d(hidden, chout, kernel_size, stride),
- ]
- if index > 0:
- decode.append(nn.ReLU())
- self.decoder.insert(0, nn.Sequential(*decode))
- chout = hidden
- chin = hidden
- hidden = min(int(growth * hidden), max_hidden)
-
- self.lstm = BLSTM(chin, bi=not causal)
- if rescale:
- rescale_module(self, reference=rescale)
-
- def valid_length(self, length):
- """
- Return the nearest valid length to use with the model so that
- there are no time steps left over in the convolutions, i.e. for every
- layer, (input length - kernel_size) % stride == 0.
-
- If the mixture has a valid length, the estimated sources
- will have exactly the same length.
- """
- length = math.ceil(length * self.resample)
- for _ in range(self.depth):
- length = math.ceil((length - self.kernel_size) / self.stride) + 1
- length = max(length, 1)
- for _ in range(self.depth):
- length = (length - 1) * self.stride + self.kernel_size
- length = int(math.ceil(length / self.resample))
- return int(length)
-
- @property
- def total_stride(self):
- return self.stride ** self.depth // self.resample
-
- def forward(self, mix):
- if mix.dim() == 2:
- mix = mix.unsqueeze(1)
-
- if self.normalize:
- mono = mix.mean(dim=1, keepdim=True)
- std = mono.std(dim=-1, keepdim=True)
- mix = mix / (self.floor + std)
- else:
- std = 1
- length = mix.shape[-1]
- x = mix
- x = F.pad(x, (0, self.valid_length(length) - length))
- if self.resample == 2:
- x = upsample2(x)
- elif self.resample == 4:
- x = upsample2(x)
- x = upsample2(x)
- skips = []
- for encode in self.encoder:
- x = encode(x)
- skips.append(x)
- x = x.permute(2, 0, 1)
- x, _ = self.lstm(x)
- x = x.permute(1, 2, 0)
- for decode in self.decoder:
- skip = skips.pop(-1)
- x = x + skip[..., :x.shape[-1]]
- x = decode(x)
- if self.resample == 2:
- x = downsample2(x)
- elif self.resample == 4:
- x = downsample2(x)
- x = downsample2(x)
-
- x = x[..., :length]
- return std * x
-
-
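To make the constructor arguments and the `valid_length()` contract above concrete, here is a minimal offline usage sketch; the module name and tensor sizes are assumptions for illustration, not part of this file:

```python
import torch as th
from demucs import Demucs  # hypothetical module name for the file above

model = Demucs(chin=1, chout=1, hidden=48, depth=5, resample=4).eval()
noisy = th.randn(1, 1, 16000)        # (batch, channels, samples), 1 s at 16 kHz
with th.no_grad():
    denoised = model(noisy)          # forward() pads to valid_length(), then trims back
print(denoised.shape)                # torch.Size([1, 1, 16000]) – same length as the input
```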
-def fast_conv(conv, x):
- """
- Faster convolution evaluation if either kernel size is 1
- or length of sequence is 1.
- """
- batch, chin, length = x.shape
- chout, chin, kernel = conv.weight.shape
- assert batch == 1
- if kernel == 1:
- x = x.view(chin, length)
- out = th.addmm(conv.bias.view(-1, 1),
- conv.weight.view(chout, chin), x)
- elif length == kernel:
- x = x.view(chin * kernel, 1)
- out = th.addmm(conv.bias.view(-1, 1),
- conv.weight.view(chout, chin * kernel), x)
- else:
- out = conv(x)
- return out.view(batch, chout, -1)
-
-
-class DemucsStreamer:
- """
- Streaming implementation for Demucs. It supports being fed with any amount
- of audio at a time. You will get back as much audio as possible at that
- point.
-
- Args:
- - demucs (Demucs): Demucs model.
- - dry (float): amount of dry (e.g. input) signal to keep. 0 is maximum
- noise removal, 1 just returns the input signal. Small values > 0
- help to limit distortions.
- - num_frames (int): number of frames to process at once. Higher values
- will increase overall latency but improve the real time factor.
- - resample_lookahead (int): extra lookahead used for the resampling.
- - resample_buffer (int): size of the buffer of previous inputs/outputs
- kept for resampling.
- """
- def __init__(self, demucs,
- dry=0,
- num_frames=1,
- resample_lookahead=64,
- resample_buffer=256):
- device = next(iter(demucs.parameters())).device
- self.demucs = demucs
- self.lstm_state = None
- self.conv_state = None
- self.dry = dry
- self.resample_lookahead = resample_lookahead
- resample_buffer = min(demucs.total_stride, resample_buffer)
- self.resample_buffer = resample_buffer
- self.frame_length = demucs.valid_length(1) + \
- demucs.total_stride * (num_frames - 1)
- self.total_length = self.frame_length + self.resample_lookahead
- self.stride = demucs.total_stride * num_frames
- self.resample_in = th.zeros(demucs.chin, resample_buffer, device=device)
- self.resample_out = th.zeros(
- demucs.chin, resample_buffer, device=device
- )
-
- self.frames = 0
- self.total_time = 0
- self.variance = 0
- self.pending = th.zeros(demucs.chin, 0, device=device)
-
- bias = demucs.decoder[0][2].bias
- weight = demucs.decoder[0][2].weight
- chin, chout, kernel = weight.shape
- self._bias = bias.view(-1, 1).repeat(1, kernel).view(-1, 1)
- self._weight = weight.permute(1, 2, 0).contiguous()
-
- def reset_time_per_frame(self):
- self.total_time = 0
- self.frames = 0
-
- @property
- def time_per_frame(self):
- return self.total_time / self.frames
-
- def flush(self):
- """
- Flush remaining audio by padding it with zero. Call this
- when you have no more input and want to get back the last chunk of audio.
- """
- pending_length = self.pending.shape[1]
- padding = th.zeros(
- self.demucs.chin, self.total_length, device=self.pending.device
- )
- out = self.feed(padding)
- return out[:, :pending_length]
-
- def feed(self, wav):
- """
- Apply the model to mix using true real time evaluation.
- Normalization is done online as is the resampling.
- """
- begin = time.time()
- demucs = self.demucs
- resample_buffer = self.resample_buffer
- stride = self.stride
- resample = demucs.resample
-
- if wav.dim() != 2:
- raise ValueError("input wav should be two dimensional.")
- chin, _ = wav.shape
- if chin != demucs.chin:
- raise ValueError(f"Expected {demucs.chin} channels, got {chin}")
-
- self.pending = th.cat([self.pending, wav], dim=1)
- outs = []
- while self.pending.shape[1] >= self.total_length:
- self.frames += 1
- frame = self.pending[:, :self.total_length]
- dry_signal = frame[:, :stride]
- if demucs.normalize:
- mono = frame.mean(0)
- variance = (mono**2).mean()
- self.variance = variance / self.frames + \
- (1 - 1 / self.frames) * self.variance
- frame = frame / (demucs.floor + math.sqrt(self.variance))
- frame = th.cat([self.resample_in, frame], dim=-1)
- self.resample_in[:] = frame[:, stride - resample_buffer:stride]
-
- if resample == 4:
- frame = upsample2(upsample2(frame))
- elif resample == 2:
- frame = upsample2(frame)
- # remove pre sampling buffer
- frame = frame[:, resample * resample_buffer:]
- # remove extra samples after window
- frame = frame[:, :resample * self.frame_length]
-
- out, extra = self._separate_frame(frame)
- padded_out = th.cat([self.resample_out, out, extra], 1)
- self.resample_out[:] = out[:, -resample_buffer:]
- if resample == 4:
- out = downsample2(downsample2(padded_out))
- elif resample == 2:
- out = downsample2(padded_out)
- else:
- out = padded_out
-
- out = out[:, resample_buffer // resample:]
- out = out[:, :stride]
-
- if demucs.normalize:
- out *= math.sqrt(self.variance)
- out = self.dry * dry_signal + (1 - self.dry) * out
- outs.append(out)
- self.pending = self.pending[:, stride:]
-
- self.total_time += time.time() - begin
- if outs:
- out = th.cat(outs, 1)
- else:
- out = th.zeros(chin, 0, device=wav.device)
- return out
-
- def _separate_frame(self, frame):
- demucs = self.demucs
- skips = []
- next_state = []
- first = self.conv_state is None
- stride = self.stride * demucs.resample
- x = frame[None]
- for idx, encode in enumerate(demucs.encoder):
- stride //= demucs.stride
- length = x.shape[2]
- if idx == demucs.depth - 1:
- # This is slightly faster for the last conv
- x = fast_conv(encode[0], x)
- x = encode[1](x)
- x = fast_conv(encode[2], x)
- x = encode[3](x)
- else:
- if not first:
- prev = self.conv_state.pop(0)
- prev = prev[..., stride:]
- tgt = (length - demucs.kernel_size) // demucs.stride + 1
- missing = tgt - prev.shape[-1]
- offset = length - demucs.kernel_size - \
- demucs.stride * (missing - 1)
- x = x[..., offset:]
- x = encode[1](encode[0](x))
- x = fast_conv(encode[2], x)
- x = encode[3](x)
- if not first:
- x = th.cat([prev, x], -1)
- next_state.append(x)
- skips.append(x)
-
- x = x.permute(2, 0, 1)
- x, self.lstm_state = demucs.lstm(x, self.lstm_state)
- x = x.permute(1, 2, 0)
- # In the following, x contains only correct samples, i.e. the ones
- # for which each time position is covered by two windows of the upper
- # layer. extra contains extra samples to the right, and is used only as
- # a better padding for the online resampling.
- extra = None
- for idx, decode in enumerate(demucs.decoder):
- skip = skips.pop(-1)
- x += skip[..., :x.shape[-1]]
- x = fast_conv(decode[0], x)
- x = decode[1](x)
-
- if extra is not None:
- skip = skip[..., x.shape[-1]:]
- extra += skip[..., :extra.shape[-1]]
- extra = decode[2](decode[1](decode[0](extra)))
- x = decode[2](x)
- next_state.append(
- x[..., -demucs.stride:] - decode[2].bias.view(-1, 1)
- )
- if extra is None:
- extra = x[..., -demucs.stride:]
- else:
- extra[..., :demucs.stride] += next_state[-1]
- x = x[..., :-demucs.stride]
-
- if not first:
- prev = self.conv_state.pop(0)
- x[..., :demucs.stride] += prev
- if idx != demucs.depth - 1:
- x = decode[3](x)
- extra = decode[3](extra)
- self.conv_state = next_state
- return x[0], extra[0]
-
-
-def test():
- import argparse
- parser = argparse.ArgumentParser(
- "denoiser.demucs",
- description="Benchmark the streaming Demucs implementation, as well as "
- "checking the delta with the offline implementation.")
- parser.add_argument("--depth", default=5, type=int)
- parser.add_argument("--resample", default=4, type=int)
- parser.add_argument("--hidden", default=48, type=int)
- parser.add_argument("--sample_rate", default=16000, type=float)
- parser.add_argument("--device", default="cpu")
- parser.add_argument("-t", "--num_threads", type=int)
- parser.add_argument("-f", "--num_frames", type=int, default=1)
- args = parser.parse_args()
- if args.num_threads:
- th.set_num_threads(args.num_threads)
- sr = args.sample_rate
- sr_ms = sr / 1000
- demucs = Demucs(
- depth=args.depth, hidden=args.hidden, resample=args.resample
- ).to(args.device)
- x = th.randn(1, int(sr * 4)).to(args.device)
- out = demucs(x[None])[0]
- streamer = DemucsStreamer(demucs, num_frames=args.num_frames)
- out_rt = []
- frame_size = streamer.total_length
- with th.no_grad():
- while x.shape[1] > 0:
- out_rt.append(streamer.feed(x[:, :frame_size]))
- x = x[:, frame_size:]
- frame_size = streamer.demucs.total_stride
- out_rt.append(streamer.flush())
- out_rt = th.cat(out_rt, 1)
- model_size = sum(p.numel() for p in demucs.parameters()) * 4 / 2**20
- initial_lag = streamer.total_length / sr_ms
- tpf = 1000 * streamer.time_per_frame
- print(f"model size: {model_size:.1f}MB, ", end='')
- print(f"delta batch/streaming: {th.norm(out - out_rt) / th.norm(out):.2%}")
- print(f"initial lag: {initial_lag:.1f}ms, ", end='')
- print(f"stride: {streamer.stride * args.num_frames / sr_ms:.1f}ms")
- print(f"time per frame: {tpf:.1f}ms, ", end='')
- rtf = (1000 * streamer.time_per_frame) / (streamer.stride / sr_ms)
- print(f"RTF: {rtf:.2f}")
- print(f"Total lag with computation: {initial_lag + tpf:.1f}ms")
-
-
-if __name__ == "__main__":
- test()
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/multilingual/sampling_method.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/multilingual/sampling_method.py
deleted file mode 100644
index 140c68f01d60e902ef88f11f30f8813dc15fc681..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/multilingual/sampling_method.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from typing import List
-
-
-logger = logging.getLogger(__name__)
-
-
-def uniform(dataset_sizes: List[int]):
- return [1.0] * len(dataset_sizes)
-
-
-def temperature_sampling(dataset_sizes, temp):
- total_size = sum(dataset_sizes)
- return [(size / total_size) ** (1.0 / temp) for size in dataset_sizes]
-
-
-def make_temperature_sampling(temp=1.0):
- def sampling_func(dataset_sizes):
- return temperature_sampling(dataset_sizes, temp)
-
- return sampling_func
-
-
-def make_ratio_sampling(ratios):
- def sampling_func(dataset_sizes):
- return ratios
-
- return sampling_func
-
-
-class SamplingMethod:
- @staticmethod
- def add_arguments(parser):
- parser.add_argument(
- "--sampling-method",
- choices=[
- "uniform",
- "temperature",
- "concat",
- "RoundRobin",
- ],
- type=str,
- default="concat",
- help="The method to sample data per language pairs",
- )
- parser.add_argument(
- "--sampling-temperature",
- default=1.5,
- type=float,
- help="only work with --sampling-method temperature",
- )
-
- @staticmethod
- def build_sampler(args, task):
- return SamplingMethod(args, task)
-
- def __init__(self, args, task):
- self.args = args
- self.task = task
-
- def is_adaptive(self):
- return False
-
- def sampling_method_selector(self):
- args = self.args
- logger.info(f"selected sampler: {args.sampling_method}")
- if args.sampling_method == "uniform":
- return uniform
- elif args.sampling_method == "temperature" or self.is_adaptive():
- return make_temperature_sampling(float(args.sampling_temperature))
- else:
- # default to concatenating all datasets together
- return None
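For a quick feel of the temperature formula `(size / total) ** (1 / T)` implemented above, here is a hedged example with made-up corpus sizes (it assumes the fairseq package this file belongs to is installed):

```python
from fairseq.data.multilingual.sampling_method import temperature_sampling

sizes = [900_000, 100_000]                  # two corpora, 90% / 10% of the data
weights = temperature_sampling(sizes, temp=1.5)
print([round(w, 3) for w in weights])       # ≈ [0.932, 0.215] – the small corpus is upweighted
```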
diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/README.md b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/README.md
deleted file mode 100644
index 02892bc9dd4344e550596d238e2b71870cfc7dd3..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/README.md
+++ /dev/null
@@ -1,220 +0,0 @@
-# vakyansh-tts
-Text to Speech for Indic languages
-
-## 1. Installation and Setup for training
-
-Clone repo
-Note : for multispeaker glow-tts training, use the [multispeaker](https://github.com/Open-Speech-EkStep/vakyansh-tts/tree/multispeaker) branch
-```
-git clone https://github.com/Open-Speech-EkStep/vakyansh-tts
-```
-Build conda virtual environment
-```
-cd ./vakyansh-tts
-conda create --name <env_name> python=3.7
-conda activate <env_name>
-pip install -r requirements.txt
-```
-Install [apex](https://github.com/NVIDIA/apex); commit: 37cdaf4 for Mixed-precision training
-
-Note : used only for glow-tts
-```
-cd ..
-git clone https://github.com/NVIDIA/apex
-cd apex
-git checkout 37cdaf4
-pip install -v --disable-pip-version-check --no-cache-dir ./
-cd ../vakyansh-tts
-```
-Build Monotonic Alignment Search Code (Cython)
-
-Note : used only for glow-tts
-```
-bash install.sh
-```
-
-## 2. Data Resampling
-
-The data should consist of a folder containing all the .wav files and a text file that lists each filename with its sentence.
-
-Directory structure:
-
-language_folder_name
-```
-language_folder_name
-|-- ./wav/*.wav
-|-- ./text_file_name.txt
-```
-The format for text_file_name.txt (Text file is only needed for glow-tts training)
-
-```
-( audio1.wav "Sentence1." )
-( audio2.wav "Sentence2." )
-```
-
-To resample the .wav files to 22050 sample rate, change the following parameters in the vakyansh-tts/scripts/data/resample.sh
-
-```
-input_wav_path : absolute path to wav file folder in vakyansh_tts/data/
-output_wav_path : absolute path to vakyansh_tts/data/resampled_wav_folder_name
-output_sample_rate : 22050 (or any other desired sample rate)
-```
-
-To run:
-```bash
-cd scripts/data/
-bash resample.sh
-```
-
-
-## 3. Spectrogram Training (glow-tts)
-
-### 3.1 Data Preparation
-
-
-To prepare the data edit the vakyansh-tts/scripts/glow/prepare_data.sh file and change the following parameters
-```
-input_text_path : absolute path to vakyansh_tts/data/text_file_name.txt
-input_wav_path : absolute path to vakyansh_tts/data/resampled_wav_folder_name
-gender : female or male voice
-```
-To run:
-```bash
-cd scripts/glow/
-bash prepare_data.sh
-```
-### 3.2 Training glow-tts
-
-To start the spectrogram training, edit the vakyansh-tts/scripts/glow/train_glow.sh file and change the following parameter:
-```
-gender : female or male voice
-```
-Make sure that the gender is the same as in the prepare_data.sh file
-
-To start the training, run:
-```bash
-cd scripts/glow/
-bash train_glow.sh
-```
-## 4. Vocoder Training (hifi-gan)
-
-### 4.1 Data Preparation
-
-To prepare the data edit the vakyansh-tts/scripts/hifi/prepare_data.sh file and change the following parameters
-```
-input_wav_path : absolute path to vakyansh_tts/data/resampled_wav_folder_name
-gender : female or male voice
-```
-To run:
-```bash
-cd scripts/hifi/
-bash prepare_data.sh
-```
-### 4.2 Training hifi-gan
-
-To start the vocoder training, edit the vakyansh-tts/scripts/hifi/train_hifi.sh file and change the following parameter:
-```
-gender : female or male voice
-```
-Make sure that the gender is the same as in the prepare_data.sh file
-
-To start the training, run:
-```bash
-cd scripts/hifi/
-bash train_hifi.sh
-```
-
-## 5. Inference
-
-### 5.1 Using Gradio
-
-To use the Gradio interface, edit the following parameters in the vakyansh-tts/scripts/inference/gradio.sh file:
-```
-gender : female or male voice
-device : cpu or cuda
-lang : language code
-```
-
-To run:
-```bash
-cd scripts/inference/
-bash gradio.sh
-```
-### 5.2 Using fast API
-To use the FastAPI endpoint, edit the parameters in the vakyansh-tts/scripts/inference/api.sh file as in section 5.1
-
-To run:
-```bash
-cd scripts/inference/
-bash api.sh
-```
-
-### 5.3 Direct Inference using text
-To infer, edit the parameters in the vakyansh-tts/scripts/inference/infer.sh file as in section 5.1 and set the text variable to your input text
-
-To run:
-```bash
-cd scripts/inference/
-bash infer.sh
-```
-
-To configure other parameters, there is also a script that runs the advanced inference. Additional parameters:
-```
-noise_scale : can vary from 0 to 1 for noise factor
-length_scale : can vary from 0 to 2 for changing the speed of the generated audio
-transliteration : whether to switch on/off transliteration. 1: ON, 0: OFF
-number_conversion : whether to switch on/off number to words conversion. 1: ON, 0: OFF
-split_sentences : whether to switch on/off splitting of sentences. 1: ON, 0: OFF
-```
-To run:
-```
-cd scripts/inference/
-bash advanced_infer.sh
-```
-
-### 5.4 Installation of tts_infer package
-
-In tts_infer package, we currently have two components:
-
- 1. Transliteration (AI4bharat's open sourced models) (Languages supported: {'hi', 'gu', 'mr', 'bn', 'te', 'ta', 'kn', 'pa', 'gom', 'mai', 'ml', 'sd', 'si', 'ur'} )
-
- 2. Num to Word (Languages supported: {'en', 'hi', 'gu', 'mr', 'bn', 'te', 'ta', 'kn', 'or', 'pa'} )
-```
-git clone https://github.com/Open-Speech-EkStep/vakyansh-tts
-cd vakyansh-tts
-bash install.sh
-python setup.py bdist_wheel
-pip install -e .
-cd tts_infer
-gsutil -m cp -r gs://vakyaansh-open-models/translit_models .
-```
-
-Usage: Refer to example file in tts_infer/
-```
-from tts_infer.tts import TextToMel, MelToWav
-from tts_infer.transliterate import XlitEngine
-from tts_infer.num_to_word_on_sent import normalize_nums
-
-import re
-from scipy.io.wavfile import write
-
-text_to_mel = TextToMel(glow_model_dir='/path/to/glow-tts/checkpoint/dir', device='cuda')
-mel_to_wav = MelToWav(hifi_model_dir='/path/to/hifi/checkpoint/dir', device='cuda')
-
-def translit(text, lang):
- reg = re.compile(r'[a-zA-Z]')
- engine = XlitEngine(lang)
- words = [engine.translit_word(word, topk=1)[lang][0] if reg.match(word) else word for word in text.split()]
- updated_sent = ' '.join(words)
- return updated_sent
-
-def run_tts(text, lang):
- text = text.replace('।', '.') # only for hindi models
- text_num_to_word = normalize_nums(text, lang) # converting numbers to words in lang
- text_num_to_word_and_transliterated = translit(text_num_to_word, lang) # transliterating english words to lang
-
- mel = text_to_mel.generate_mel(text_num_to_word_and_transliterated)
- audio, sr = mel_to_wav.generate_wav(mel)
- write(filename='temp.wav', rate=sr, data=audio) # for saving wav file, if needed
- return (sr, audio)
-```
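A hedged example of calling the `run_tts()` helper defined above; the input sentence is just a sample, and the checkpoint paths passed to `TextToMel`/`MelToWav` must point to real glow-tts and hifi-gan checkpoints:

```python
# Sample call; numbers are expanded to words and English words are transliterated.
sr, audio = run_tts('नमस्ते, यह एक 123 टेस्ट है', 'hi')
print(sr, audio.shape)
```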
diff --git a/spaces/Hina4867/bingo/src/components/chat-header.tsx b/spaces/Hina4867/bingo/src/components/chat-header.tsx
deleted file mode 100644
index c6664b8dee61179f844d45c5bd650518fc2cb4c2..0000000000000000000000000000000000000000
--- a/spaces/Hina4867/bingo/src/components/chat-header.tsx
+++ /dev/null
@@ -1,12 +0,0 @@
-import LogoIcon from '@/assets/images/logo.svg'
-import Image from 'next/image'
-
-export function ChatHeader() {
- return (
- <div>
- <Image alt="logo" src={LogoIcon} />
- <div>欢迎使用新必应</div>{/* Welcome to the new Bing */}
- <div>由 AI 支持的网页版 Copilot</div>{/* AI-powered Copilot for the web */}
- </div>
- )
-}
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/text_to_speech_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/text_to_speech_dataset.py
deleted file mode 100644
index abfcb2be4028889acd72c6f40d4c832e48cff344..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/text_to_speech_dataset.py
+++ /dev/null
@@ -1,215 +0,0 @@
-# Copyright (c) 2017-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
-
-from pathlib import Path
-from typing import List, Dict, Optional, Any
-from dataclasses import dataclass
-
-import numpy as np
-import torch
-
-from fairseq.data.audio.speech_to_text_dataset import (
- SpeechToTextDataset, SpeechToTextDatasetCreator, S2TDataConfig,
- _collate_frames, get_features_or_waveform
-)
-from fairseq.data import Dictionary, data_utils as fairseq_data_utils
-
-
-@dataclass
-class TextToSpeechDatasetItem(object):
- index: int
- source: torch.Tensor
- target: Optional[torch.Tensor] = None
- speaker_id: Optional[int] = None
- duration: Optional[torch.Tensor] = None
- pitch: Optional[torch.Tensor] = None
- energy: Optional[torch.Tensor] = None
-
-
-class TextToSpeechDataset(SpeechToTextDataset):
- def __init__(
- self,
- split: str,
- is_train_split: bool,
- cfg: S2TDataConfig,
- audio_paths: List[str],
- n_frames: List[int],
- src_texts: Optional[List[str]] = None,
- tgt_texts: Optional[List[str]] = None,
- speakers: Optional[List[str]] = None,
- src_langs: Optional[List[str]] = None,
- tgt_langs: Optional[List[str]] = None,
- ids: Optional[List[str]] = None,
- tgt_dict: Optional[Dictionary] = None,
- pre_tokenizer=None,
- bpe_tokenizer=None,
- n_frames_per_step=1,
- speaker_to_id=None,
- durations: Optional[List[List[int]]] = None,
- pitches: Optional[List[str]] = None,
- energies: Optional[List[str]] = None
- ):
- super(TextToSpeechDataset, self).__init__(
- split, is_train_split, cfg, audio_paths, n_frames,
- src_texts=src_texts, tgt_texts=tgt_texts, speakers=speakers,
- src_langs=src_langs, tgt_langs=tgt_langs, ids=ids,
- tgt_dict=tgt_dict, pre_tokenizer=pre_tokenizer,
- bpe_tokenizer=bpe_tokenizer, n_frames_per_step=n_frames_per_step,
- speaker_to_id=speaker_to_id
- )
- self.durations = durations
- self.pitches = pitches
- self.energies = energies
-
- def __getitem__(self, index: int) -> TextToSpeechDatasetItem:
- s2t_item = super().__getitem__(index)
-
- duration, pitch, energy = None, None, None
- if self.durations is not None:
- duration = torch.tensor(
- self.durations[index] + [0], dtype=torch.long # pad 0 for EOS
- )
- if self.pitches is not None:
- pitch = get_features_or_waveform(self.pitches[index])
- pitch = torch.from_numpy(
- np.concatenate((pitch, [0])) # pad 0 for EOS
- ).float()
- if self.energies is not None:
- energy = get_features_or_waveform(self.energies[index])
- energy = torch.from_numpy(
- np.concatenate((energy, [0])) # pad 0 for EOS
- ).float()
- return TextToSpeechDatasetItem(
- index=index, source=s2t_item.source, target=s2t_item.target,
- speaker_id=s2t_item.speaker_id, duration=duration, pitch=pitch,
- energy=energy
- )
-
- def collater(self, samples: List[TextToSpeechDatasetItem]) -> Dict[str, Any]:
- if len(samples) == 0:
- return {}
-
- src_lengths, order = torch.tensor(
- [s.target.shape[0] for s in samples], dtype=torch.long
- ).sort(descending=True)
- id_ = torch.tensor([s.index for s in samples],
- dtype=torch.long).index_select(0, order)
- feat = _collate_frames(
- [s.source for s in samples], self.cfg.use_audio_input
- ).index_select(0, order)
- target_lengths = torch.tensor(
- [s.source.shape[0] for s in samples], dtype=torch.long
- ).index_select(0, order)
-
- src_tokens = fairseq_data_utils.collate_tokens(
- [s.target for s in samples],
- self.tgt_dict.pad(),
- self.tgt_dict.eos(),
- left_pad=False,
- move_eos_to_beginning=False,
- ).index_select(0, order)
-
- speaker = None
- if self.speaker_to_id is not None:
- speaker = torch.tensor(
- [s.speaker_id for s in samples], dtype=torch.long
- ).index_select(0, order).view(-1, 1)
-
- bsz, _, d = feat.size()
- prev_output_tokens = torch.cat(
- (feat.new_zeros((bsz, 1, d)), feat[:, :-1, :]), dim=1
- )
-
- durations, pitches, energies = None, None, None
- if self.durations is not None:
- durations = fairseq_data_utils.collate_tokens(
- [s.duration for s in samples], 0
- ).index_select(0, order)
- assert src_tokens.shape[1] == durations.shape[1]
- if self.pitches is not None:
- pitches = _collate_frames([s.pitch for s in samples], True)
- pitches = pitches.index_select(0, order)
- assert src_tokens.shape[1] == pitches.shape[1]
- if self.energies is not None:
- energies = _collate_frames([s.energy for s in samples], True)
- energies = energies.index_select(0, order)
- assert src_tokens.shape[1] == energies.shape[1]
- src_texts = [self.tgt_dict.string(samples[i].target) for i in order]
-
- return {
- "id": id_,
- "net_input": {
- "src_tokens": src_tokens,
- "src_lengths": src_lengths,
- "prev_output_tokens": prev_output_tokens,
- },
- "speaker": speaker,
- "target": feat,
- "durations": durations,
- "pitches": pitches,
- "energies": energies,
- "target_lengths": target_lengths,
- "ntokens": sum(target_lengths).item(),
- "nsentences": len(samples),
- "src_texts": src_texts,
- }
-
-
-class TextToSpeechDatasetCreator(SpeechToTextDatasetCreator):
- KEY_DURATION = "duration"
- KEY_PITCH = "pitch"
- KEY_ENERGY = "energy"
-
- @classmethod
- def _from_list(
- cls,
- split_name: str,
- is_train_split,
- samples: List[Dict],
- cfg: S2TDataConfig,
- tgt_dict,
- pre_tokenizer,
- bpe_tokenizer,
- n_frames_per_step,
- speaker_to_id
- ) -> TextToSpeechDataset:
- audio_root = Path(cfg.audio_root)
- ids = [s[cls.KEY_ID] for s in samples]
- audio_paths = [(audio_root / s[cls.KEY_AUDIO]).as_posix() for s in samples]
- n_frames = [int(s[cls.KEY_N_FRAMES]) for s in samples]
- tgt_texts = [s[cls.KEY_TGT_TEXT] for s in samples]
- src_texts = [s.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for s in samples]
- speakers = [s.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for s in samples]
- src_langs = [s.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for s in samples]
- tgt_langs = [s.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for s in samples]
-
- durations = [s.get(cls.KEY_DURATION, None) for s in samples]
- durations = [
- None if dd is None else [int(d) for d in dd.split(" ")]
- for dd in durations
- ]
- durations = None if any(dd is None for dd in durations) else durations
-
- pitches = [s.get(cls.KEY_PITCH, None) for s in samples]
- pitches = [
- None if pp is None else (audio_root / pp).as_posix()
- for pp in pitches
- ]
- pitches = None if any(pp is None for pp in pitches) else pitches
-
- energies = [s.get(cls.KEY_ENERGY, None) for s in samples]
- energies = [
- None if ee is None else (audio_root / ee).as_posix()
- for ee in energies]
- energies = None if any(ee is None for ee in energies) else energies
-
- return TextToSpeechDataset(
- split_name, is_train_split, cfg, audio_paths, n_frames,
- src_texts, tgt_texts, speakers, src_langs, tgt_langs, ids, tgt_dict,
- pre_tokenizer, bpe_tokenizer, n_frames_per_step, speaker_to_id,
- durations, pitches, energies
- )
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/concat_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/concat_dataset.py
deleted file mode 100644
index 01a4078bb159fa44b2d1062b9a971fe7f1abd1c2..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/data/concat_dataset.py
+++ /dev/null
@@ -1,124 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import bisect
-
-import numpy as np
-from torch.utils.data.dataloader import default_collate
-
-from . import FairseqDataset
-
-
-class ConcatDataset(FairseqDataset):
- @staticmethod
- def cumsum(sequence, sample_ratios):
- r, s = [], 0
- for e, ratio in zip(sequence, sample_ratios):
- curr_len = int(ratio * len(e))
- r.append(curr_len + s)
- s += curr_len
- return r
-
- def __init__(self, datasets, sample_ratios=1):
- super(ConcatDataset, self).__init__()
- assert len(datasets) > 0, "datasets should not be an empty iterable"
- self.datasets = list(datasets)
- if isinstance(sample_ratios, int):
- sample_ratios = [sample_ratios] * len(self.datasets)
- self.sample_ratios = sample_ratios
- self.cumulative_sizes = self.cumsum(self.datasets, sample_ratios)
- self.real_sizes = [len(d) for d in self.datasets]
-
- def __len__(self):
- return self.cumulative_sizes[-1]
-
- def __getitem__(self, idx):
- dataset_idx, sample_idx = self._get_dataset_and_sample_index(idx)
- return self.datasets[dataset_idx][sample_idx]
-
- def _get_dataset_and_sample_index(self, idx: int):
- dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx)
- if dataset_idx == 0:
- sample_idx = idx
- else:
- sample_idx = idx - self.cumulative_sizes[dataset_idx - 1]
- sample_idx = sample_idx % self.real_sizes[dataset_idx]
- return dataset_idx, sample_idx
-
- def collater(self, samples, **extra_args):
- # For now only supports datasets with same underlying collater implementations
- if hasattr(self.datasets[0], "collater"):
- return self.datasets[0].collater(samples, **extra_args)
- else:
- return default_collate(samples, **extra_args)
-
- def size(self, idx: int):
- """
- Return an example's size as a float or tuple.
- """
- dataset_idx, sample_idx = self._get_dataset_and_sample_index(idx)
- return self.datasets[dataset_idx].size(sample_idx)
-
- def num_tokens(self, index: int):
- return np.max(self.size(index))
-
- def attr(self, attr: str, index: int):
- dataset_idx = bisect.bisect_right(self.cumulative_sizes, index)
- return getattr(self.datasets[dataset_idx], attr, None)
-
- @property
- def sizes(self):
- _dataset_sizes = []
- for ds, sr in zip(self.datasets, self.sample_ratios):
- if isinstance(ds.sizes, np.ndarray):
- _dataset_sizes.append(np.tile(ds.sizes, sr))
- else:
- # Only support underlying dataset with single size array.
- assert isinstance(ds.sizes, list)
- _dataset_sizes.append(np.tile(ds.sizes[0], sr))
- return np.concatenate(_dataset_sizes)
-
- @property
- def supports_prefetch(self):
- return all(d.supports_prefetch for d in self.datasets)
-
- def ordered_indices(self):
- """
- Returns indices sorted by length. So less padding is needed.
- """
- if isinstance(self.sizes, np.ndarray) and len(self.sizes.shape) > 1:
- # special handling for concatenating lang_pair_datasets
- indices = np.arange(len(self))
- sizes = self.sizes
- tgt_sizes = (
- sizes[:, 1] if len(sizes.shape) > 0 and sizes.shape[1] > 1 else None
- )
- src_sizes = (
- sizes[:, 0] if len(sizes.shape) > 0 and sizes.shape[1] > 1 else sizes
- )
- # sort by target length, then source length
- if tgt_sizes is not None:
- indices = indices[np.argsort(tgt_sizes[indices], kind="mergesort")]
- return indices[np.argsort(src_sizes[indices], kind="mergesort")]
- else:
- return np.argsort(self.sizes)
-
- def prefetch(self, indices):
- frm = 0
- for to, ds in zip(self.cumulative_sizes, self.datasets):
- real_size = len(ds)
- if getattr(ds, "supports_prefetch", False):
- ds.prefetch([(i - frm) % real_size for i in indices if frm <= i < to])
- frm = to
-
- @property
- def can_reuse_epoch_itr_across_epochs(self):
- return all(d.can_reuse_epoch_itr_across_epochs for d in self.datasets)
-
- def set_epoch(self, epoch):
- super().set_epoch(epoch)
- for ds in self.datasets:
- if hasattr(ds, "set_epoch"):
- ds.set_epoch(epoch)
diff --git a/spaces/ICML2022/resefa/compute_direction.py b/spaces/ICML2022/resefa/compute_direction.py
deleted file mode 100644
index ab441f331739acae9606f34f42d4c33e69b952d3..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/resefa/compute_direction.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# python3.7
-"""Computes the semantic directions regarding a specific image region."""
-
-import os
-import argparse
-import numpy as np
-from tqdm import tqdm
-
-from coordinate import COORDINATES
-from coordinate import get_mask
-from utils.image_utils import save_image
-
-
-def parse_args():
- """Parses arguments."""
-
- parser = argparse.ArgumentParser()
- parser.add_argument('jaco_path', type=str,
- help='Path to jacobian matrix.')
- parser.add_argument('--region', type=str, default='eyes',
- help='The region to be used to compute jacobian.')
- parser.add_argument('--save_dir', type=str, default='',
-                        help='Directory to save the results. If not specified, '
- 'the results will be saved to '
- '`work_dirs/{TASK_SPECIFIC}/` by default')
- parser.add_argument('--job', type=str, default='directions',
- help='Name for the job (default: directions)')
- parser.add_argument('--name', type=str, default='resefa',
-                        help='Name used when saving the results.')
- parser.add_argument('--data_name', type=str, default='ffhq',
- help='Name of the dataset.')
- parser.add_argument('--full_rank', action='store_true',
-                        help='Whether to make the background matrix full rank'
- ' (default: False).')
- parser.add_argument('--tao', type=float, default=1e-3,
- help='Coefficient to the identity matrix '
- '(default: 1e-3).')
- return parser.parse_args()
-
-
-def main():
- """Main function."""
- args = parse_args()
- assert os.path.exists(args.jaco_path)
- Jacobians = np.load(args.jaco_path)
- image_size = Jacobians.shape[2]
- w_dim = Jacobians.shape[-1]
- coord_dict = COORDINATES[args.data_name]
- assert args.region in coord_dict, \
- f'{args.region} coordinate is not defined in ' \
- f'COORDINATE_{args.data_name}. Please define this region first!'
- coords = coord_dict[args.region]
- mask = get_mask(image_size, coordinate=coords)
- foreground_ind = np.where(mask == 1)
- background_ind = np.where((1 - mask) == 1)
- temp_dir = f'./work_dirs/{args.job}/{args.data_name}/{args.region}'
- save_dir = args.save_dir or temp_dir
- os.makedirs(save_dir, exist_ok=True)
- for ind in tqdm(range(Jacobians.shape[0])):
- Jacobian = Jacobians[ind]
- if len(Jacobian.shape) == 4: # [H, W, 1, latent_dim]
- Jaco_fore = Jacobian[foreground_ind[0], foreground_ind[1], 0]
- Jaco_back = Jacobian[background_ind[0], background_ind[1], 0]
- elif len(Jacobian.shape) == 5: # [channel, H, W, 1, latent_dim]
- Jaco_fore = Jacobian[:, foreground_ind[0], foreground_ind[1], 0]
- Jaco_back = Jacobian[:, background_ind[0], background_ind[1], 0]
- else:
- raise ValueError('Shape of the Jacobian is not correct!')
- Jaco_fore = np.reshape(Jaco_fore, [-1, w_dim])
- Jaco_back = np.reshape(Jaco_back, [-1, w_dim])
- coef_f = 1 / Jaco_fore.shape[0]
- coef_b = 1 / Jaco_back.shape[0]
- M_fore = coef_f * Jaco_fore.T.dot(Jaco_fore)
- M_back = coef_b * Jaco_back.T.dot(Jaco_back)
- if args.full_rank:
- # J = J_b^TJ_b
- # J = (J + tao * trace(J) * I)
- print('Using full rank')
- coef = args.tao * np.trace(M_back)
- M_back = M_back + coef * np.identity(M_back.shape[0])
- # inv(B) * A = lambda x
- temp = np.linalg.inv(M_back).dot(M_fore)
- eig_val, eig_vec = np.linalg.eig(temp)
- eig_val = np.real(eig_val)
- eig_vec = np.real(eig_vec)
- directions = eig_vec.T
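-        # rank the candidate directions by descending eigenvalue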
- directions = directions[np.argsort(-eig_val)]
- save_name = f'{save_dir}/image_{ind:02d}_region_{args.region}' \
- f'_name_{args.name}'
- np.save(f'{save_name}.npy', directions)
- mask_i = np.tile(mask[:, :, np.newaxis], [1, 1, 3]) * 255
- save_image(f'{save_name}_mask.png', mask_i.astype(np.uint8))
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/IDEA-CCNL/Taiyi-Stable-Diffusion-Chinese/README.md b/spaces/IDEA-CCNL/Taiyi-Stable-Diffusion-Chinese/README.md
deleted file mode 100644
index 6937d9ef4a8879d458efc0ff03b0f070741c253a..0000000000000000000000000000000000000000
--- a/spaces/IDEA-CCNL/Taiyi-Stable-Diffusion-Chinese/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Taiyi Stable Diffusion Chinese
-emoji: 🤯
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.10.0
-app_file: app.py
-pinned: false
-license: creativeml-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Illumotion/Koboldcpp/examples/embd-input/embd-input-lib.cpp b/spaces/Illumotion/Koboldcpp/examples/embd-input/embd-input-lib.cpp
deleted file mode 100644
index 99e6bdad5ac45442a05e2663bea6d3ce2b3aeae9..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/examples/embd-input/embd-input-lib.cpp
+++ /dev/null
@@ -1,220 +0,0 @@
-#include "build-info.h"
-#include "common.h"
-#include "embd-input.h"
-
-#include <cassert>
-#include <cinttypes>
-#include <cmath>
-#include <cstdio>
-#include <cstring>
-#include <ctime>
-#include <fstream>
-#include <iostream>
-#include <string>
-#include <vector>
-
-static llama_context ** g_ctx;
-
-extern "C" {
-
-struct MyModel* create_mymodel(int argc, char ** argv) {
- gpt_params params;
-
- if (!gpt_params_parse(argc, argv, params)) {
- return nullptr;
- }
-
- print_build_info();
-
- if (params.seed == LLAMA_DEFAULT_SEED) {
- params.seed = uint32_t(time(NULL));
- }
- fprintf(stderr, "%s: seed = %d\n", __func__, params.seed);
-
- llama_backend_init(params.numa);
-
- llama_model * model;
- llama_context * ctx;
-
- g_ctx = &ctx;
-
- // load the model and apply lora adapter, if any
- std::tie(model, ctx) = llama_init_from_gpt_params(params);
- if (model == NULL) {
- fprintf(stderr, "%s: error: unable to load model\n", __func__);
- return nullptr;
- }
-
- // print system information
- {
- fprintf(stderr, "\n");
- fprintf(stderr, "%s\n", get_system_info(params).c_str());
- }
- struct MyModel * ret = new MyModel();
- ret->ctx = ctx;
- ret->params = params;
- ret->n_past = 0;
- // printf("ctx: %d\n", ret->ctx);
- return ret;
-}
-
-void free_mymodel(struct MyModel * mymodel) {
- llama_context * ctx = mymodel->ctx;
- llama_print_timings(ctx);
- llama_free(ctx);
- delete mymodel;
-}
-
-
-bool eval_float(void * model, float * input, int N){
- MyModel * mymodel = (MyModel*)model;
- llama_context * ctx = mymodel->ctx;
- gpt_params params = mymodel->params;
- int n_emb = llama_n_embd(llama_get_model(ctx));
- int n_past = mymodel->n_past;
- int n_batch = N; // params.n_batch;
-
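-    // evaluate the input embedding vectors in chunks of at most n_batch positions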
- for (int i = 0; i < (int) N; i += n_batch) {
- int n_eval = (int) N - i;
- if (n_eval > n_batch) {
- n_eval = n_batch;
- }
- llama_batch batch = { int32_t(n_eval), nullptr, (input+i*n_emb), nullptr, nullptr, nullptr, n_past, 1, 0, };
- if (llama_decode(ctx, batch)) {
- fprintf(stderr, "%s : failed to eval\n", __func__);
- return false;
- }
- n_past += n_eval;
- }
- mymodel->n_past = n_past;
- return true;
-}
-
-bool eval_tokens(void * model, std::vector<llama_token> tokens) {
- MyModel * mymodel = (MyModel* )model;
- llama_context * ctx;
- ctx = mymodel->ctx;
- gpt_params params = mymodel->params;
- int n_past = mymodel->n_past;
- for (int i = 0; i < (int) tokens.size(); i += params.n_batch) {
- int n_eval = (int) tokens.size() - i;
- if (n_eval > params.n_batch) {
- n_eval = params.n_batch;
- }
- if (llama_decode(ctx, llama_batch_get_one(&tokens[i], n_eval, n_past, 0))) {
- fprintf(stderr, "%s : failed to eval\n", __func__);
- return false;
- }
- n_past += n_eval;
- }
- mymodel->n_past = n_past;
- return true;
-}
-
-bool eval_id(struct MyModel* mymodel, int id) {
-    std::vector<llama_token> tokens;
- tokens.push_back(id);
- return eval_tokens(mymodel, tokens);
-}
-
-bool eval_string(struct MyModel * mymodel,const char* str){
- llama_context * ctx = mymodel->ctx;
- std::string str2 = str;
-    std::vector<llama_token> embd_inp = ::llama_tokenize(ctx, str2, true);
- eval_tokens(mymodel, embd_inp);
- return true;
-}
-
-llama_token sampling_id(struct MyModel* mymodel) {
- llama_context* ctx = mymodel->ctx;
- gpt_params params = mymodel->params;
- // int n_ctx = llama_n_ctx(ctx);
-
- // out of user input, sample next token
- const float temp = params.temp;
- const int32_t top_k = params.top_k <= 0 ? llama_n_vocab(llama_get_model(ctx)) : params.top_k;
- const float top_p = params.top_p;
- const float tfs_z = params.tfs_z;
- const float typical_p = params.typical_p;
- // const int32_t repeat_last_n = params.repeat_last_n < 0 ? n_ctx : params.repeat_last_n;
- // const float repeat_penalty = params.repeat_penalty;
- // const float alpha_presence = params.presence_penalty;
- // const float alpha_frequency = params.frequency_penalty;
- const int mirostat = params.mirostat;
- const float mirostat_tau = params.mirostat_tau;
- const float mirostat_eta = params.mirostat_eta;
- // const bool penalize_nl = params.penalize_nl;
-
- llama_token id = 0;
- {
- auto logits = llama_get_logits(ctx);
- auto n_vocab = llama_n_vocab(llama_get_model(ctx));
-
- // Apply params.logit_bias map
- for (auto it = params.logit_bias.begin(); it != params.logit_bias.end(); it++) {
- logits[it->first] += it->second;
- }
-
-        std::vector<llama_token_data> candidates;
- candidates.reserve(n_vocab);
- for (llama_token token_id = 0; token_id < n_vocab; token_id++) {
- candidates.emplace_back(llama_token_data{token_id, logits[token_id], 0.0f});
- }
-
- llama_token_data_array candidates_p = { candidates.data(), candidates.size(), false };
-
- // TODO: Apply penalties
- // float nl_logit = logits[llama_token_nl(ctx)];
- // auto last_n_repeat = std::min(std::min((int)last_n_tokens.size(), repeat_last_n), n_ctx);
- // llama_sample_repetition_penalty(ctx, &candidates_p,
- // last_n_tokens.data() + last_n_tokens.size() - last_n_repeat,
- // last_n_repeat, repeat_penalty);
- // llama_sample_frequency_and_presence_penalties(ctx, &candidates_p,
- // last_n_tokens.data() + last_n_tokens.size() - last_n_repeat,
- // last_n_repeat, alpha_frequency, alpha_presence);
- // if (!penalize_nl) {
- // logits[llama_token_nl(ctx)] = nl_logit;
- // }
-
- if (temp <= 0) {
- // Greedy sampling
- id = llama_sample_token_greedy(ctx, &candidates_p);
- } else {
- if (mirostat == 1) {
- static float mirostat_mu = 2.0f * mirostat_tau;
- const int mirostat_m = 100;
- llama_sample_temp(ctx, &candidates_p, temp);
- id = llama_sample_token_mirostat(ctx, &candidates_p, mirostat_tau, mirostat_eta, mirostat_m, &mirostat_mu);
- } else if (mirostat == 2) {
- static float mirostat_mu = 2.0f * mirostat_tau;
- llama_sample_temp(ctx, &candidates_p, temp);
- id = llama_sample_token_mirostat_v2(ctx, &candidates_p, mirostat_tau, mirostat_eta, &mirostat_mu);
- } else {
- // Temperature sampling
- llama_sample_top_k(ctx, &candidates_p, top_k, 1);
- llama_sample_tail_free(ctx, &candidates_p, tfs_z, 1);
- llama_sample_typical(ctx, &candidates_p, typical_p, 1);
- llama_sample_top_p(ctx, &candidates_p, top_p, 1);
- llama_sample_temp(ctx, &candidates_p, temp);
- id = llama_sample_token(ctx, &candidates_p);
- }
- }
- }
-
- return id;
-}
-
-const char * sampling(struct MyModel * mymodel) {
- llama_context * ctx = mymodel->ctx;
- int id = sampling_id(mymodel);
- static std::string ret;
- if (id == llama_token_eos(ctx)) {
- ret = "";
- } else {
- ret = llama_token_to_piece(ctx, id);
- }
- eval_id(mymodel, id);
- return ret.c_str();
-}
-
-}
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/predict_inner_features.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/predict_inner_features.py
deleted file mode 100644
index 4f9f7a11a6c4757a4eaa05cf1ac648d372f7e02f..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/predict_inner_features.py
+++ /dev/null
@@ -1,119 +0,0 @@
-#!/usr/bin/env python3
-
-# Example command:
-# ./bin/predict.py \
-# model.path= \
-# indir= \
-# outdir=
-
-import logging
-import os
-import sys
-import traceback
-
-from saicinpainting.evaluation.utils import move_to_device
-
-os.environ['OMP_NUM_THREADS'] = '1'
-os.environ['OPENBLAS_NUM_THREADS'] = '1'
-os.environ['MKL_NUM_THREADS'] = '1'
-os.environ['VECLIB_MAXIMUM_THREADS'] = '1'
-os.environ['NUMEXPR_NUM_THREADS'] = '1'
-
-import cv2
-import hydra
-import numpy as np
-import torch
-import tqdm
-import yaml
-from omegaconf import OmegaConf
-from torch.utils.data._utils.collate import default_collate
-
-from saicinpainting.training.data.datasets import make_default_val_dataset
-from saicinpainting.training.trainers import load_checkpoint, DefaultInpaintingTrainingModule
-from saicinpainting.utils import register_debug_signal_handlers, get_shape
-
-LOGGER = logging.getLogger(__name__)
-
-
-@hydra.main(config_path='../configs/prediction', config_name='default_inner_features.yaml')
-def main(predict_config: OmegaConf):
- try:
- register_debug_signal_handlers() # kill -10 will result in traceback dumped into log
-
- device = torch.device(predict_config.device)
-
- train_config_path = os.path.join(predict_config.model.path, 'config.yaml')
- with open(train_config_path, 'r') as f:
- train_config = OmegaConf.create(yaml.safe_load(f))
-
- checkpoint_path = os.path.join(predict_config.model.path, 'models', predict_config.model.checkpoint)
- model = load_checkpoint(train_config, checkpoint_path, strict=False)
- model.freeze()
- model.to(device)
-
- assert isinstance(model, DefaultInpaintingTrainingModule), 'Only DefaultInpaintingTrainingModule is supported'
- assert isinstance(getattr(model.generator, 'model', None), torch.nn.Sequential)
-
- if not predict_config.indir.endswith('/'):
- predict_config.indir += '/'
-
- dataset = make_default_val_dataset(predict_config.indir, **predict_config.dataset)
-
- max_level = max(predict_config.levels)
-
- with torch.no_grad():
- for img_i in tqdm.trange(len(dataset)):
- mask_fname = dataset.mask_filenames[img_i]
- cur_out_fname = os.path.join(predict_config.outdir, os.path.splitext(mask_fname[len(predict_config.indir):])[0])
- os.makedirs(os.path.dirname(cur_out_fname), exist_ok=True)
-
- batch = move_to_device(default_collate([dataset[img_i]]), device)
-
- img = batch['image']
- mask = batch['mask']
- mask[:] = 0
- mask_h, mask_w = mask.shape[-2:]
- mask[:, :,
- mask_h // 2 - predict_config.hole_radius : mask_h // 2 + predict_config.hole_radius,
- mask_w // 2 - predict_config.hole_radius : mask_w // 2 + predict_config.hole_radius] = 1
-
- masked_img = torch.cat([img * (1 - mask), mask], dim=1)
-
- feats = masked_img
- for level_i, level in enumerate(model.generator.model):
- feats = level(feats)
- if level_i in predict_config.levels:
- cur_feats = torch.cat([f for f in feats if torch.is_tensor(f)], dim=1) \
- if isinstance(feats, tuple) else feats
-
- if predict_config.slice_channels:
- cur_feats = cur_feats[:, slice(*predict_config.slice_channels)]
-
- cur_feat = cur_feats.pow(2).mean(1).pow(0.5).clone()
- cur_feat -= cur_feat.min()
- cur_feat /= cur_feat.std()
- cur_feat = cur_feat.clamp(0, 1) / 1
- cur_feat = cur_feat.cpu().numpy()[0]
- cur_feat *= 255
- cur_feat = np.clip(cur_feat, 0, 255).astype('uint8')
- cv2.imwrite(cur_out_fname + f'_lev{level_i:02d}_norm.png', cur_feat)
-
- # for channel_i in predict_config.channels:
- #
- # cur_feat = cur_feats[0, channel_i].clone().detach().cpu().numpy()
- # cur_feat -= cur_feat.min()
- # cur_feat /= cur_feat.max()
- # cur_feat *= 255
- # cur_feat = np.clip(cur_feat, 0, 255).astype('uint8')
- # cv2.imwrite(cur_out_fname + f'_lev{level_i}_ch{channel_i}.png', cur_feat)
- elif level_i >= max_level:
- break
- except KeyboardInterrupt:
- LOGGER.warning('Interrupted by user')
- except Exception as ex:
- LOGGER.critical(f'Prediction failed due to {ex}:\n{traceback.format_exc()}')
- sys.exit(1)
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/JUNGU/VToonify/vtoonify/smooth_parsing_map.py b/spaces/JUNGU/VToonify/vtoonify/smooth_parsing_map.py
deleted file mode 100644
index 7720d0c7786925db38d3e793d6a3a8f68f6e663e..0000000000000000000000000000000000000000
--- a/spaces/JUNGU/VToonify/vtoonify/smooth_parsing_map.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import os
-#os.environ['CUDA_VISIBLE_DEVICES'] = "0"
-import numpy as np
-import cv2
-import math
-import argparse
-from tqdm import tqdm
-import torch
-from torch import nn
-from torchvision import transforms
-import torch.nn.functional as F
-from model.raft.core.raft import RAFT
-from model.raft.core.utils.utils import InputPadder
-from model.bisenet.model import BiSeNet
-from model.stylegan.model import Downsample
-
-class Options():
- def __init__(self):
-
- self.parser = argparse.ArgumentParser(description="Smooth Parsing Maps")
- self.parser.add_argument("--window_size", type=int, default=5, help="temporal window size")
-
- self.parser.add_argument("--faceparsing_path", type=str, default='./checkpoint/faceparsing.pth', help="path of the face parsing model")
- self.parser.add_argument("--raft_path", type=str, default='./checkpoint/raft-things.pth', help="path of the RAFT model")
-
- self.parser.add_argument("--video_path", type=str, help="path of the target video")
- self.parser.add_argument("--output_path", type=str, default='./output/', help="path of the output parsing maps")
-
- def parse(self):
- self.opt = self.parser.parse_args()
- args = vars(self.opt)
- print('Load options')
- for name, value in sorted(args.items()):
- print('%s: %s' % (str(name), str(value)))
- return self.opt
-
-# from RAFT
-def warp(x, flo):
- """
- warp an image/tensor (im2) back to im1, according to the optical flow
- x: [B, C, H, W] (im2)
- flo: [B, 2, H, W] flow
- """
- B, C, H, W = x.size()
- # mesh grid
- xx = torch.arange(0, W).view(1,-1).repeat(H,1)
- yy = torch.arange(0, H).view(-1,1).repeat(1,W)
- xx = xx.view(1,1,H,W).repeat(B,1,1,1)
- yy = yy.view(1,1,H,W).repeat(B,1,1,1)
- grid = torch.cat((xx,yy),1).float()
-
-
- #x = x.cuda()
- grid = grid.cuda()
- vgrid = grid + flo # B,2,H,W
-
- # scale grid to [-1,1]
- ##2019 code
- vgrid[:,0,:,:] = 2.0*vgrid[:,0,:,:].clone()/max(W-1,1)-1.0
- vgrid[:,1,:,:] = 2.0*vgrid[:,1,:,:].clone()/max(H-1,1)-1.0
-
- vgrid = vgrid.permute(0,2,3,1)
- output = nn.functional.grid_sample(x, vgrid,align_corners=True)
- mask = torch.autograd.Variable(torch.ones(x.size())).cuda()
- mask = nn.functional.grid_sample(mask, vgrid,align_corners=True)
-
- ##2019 author
- mask[mask<0.9999] = 0
- mask[mask>0] = 1
-
- ##2019 code
- # mask = torch.floor(torch.clamp(mask, 0 ,1))
-
- return output*mask, mask
-
-
-if __name__ == "__main__":
-
- parser = Options()
- args = parser.parse()
- print('*'*98)
-
-
- device = "cuda"
-
- transform = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5,0.5,0.5]),
- ])
-
- parser = argparse.ArgumentParser()
- parser.add_argument('--model', help="restore checkpoint")
- parser.add_argument('--small', action='store_true', help='use small model')
- parser.add_argument('--mixed_precision', action='store_true', help='use mixed precision')
-    parser.add_argument('--alternate_corr', action='store_true', help='use efficient correlation implementation')
-
- raft_model = torch.nn.DataParallel(RAFT(parser.parse_args(['--model', args.raft_path])))
- raft_model.load_state_dict(torch.load(args.raft_path))
-
- raft_model = raft_model.module
- raft_model.to(device)
- raft_model.eval()
-
- parsingpredictor = BiSeNet(n_classes=19)
- parsingpredictor.load_state_dict(torch.load(args.faceparsing_path, map_location=lambda storage, loc: storage))
- parsingpredictor.to(device).eval()
-
- down = Downsample(kernel=[1, 3, 3, 1], factor=2).to(device).eval()
-
- print('Load models successfully!')
-
- window = args.window_size
-
- video_cap = cv2.VideoCapture(args.video_path)
- num = int(video_cap.get(7))
-
- Is = []
- for i in range(num):
- success, frame = video_cap.read()
- if success == False:
- break
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
- with torch.no_grad():
- Is += [transform(frame).unsqueeze(dim=0).cpu()]
- video_cap.release()
-
- # enlarge frames for more accurate parsing maps and optical flows
- Is = F.upsample(torch.cat(Is, dim=0), scale_factor=2, mode='bilinear')
- Is_ = torch.cat((Is[0:window], Is, Is[-window:]), dim=0)
-
- print('Load video with %d frames successfully!'%(len(Is)))
-
- Ps = []
- for i in tqdm(range(len(Is))):
- with torch.no_grad():
- Ps += [parsingpredictor(2*Is[i:i+1].to(device))[0].detach().cpu()]
- Ps = torch.cat(Ps, dim=0)
- Ps_ = torch.cat((Ps[0:window], Ps, Ps[-window:]), dim=0)
-
- print('Predict parsing maps successfully!')
-
-
- # temporal weights of the (2*args.window_size+1) frames
- wt = torch.exp(-(torch.arange(2*window+1).float()-window)**2/(2*((window+0.5)**2))).reshape(2*window+1,1,1,1).to(device)
-
- parse = []
- for ii in tqdm(range(len(Is))):
- i = ii + window
- image2 = Is_[i-window:i+window+1].to(device)
- image1 = Is_[i].repeat(2*window+1,1,1,1).to(device)
- padder = InputPadder(image1.shape)
- image1, image2 = padder.pad(image1, image2)
- with torch.no_grad():
- flow_low, flow_up = raft_model((image1+1)*255.0/2, (image2+1)*255.0/2, iters=20, test_mode=True)
- output, mask = warp(torch.cat((image2, Ps_[i-window:i+window+1].to(device)), dim=1), flow_up)
- aligned_Is = output[:,0:3].detach()
- aligned_Ps = output[:,3:].detach()
- # the spatial weight
- ws = torch.exp(-((aligned_Is-image1)**2).mean(dim=1, keepdims=True)/(2*(0.2**2))) * mask[:,0:1]
- aligned_Ps[window] = Ps_[i].to(device)
-        # the weight between i and i should be 1.0
- ws[window,:,:,:] = 1.0
- weights = ws*wt
- weights = weights / weights.sum(dim=(0), keepdims=True)
- fused_Ps = (aligned_Ps * weights).sum(dim=0, keepdims=True)
- parse += [down(fused_Ps).detach().cpu()]
- parse = torch.cat(parse, dim=0)
-
- basename = os.path.basename(args.video_path).split('.')[0]
- np.save(os.path.join(args.output_path, basename+'_parsingmap.npy'), parse.numpy())
-
- print('Done!')
\ No newline at end of file
diff --git a/spaces/Jeff2323/ai-comic-factory/src/app/queries/getStyle.ts b/spaces/Jeff2323/ai-comic-factory/src/app/queries/getStyle.ts
deleted file mode 100644
index 649279a45615d5c2354d93ef297963908b86cf0a..0000000000000000000000000000000000000000
--- a/spaces/Jeff2323/ai-comic-factory/src/app/queries/getStyle.ts
+++ /dev/null
@@ -1,52 +0,0 @@
-import { createLlamaPrompt } from "@/lib/createLlamaPrompt"
-
-import { predict } from "./predict"
-import { Preset } from "../engine/presets"
-
-export const getStory = async ({
- preset,
- prompt = "",
-}: {
- preset: Preset;
- prompt: string;
-}) => {
-
- const query = createLlamaPrompt([
- {
- role: "system",
- content: [
- `You are a comic book author specialized in ${preset.llmPrompt}`,
-        `You are going to be asked to write a comic book page; your mission is to answer with a JSON array containing 4 items to describe the page (one item per panel).`,
-        `Each array item should be a comic book panel caption that describes the environment, era, characters, objects, textures, lighting.`,
-        `Be brief in your caption and don't add your own comments. Be straight to the point, and never reply with things like "Sure, I can.." etc.`
- ].filter(item => item).join("\n")
- },
- {
- role: "user",
- content: `The story is: ${prompt}`,
- }
- ])
-
-
- let result = ""
- try {
- result = await predict(query)
- if (!result.trim().length) {
- throw new Error("empty result!")
- }
- } catch (err) {
- console.log(`prediction of the story failed, trying again..`)
- try {
- result = await predict(query+".")
- if (!result.trim().length) {
- throw new Error("empty result!")
- }
- } catch (err) {
- console.error(`prediction of the story failed again!`)
- throw new Error(`failed to generate the story ${err}`)
- }
- }
-
- const tmp = result // result.split("Caption:").pop() || result
- return tmp.replaceAll("\n", ", ")
-}
\ No newline at end of file
diff --git a/spaces/Jeff2323/ai-comic-factory/src/components/ui/textarea.tsx b/spaces/Jeff2323/ai-comic-factory/src/components/ui/textarea.tsx
deleted file mode 100644
index af10d34eeae448c2614c67141f83a8748754332c..0000000000000000000000000000000000000000
--- a/spaces/Jeff2323/ai-comic-factory/src/components/ui/textarea.tsx
+++ /dev/null
@@ -1,24 +0,0 @@
-import * as React from "react"
-
-import { cn } from "@/lib/utils"
-
-export interface TextareaProps
-  extends React.TextareaHTMLAttributes<HTMLTextAreaElement> {}
-
-const Textarea = React.forwardRef<HTMLTextAreaElement, TextareaProps>(
- ({ className, ...props }, ref) => {
- return (
-      <textarea
-        className={cn(
-          "flex min-h-[80px] w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background placeholder:text-muted-foreground focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:cursor-not-allowed disabled:opacity-50",
-          className
-        )}
-        ref={ref}
-        {...props}
-      />
- )
- }
-)
-Textarea.displayName = "Textarea"
-
-export { Textarea }
diff --git a/spaces/JohnPinto/Human_Activity_Recognition-HAR-Video_Classification-HMDB51-Dataset/README.md b/spaces/JohnPinto/Human_Activity_Recognition-HAR-Video_Classification-HMDB51-Dataset/README.md
deleted file mode 100644
index aa269c029e2c335eb2adede0dc435c5a247a33eb..0000000000000000000000000000000000000000
--- a/spaces/JohnPinto/Human_Activity_Recognition-HAR-Video_Classification-HMDB51-Dataset/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Human Activity Recognition-HAR-Video Classification-HMDB51-Dataset
-emoji: 👁
-colorFrom: indigo
-colorTo: gray
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Jour/Translate-bloomz/app.py b/spaces/Jour/Translate-bloomz/app.py
deleted file mode 100644
index 5324fa3c69ab40b6a58334464c75927093b7eb23..0000000000000000000000000000000000000000
--- a/spaces/Jour/Translate-bloomz/app.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import gradio as gr
-import requests
-import json
-import os
-
-
-LANGUAGES = ['Akan', 'Arabic', ' Assamese', 'Bambara', 'Bengali', 'Catalan', 'English', 'Spanish', ' Basque', 'French', ' Gujarati', 'Hindi',
-'Indonesian', 'Igbo', 'Kikuyu', 'Kannada', 'Ganda', 'Lingala', 'Malayalam', 'Marathi', 'Nepali', 'Chichewa', 'Oriya', 'Panjabi', 'Portuguese',
-'Kirundi', 'Kinyarwanda', 'Shona', 'Sotho', 'Swahili', 'Tamil', 'Telugu', 'Tswana', 'Tsonga', 'Twi', 'Urdu', 'Viêt Namese', 'Wolof', 'Xhosa',
-'Yoruba', 'Chinese', 'Zulu']
-
-API_URL = "https://api-inference.huggingface.co/models/bigscience/bloomz"
-
-
-def translate(output, text):
- """Translate text from input language to output language"""
-
-    instruction = f"""Translate to {output}: {text}\nTranslation: """
-
- json_ = {
- "inputs": instruction,
- "parameters": {
- "return_full_text": True,
- "do_sample": False,
- "max_new_tokens": 250,
- },
- "options": {
- "use_cache": True,
- "wait_for_model": True,
- },
- }
- response = requests.request("POST", API_URL, json=json_)
- output = response.json()
-
- return output
-
-demo = gr.Blocks()
-
-with demo:
- gr.Markdown("Translation with Bloom ")
-    gr.Markdown("Translation in many languages with mt0-xxl ")
-
- with gr.Row():
- output_lang = gr.Dropdown(LANGUAGES, value='French', label='Select output language')
-
- input_text = gr.Textbox(label="Input", lines=6)
- output_text = gr.Textbox(lines=6, label="Output")
-
-    button = gr.Button("translate")
-    button.click(translate, inputs=[output_lang, input_text], outputs=output_text)
-
-demo.launch(enable_queue=True, debug=True)
diff --git a/spaces/Kaludi/AI-Assistant-revChatGPT_App/README.md b/spaces/Kaludi/AI-Assistant-revChatGPT_App/README.md
deleted file mode 100644
index 8ad46ee363af3e0adc46d7fcb2948d83b770d648..0000000000000000000000000000000000000000
--- a/spaces/Kaludi/AI-Assistant-revChatGPT_App/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: AI Assistant - revChatGPT
-emoji: 🤖
-colorFrom: purple
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/layers_33966KB.py b/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/layers_33966KB.py
deleted file mode 100644
index a38b7bb3ae3136b07eadfc2db445fef4c2de186b..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/layers_33966KB.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv6 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv7 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- feat6 = self.conv6(x)
- feat7 = self.conv7(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1)
- bottle = self.bottleneck(out)
- return bottle
diff --git a/spaces/Kayson/InstructDiffusion/main.py b/spaces/Kayson/InstructDiffusion/main.py
deleted file mode 100644
index 52f8a361561f3ba3452b12d374e924b2d468bd4e..0000000000000000000000000000000000000000
--- a/spaces/Kayson/InstructDiffusion/main.py
+++ /dev/null
@@ -1,566 +0,0 @@
-# --------------------------------------------------------
-# InstructDiffusion
-# Based on instruct-pix2pix (https://github.com/timothybrooks/instruct-pix2pix)
-# Removed Pytorch-lightning and supported deepspeed by Zigang Geng (zigang@mail.ustc.edu.cn)
-# --------------------------------------------------------
-
-import argparse, os, sys, datetime, glob
-import numpy as np
-import time
-import json
-import pickle
-import wandb
-import deepspeed
-
-from packaging import version
-from omegaconf import OmegaConf
-from functools import partial
-from PIL import Image
-
-from timm.utils import AverageMeter
-
-import torch
-import torchvision
-import torch.cuda.amp as amp
-import torch.distributed as dist
-import torch.backends.cudnn as cudnn
-from torch.utils.data import DataLoader, Dataset, ConcatDataset
-sys.path.append("./stable_diffusion")
-
-from ldm.data.base import Txt2ImgIterableBaseDataset
-from ldm.util import instantiate_from_config
-from ldm.modules.ema import LitEma
-from utils.logger import create_logger
-from utils.utils import load_checkpoint, save_checkpoint, get_grad_norm, auto_resume_helper
-from utils.deepspeed import create_ds_config
-
-
-def wandb_log(*args, **kwargs):
- if dist.get_rank() == 0:
- wandb.log(*args, **kwargs)
-
-
-def get_parser(**parser_kwargs):
- def str2bool(v):
- if isinstance(v, bool):
- return v
- if v.lower() in ("yes", "true", "t", "y", "1"):
- return True
- elif v.lower() in ("no", "false", "f", "n", "0"):
- return False
- else:
- raise argparse.ArgumentTypeError("Boolean value expected.")
-
- parser = argparse.ArgumentParser(**parser_kwargs)
- parser.add_argument(
- "-n",
- "--name",
- type=str,
- const=True,
- default="",
- nargs="?",
- help="postfix for logdir",
- )
- parser.add_argument(
- "-r",
- "--resume",
- type=str,
- const=True,
- default="",
- nargs="?",
- help="resume from logdir or checkpoint in logdir",
- )
- parser.add_argument(
- "-b",
- "--base",
- nargs="*",
- metavar="base_config.yaml",
- help="paths to base configs. Loaded from left-to-right. "
- "Parameters can be overwritten or added with command-line options of the form `--key value`.",
- default=list(),
- )
- parser.add_argument(
- "-t",
- "--train",
- type=str2bool,
- const=True,
- default=False,
- nargs="?",
- help="train",
- )
- parser.add_argument(
- "--no-test",
- type=str2bool,
- const=True,
- default=False,
- nargs="?",
- help="disable test",
- )
- parser.add_argument(
- "-p",
- "--project",
- help="name of new or path to existing project"
- )
- parser.add_argument(
- "-d",
- "--debug",
- type=str2bool,
- nargs="?",
- const=True,
- default=False,
- help="enable post-mortem debugging",
- )
- parser.add_argument(
- "-s",
- "--seed",
- type=int,
- default=23,
- help="seed for seed_everything",
- )
- parser.add_argument(
- "-f",
- "--postfix",
- type=str,
- default="",
- help="post-postfix for default name",
- )
- parser.add_argument(
- "-l",
- "--logdir",
- type=str,
- default="logs",
- help="directory for logging dat shit",
- )
- parser.add_argument(
- "--scale_lr",
- action="store_true",
- default=False,
- help="scale base-lr by ngpu * batch_size * n_accumulate",
- )
- parser.add_argument(
- "--amd",
- action="store_true",
- default=False,
- help="amd",
- )
- parser.add_argument(
- "--local_rank",
- type=int,
- # required=False,
- default=int(os.environ.get('LOCAL_RANK', 0)),
- help="local rank for DistributedDataParallel",
- )
- return parser
-
-
-class WrappedDataset(Dataset):
- """Wraps an arbitrary object with __len__ and __getitem__ into a pytorch dataset"""
-
- def __init__(self, dataset):
- self.data = dataset
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, idx):
- return self.data[idx]
-
-
-class DataModuleFromConfig():
- def __init__(self, batch_size, train=None, validation=None, test=None, predict=None,
- wrap=False, num_workers=None, shuffle_test_loader=False, use_worker_init_fn=False,
- shuffle_val_dataloader=False):
- super().__init__()
- self.batch_size = batch_size
- self.dataset_configs = dict()
- self.num_workers = num_workers if num_workers is not None else batch_size * 2
- self.use_worker_init_fn = use_worker_init_fn
- if train is not None:
- if "target" in train:
- self.dataset_configs["train"] = train
- self.train_dataloader = self._train_dataloader
- else:
- for ds in train:
- ds_name = str([key for key in ds.keys()][0])
- self.dataset_configs[ds_name] = ds
- self.train_dataloader = self._train_concat_dataloader
-
- if validation is not None:
- self.dataset_configs["validation"] = validation
- self.val_dataloader = partial(self._val_dataloader, shuffle=shuffle_val_dataloader)
- if test is not None:
- self.dataset_configs["test"] = test
- self.test_dataloader = partial(self._test_dataloader, shuffle=shuffle_test_loader)
- if predict is not None:
- self.dataset_configs["predict"] = predict
- self.predict_dataloader = self._predict_dataloader
- self.wrap = wrap
-
- def prepare_data(self):
- for data_cfg in self.dataset_configs.values():
- instantiate_from_config(data_cfg)
-
- def setup(self, stage=None):
- self.datasets = dict(
- (k, instantiate_from_config(self.dataset_configs[k]))
- for k in self.dataset_configs)
- if self.wrap:
- for k in self.datasets:
- self.datasets[k] = WrappedDataset(self.datasets[k])
-
- def _train_concat_dataloader(self):
- is_iterable_dataset = isinstance(self.datasets['ds1'], Txt2ImgIterableBaseDataset)
-
- if is_iterable_dataset or self.use_worker_init_fn:
- init_fn = worker_init_fn
- else:
- init_fn = None
-
- concat_dataset = []
- for ds in self.datasets.keys():
- concat_dataset.append(self.datasets[ds])
-
- concat_dataset = ConcatDataset(concat_dataset)
- sampler_train = torch.utils.data.DistributedSampler(
- concat_dataset, num_replicas=dist.get_world_size(), rank=dist.get_rank(), shuffle=True
- )
- return DataLoader(concat_dataset, batch_size=self.batch_size, sampler=sampler_train,
- num_workers=self.num_workers, worker_init_fn=init_fn, persistent_workers=True)
-
- def _train_dataloader(self):
- is_iterable_dataset = isinstance(self.datasets['train'], Txt2ImgIterableBaseDataset)
- if is_iterable_dataset or self.use_worker_init_fn:
- init_fn = worker_init_fn
- else:
- init_fn = None
-
- sampler_train = torch.utils.data.DistributedSampler(
- self.datasets["train"], num_replicas=dist.get_world_size(), rank=dist.get_rank(), shuffle=True
- )
- return DataLoader(self.datasets["train"], batch_size=self.batch_size, sampler=sampler_train,
- num_workers=self.num_workers, worker_init_fn=init_fn, persistent_workers=True)
-
- def _val_dataloader(self, shuffle=False):
- if isinstance(self.datasets['validation'], Txt2ImgIterableBaseDataset) or self.use_worker_init_fn:
- init_fn = worker_init_fn
- else:
- init_fn = None
- return DataLoader(self.datasets["validation"],
- batch_size=self.batch_size,
- num_workers=self.num_workers,
- worker_init_fn=init_fn,
- shuffle=shuffle, persistent_workers=True)
-
- def _test_dataloader(self, shuffle=False):
- is_iterable_dataset = isinstance(self.datasets['train'], Txt2ImgIterableBaseDataset)
- if is_iterable_dataset or self.use_worker_init_fn:
- init_fn = worker_init_fn
- else:
- init_fn = None
-
- # do not shuffle dataloader for iterable dataset
- shuffle = shuffle and (not is_iterable_dataset)
-
- return DataLoader(self.datasets["test"], batch_size=self.batch_size,
- num_workers=self.num_workers, worker_init_fn=init_fn, shuffle=shuffle, persistent_workers=True)
-
- def _predict_dataloader(self, shuffle=False):
- if isinstance(self.datasets['predict'], Txt2ImgIterableBaseDataset) or self.use_worker_init_fn:
- init_fn = worker_init_fn
- else:
- init_fn = None
- return DataLoader(self.datasets["predict"], batch_size=self.batch_size,
- num_workers=self.num_workers, worker_init_fn=init_fn, persistent_workers=True)
-
-
-def train_one_epoch(config, model, model_ema, data_loader, val_data_loader, optimizer, epoch, lr_scheduler, scaler):
- model.train()
- optimizer.zero_grad()
-
- num_steps = len(data_loader)
- accumul_steps = config.trainer.accumulate_grad_batches
- batch_time = AverageMeter()
- loss_meter = AverageMeter()
- val_loss_meter = AverageMeter()
- norm_meter = AverageMeter()
- loss_scale_meter = AverageMeter()
- loss_scale_meter_min = AverageMeter()
-
- start = time.time()
- end = time.time()
- for idx, batch in enumerate(data_loader):
- batch_size = batch['edited'].shape[0]
-
- if config.model.params.deepspeed != '':
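-            # DeepSpeed handles loss scaling, backward and the optimizer step internally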
- loss, _ = model(batch, idx, accumul_steps)
- model.backward(loss)
-
- model.step()
- loss_scale = optimizer.cur_scale
- grad_norm = model.get_global_grad_norm()
-
- with torch.no_grad():
- if idx % config.trainer.accumulate_grad_batches == 0:
- model_ema(model)
-
- loss_number = loss.item()
- else:
- with amp.autocast(enabled=config.model.params.fp16):
- loss, _ = model(batch, idx, accumul_steps)
-
- if config.trainer.accumulate_grad_batches > 1:
- loss = loss / config.trainer.accumulate_grad_batches
- scaler.scale(loss).backward()
- # loss.backward()
- if config.trainer.clip_grad > 0.0:
- scaler.unscale_(optimizer)
- grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), config.trainer.clip_grad)
- else:
- grad_norm = get_grad_norm(model.parameters())
- if (idx + 1) % config.trainer.accumulate_grad_batches == 0:
- scaler.step(optimizer)
- optimizer.zero_grad()
- scaler.update()
- # scaler.unscale_grads()
- # optimizer.step()
- # optimizer.zero_grad()
- # lr_scheduler.step_update(epoch * num_steps + idx)
- else:
- optimizer.zero_grad()
- scaler.scale(loss).backward()
- if config.trainer.clip_grad > 0.0:
- scaler.unscale_(optimizer)
- grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), config.trainer.clip_grad)
- else:
- grad_norm = get_grad_norm(model.parameters())
- scaler.step(optimizer)
- scaler.update()
- # lr_scheduler.step_update(epoch * num_steps + idx)
-
- loss_scale = scaler.get_scale()
- loss_number = loss.item() * config.trainer.accumulate_grad_batches
-
- torch.cuda.synchronize()
-
- loss_meter.update(loss_number, batch_size)
- if grad_norm is not None:
- norm_meter.update(grad_norm)
- else:
- norm_meter.update(0.0)
-
- loss_scale_meter.update(loss_scale)
- # loss_scale_meter.update(0.0)
- batch_time.update(time.time() - end)
- end = time.time()
-
- if idx % 100 == 0:
- lr = optimizer.param_groups[0]['lr']
- memory_used = torch.cuda.max_memory_allocated() / (1024.0 * 1024.0)
- etas = batch_time.avg * (num_steps - idx)
- logger.info(
- f'Train: [{epoch}][{idx}/{num_steps}]\t'
- f'eta {datetime.timedelta(seconds=int(etas))} lr {lr:.6f}\t'
- f'time {batch_time.val:.4f} ({batch_time.avg:.4f})\t'
- f'loss {loss_meter.val:.4f} ({loss_meter.avg:.4f})\t'
- f'grad_norm {norm_meter.val:.4f} ({norm_meter.avg:.4f})\t'
- f'loss_scale {loss_scale_meter.val:.4f} ({loss_scale_meter.avg:.4f})\t'
- f'mem {memory_used:.0f}MB')
-
- if (epoch * num_steps + idx) % 100 == 0:
- log_message = dict(
- lr=optimizer.param_groups[0]['lr'],
- time=batch_time.val,
- epoch=epoch,
- iter=idx,
- loss=loss_meter.val,
- grad_norm=norm_meter.val,
- loss_scale=loss_scale_meter.val,
- memory=torch.cuda.max_memory_allocated() / (1024.0 * 1024.0),
- global_iter=epoch * num_steps + idx)
-
- # log_message.update({'ref_img': wandb.Image(unnormalize(img[:8].cpu().float())), 'mask': wandb.Image(mask[:8].cpu().float().unsqueeze(1))})
- # if x_rec is not None:
- # log_message.update({'rec_img': wandb.Image(unnormalize(x_rec[:8].cpu().float()))})
- wandb_log(
- data=log_message,
- step=epoch * num_steps + idx,
- )
-
- if idx == num_steps - 1:
- with torch.no_grad():
- model_ema.store(model.parameters())
- model_ema.copy_to(model)
- for val_idx, batch in enumerate(val_data_loader):
- batch_size = batch['edited'].shape[0]
-
- loss, _ = model(batch, -1, 1)
-
- loss_number = loss.item()
- val_loss_meter.update(loss_number, batch_size)
- if val_idx % 10 == 0:
- logger.info(
- f'Val: [{val_idx}/{len(val_data_loader)}]\t'
- f'loss {val_loss_meter.val:.4f} ({val_loss_meter.avg:.4f})\t')
- if val_idx == 50:
- break
- model_ema.restore(model.parameters())
-
- epoch_time = time.time() - start
- logger.info(f"EPOCH {epoch} training takes {datetime.timedelta(seconds=int(epoch_time))}")
-
-
-if __name__ == "__main__":
-
- now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
-
- # add cwd for convenience and to make classes in this file available when
- # running as `python main.py`
- # (in particular `main.DataModuleFromConfig`)
- sys.path.append(os.getcwd())
-
- parser = get_parser()
- opt, unknown = parser.parse_known_args()
-
- assert opt.name
- cfg_fname = os.path.split(opt.base[0])[-1]
- cfg_name = os.path.splitext(cfg_fname)[0]
- nowname = f"{cfg_name}_{opt.name}"
- logdir = os.path.join(opt.logdir, nowname)
-
- if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ:
- rank = int(os.environ["RANK"])
- world_size = int(os.environ['WORLD_SIZE'])
- print(f"RANK and WORLD_SIZE in environ: {rank}/{world_size}")
- else:
- rank = -1
- world_size = -1
- if opt.amd:
- os.environ["CUDA_VISIBLE_DEVICES"] = str(opt.local_rank)
- torch.distributed.init_process_group(backend='gloo', init_method='env://', world_size=world_size, rank=rank)
- else:
- torch.cuda.set_device(opt.local_rank)
- torch.distributed.init_process_group(backend='nccl', init_method='env://', world_size=world_size, rank=rank)
- torch.distributed.barrier()
-
- seed = opt.seed + dist.get_rank()
- torch.manual_seed(seed)
- np.random.seed(seed)
- cudnn.benchmark = True
-
- ckptdir = os.path.join(logdir, "checkpoints")
- cfgdir = os.path.join(logdir, "configs")
-
- os.makedirs(logdir, exist_ok=True)
- os.makedirs(ckptdir, exist_ok=True)
- os.makedirs(cfgdir, exist_ok=True)
-
- # init and save configs
- # config: the configs in the config file
- configs = [OmegaConf.load(cfg) for cfg in opt.base]
- cli = OmegaConf.from_dotlist(unknown)
- config = OmegaConf.merge(*configs, cli)
-
- if config.model.params.deepspeed != '':
- create_ds_config(opt, config, cfgdir)
-
- if dist.get_rank() == 0:
- run = wandb.init(
- id=nowname,
- name=nowname,
- project='readoutpose',
- config=OmegaConf.to_container(config, resolve=True),
- )
-
- logger = create_logger(output_dir=logdir, dist_rank=dist.get_rank(), name=f"{nowname}")
-
- resume_file = auto_resume_helper(config, ckptdir)
- if resume_file:
- resume = True
- logger.info(f'resume checkpoint in {resume_file}')
- else:
- resume = False
- logger.info(f'no checkpoint found in {ckptdir}, ignoring auto resume')
-
- # model
- model = instantiate_from_config(config.model)
- model_ema = LitEma(model, decay_resume=config.model.params.get('ema_resume', 0.9999))
-
- # data
- data = instantiate_from_config(config.data)
- # NOTE according to https://pytorch-lightning.readthedocs.io/en/latest/datamodules.html
- # calling these ourselves should not be necessary but it is.
- # lightning still takes care of proper multiprocessing though
- data.prepare_data()
- data.setup()
- data_loader_train = data.train_dataloader()
- data_loader_val = data.val_dataloader()
-
- print("#### Data #####")
- for k in data.datasets:
- print(f"{k}, {data.datasets[k].__class__.__name__}, {len(data.datasets[k])}")
-
- # configure learning rate
- bs, base_lr = config.data.params.batch_size, config.model.base_learning_rate
- ngpu = dist.get_world_size()
- if 'accumulate_grad_batches' in config.trainer:
- accumulate_grad_batches = config.trainer.accumulate_grad_batches
- else:
- accumulate_grad_batches = 1
- print(f"accumulate_grad_batches = {accumulate_grad_batches}")
-
- if opt.scale_lr:
- model.learning_rate = accumulate_grad_batches * ngpu * bs * base_lr
- print(
- "Setting learning rate to {:.2e} = {} (accumulate_grad_batches) * {} (num_gpus) * {} (batchsize) * {:.2e} (base_lr)".format(
- model.learning_rate, accumulate_grad_batches, ngpu, bs, base_lr))
- else:
- model.learning_rate = base_lr
- print("++++ NOT USING LR SCALING ++++")
- print(f"Setting learning rate to {model.learning_rate:.2e}")
-
- if not opt.amd:
- model.cuda()
-
- if config.model.params.fp16 and config.model.params.deepspeed == '':
- scaler = amp.GradScaler()
- param_groups = model.parameters()
- else:
- scaler = None
- param_groups = model.parameters()
-
- if config.model.params.deepspeed != '':
- model, optimizer, _, _ = deepspeed.initialize(
- args=config,
- model=model,
- model_parameters=param_groups,
- dist_init_required=False,
- )
- for name, param in model.named_parameters():
- param.global_name = name
- model_without_ddp = model
- lr_scheduler = None
- model_ema = model_ema.to(next(model.parameters()).device)
- else:
- optimizer, lr_scheduler = model.configure_optimizers()
- model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[opt.local_rank], broadcast_buffers=False)
- model_without_ddp = model.module
- # print(optimizer.param_groups[1])
- if opt.resume != '':
- resume_file = opt.resume
- if resume_file:
- _, start_epoch = load_checkpoint(resume_file, config, model_without_ddp, model_ema, optimizer, lr_scheduler, scaler, logger)
- else:
- start_epoch = 0
-
- logger.info("Start training")
- start_time = time.time()
-
- for epoch in range(start_epoch, config.trainer.max_epochs):
- data_loader_train.sampler.set_epoch(epoch)
- train_one_epoch(config, model, model_ema, data_loader_train, data_loader_val, optimizer, epoch, lr_scheduler, scaler)
- if epoch % config.trainer.save_freq == 0:
- save_checkpoint(ckptdir, config, epoch, model_without_ddp, model_ema, 0., optimizer, lr_scheduler, scaler, logger)
-
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- logger.info('Training time {}'.format(total_time_str))
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/encoder/data_objects/__init__.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/encoder/data_objects/__init__.py
deleted file mode 100644
index ef04ade68544d0477a7f6deb4e7d51e97f592910..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/encoder/data_objects/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataset
-from encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataLoader
diff --git a/spaces/Khaled27/Naptah/app.py b/spaces/Khaled27/Naptah/app.py
deleted file mode 100644
index cc77177e392a2d5e24ef05725ac2125f46d90755..0000000000000000000000000000000000000000
--- a/spaces/Khaled27/Naptah/app.py
+++ /dev/null
@@ -1,48 +0,0 @@
-from flask import Flask, request
-from transformers import AutoModelForImageClassification
-from transformers import AutoImageProcessor
-from PIL import Image
-from io import BytesIO
-import os
-import torch
-
-app = Flask(__name__)
-
-model = AutoModelForImageClassification.from_pretrained(
- './my_model')
-image_processor = AutoImageProcessor.from_pretrained(
- "microsoft/resnet-50")
-
-
-@app.route('/upload_image', methods=['POST'])
-def upload_image():
- # Get the image file from the request
- image_file = request.files['image'].stream
-
- # image = Image.open(BytesIO(image_file.read()))
- image = Image.open(image_file)
- inputs = image_processor(image, return_tensors="pt")
-
- with torch.no_grad():
- logits = model(**inputs).logits
-
- predicted_label = logits.argmax(-1).item()
-
- disease = model.config.id2label[predicted_label]
-
-
- # You can perform additional operations with the image here
- # ...
-
- return disease
-
-
-
-@app.route('/', methods=['GET'])
-def hi():
- return "NAPTAH Mobile Application"
-
-
-
-
-app.run(host='0.0.0.0', port=7860)
\ No newline at end of file
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/necks/channel_mapper.py b/spaces/KyanChen/RSPrompter/mmdet/models/necks/channel_mapper.py
deleted file mode 100644
index 9700a2b3e7296661cc0c988d86152fe8fb03eaf6..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/necks/channel_mapper.py
+++ /dev/null
@@ -1,106 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import List, Tuple
-
-import torch.nn as nn
-from mmcv.cnn import ConvModule
-from mmengine.model import BaseModule
-from torch import Tensor
-
-from mmdet.registry import MODELS
-from mmdet.utils import OptConfigType, OptMultiConfig
-
-
-@MODELS.register_module()
-class ChannelMapper(BaseModule):
- """Channel Mapper to reduce/increase channels of backbone features.
-
- This is used to reduce/increase channels of backbone features.
-
- Args:
- in_channels (List[int]): Number of input channels per scale.
- out_channels (int): Number of output channels (used at each scale).
- kernel_size (int, optional): kernel_size for reducing channels (used
- at each scale). Default: 3.
- conv_cfg (:obj:`ConfigDict` or dict, optional): Config dict for
- convolution layer. Default: None.
- norm_cfg (:obj:`ConfigDict` or dict, optional): Config dict for
- normalization layer. Default: None.
- act_cfg (:obj:`ConfigDict` or dict, optional): Config dict for
- activation layer in ConvModule. Default: dict(type='ReLU').
- num_outs (int, optional): Number of output feature maps. There would
- be extra_convs when num_outs larger than the length of in_channels.
- init_cfg (:obj:`ConfigDict` or dict or list[:obj:`ConfigDict` or dict],
- optional): Initialization config dict.
- Example:
- >>> import torch
- >>> in_channels = [2, 3, 5, 7]
- >>> scales = [340, 170, 84, 43]
- >>> inputs = [torch.rand(1, c, s, s)
- ... for c, s in zip(in_channels, scales)]
- >>> self = ChannelMapper(in_channels, 11, 3).eval()
- >>> outputs = self.forward(inputs)
- >>> for i in range(len(outputs)):
- ... print(f'outputs[{i}].shape = {outputs[i].shape}')
- outputs[0].shape = torch.Size([1, 11, 340, 340])
- outputs[1].shape = torch.Size([1, 11, 170, 170])
- outputs[2].shape = torch.Size([1, 11, 84, 84])
- outputs[3].shape = torch.Size([1, 11, 43, 43])
- """
-
- def __init__(
- self,
- in_channels: List[int],
- out_channels: int,
- kernel_size: int = 3,
- conv_cfg: OptConfigType = None,
- norm_cfg: OptConfigType = None,
- act_cfg: OptConfigType = dict(type='ReLU'),
- num_outs: int = None,
- init_cfg: OptMultiConfig = dict(
- type='Xavier', layer='Conv2d', distribution='uniform')
- ) -> None:
- super().__init__(init_cfg=init_cfg)
- assert isinstance(in_channels, list)
- self.extra_convs = None
- if num_outs is None:
- num_outs = len(in_channels)
- self.convs = nn.ModuleList()
- for in_channel in in_channels:
- self.convs.append(
- ConvModule(
- in_channel,
- out_channels,
- kernel_size,
- padding=(kernel_size - 1) // 2,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg))
- if num_outs > len(in_channels):
- self.extra_convs = nn.ModuleList()
- for i in range(len(in_channels), num_outs):
- if i == len(in_channels):
- in_channel = in_channels[-1]
- else:
- in_channel = out_channels
- self.extra_convs.append(
- ConvModule(
- in_channel,
- out_channels,
- 3,
- stride=2,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg))
-
- def forward(self, inputs: Tuple[Tensor]) -> Tuple[Tensor]:
- """Forward function."""
- assert len(inputs) == len(self.convs)
- outs = [self.convs[i](inputs[i]) for i in range(len(inputs))]
- if self.extra_convs:
- for i in range(len(self.extra_convs)):
- if i == 0:
- outs.append(self.extra_convs[0](inputs[-1]))
- else:
- outs.append(self.extra_convs[i](outs[-1]))
- return tuple(outs)
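
The docstring example above only exercises the case where `num_outs` equals the number of inputs. A short sketch of the extra-conv path, where `num_outs` is larger and the additional levels come from stride-2 convolutions on top of the last map; this assumes an installed mmdet whose `ChannelMapper` matches the class above (the import path can differ between mmdet versions):

```python
# Sketch: ChannelMapper with two extra output levels built from stride-2 convs.
import torch
from mmdet.models.necks import ChannelMapper  # assumes mmdet is installed

in_channels = [8, 16, 32]
sizes = [64, 32, 16]
feats = [torch.rand(1, c, s, s) for c, s in zip(in_channels, sizes)]

neck = ChannelMapper(in_channels, out_channels=24, kernel_size=3, num_outs=5).eval()
with torch.no_grad():
    outs = neck(tuple(feats))

for i, o in enumerate(outs):
    print(i, tuple(o.shape))
# Levels 0-2 keep the input resolutions; levels 3 and 4 are each downsampled by 2:
# (1, 24, 64, 64), (1, 24, 32, 32), (1, 24, 16, 16), (1, 24, 8, 8), (1, 24, 4, 4)
```
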
diff --git a/spaces/LeoLeoLeo1/ChuanhuChatGPT/Dockerfile b/spaces/LeoLeoLeo1/ChuanhuChatGPT/Dockerfile
deleted file mode 100644
index 8cbd335b09b1d1975bfd83a053b5fcaf398147ea..0000000000000000000000000000000000000000
--- a/spaces/LeoLeoLeo1/ChuanhuChatGPT/Dockerfile
+++ /dev/null
@@ -1,14 +0,0 @@
-FROM python:3.9 as builder
-RUN apt-get update && apt-get install -y build-essential
-COPY requirements.txt .
-RUN pip install --user -r requirements.txt
-
-FROM python:3.9
-MAINTAINER iskoldt
-COPY --from=builder /root/.local /root/.local
-ENV PATH=/root/.local/bin:$PATH
-COPY . /app
-WORKDIR /app
-ENV my_api_key empty
-ENV dockerrun yes
-CMD python3 -u ChuanhuChatbot.py 2>&1 | tee /var/log/application.log
diff --git a/spaces/Liu-LAB/GPT-academic/crazy_functions/CodeInterpreter.py b/spaces/Liu-LAB/GPT-academic/crazy_functions/CodeInterpreter.py
deleted file mode 100644
index 3c970f3571f6b6a188c8166a9c26c7f07901dd21..0000000000000000000000000000000000000000
--- a/spaces/Liu-LAB/GPT-academic/crazy_functions/CodeInterpreter.py
+++ /dev/null
@@ -1,231 +0,0 @@
-from collections.abc import Callable, Iterable, Mapping
-from typing import Any
-from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc, promote_file_to_downloadzone, clear_file_downloadzone
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-from .crazy_utils import input_clipping, try_install_deps
-from multiprocessing import Process, Pipe
-import os
-import time
-
-template = """
-```python
-import ... # Put dependencies here, e.g. import numpy as np
-
-class TerminalFunction(object): # Do not change the name of this class; it must be `TerminalFunction`
-
-    def run(self, path): # The function must be named `run` and take exactly one positional argument.
- # rewrite the function you have just written here
- ...
- return generated_file_path
-```
-"""
-
-def inspect_dependency(chatbot, history):
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return True
-
-def get_code_block(reply):
- import re
- pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
- matches = re.findall(pattern, reply) # find all code blocks in text
- if len(matches) == 1:
- return matches[0].strip('python') # code block
- for match in matches:
- if 'class TerminalFunction' in match:
- return match.strip('python') # code block
- raise RuntimeError("GPT is not generating proper code.")
-
-def gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history):
-    # compose the input prompt
- prompt_compose = [
- f'Your job:\n'
- f'1. write a single Python function, which takes a path of a `{file_type}` file as the only argument and returns a `string` containing the result of analysis or the path of generated files. \n',
-        f"2. You should write this function to perform the following task: " + txt + "\n",
-        f"3. Wrap the output Python function in a markdown code block."
- ]
- i_say = "".join(prompt_compose)
- demo = []
-
-    # step 1: ask the model to write the function
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say, inputs_show_user=i_say,
- llm_kwargs=llm_kwargs, chatbot=chatbot, history=demo,
- sys_prompt= r"You are a programmer."
- )
- history.extend([i_say, gpt_say])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # step 2: rewrite the function to fit the template
-    prompt_compose = [
-        "If the previous stage was successful, rewrite the function you have just written to satisfy the following template: \n",
-        template
-    ]
-    i_say = "".join(prompt_compose); inputs_show_user = "If the previous stage was successful, rewrite the function you have just written to satisfy the executable template. "
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say, inputs_show_user=inputs_show_user,
- llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
- sys_prompt= r"You are a programmer."
- )
- code_to_return = gpt_say
- history.extend([i_say, gpt_say])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # # step 3 (disabled): ask which dependencies to install via try_install_deps
- # i_say = "Please list to packages to install to run the code above. Then show me how to use `try_install_deps` function to install them."
- # i_say += 'For instance. `try_install_deps(["opencv-python", "scipy", "numpy"])`'
- # installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
- # inputs=i_say, inputs_show_user=inputs_show_user,
- # llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
- # sys_prompt= r"You are a programmer."
- # )
-    # # # step 3 (disabled variant): ask for a pip install command instead
- # i_say = "Show me how to use `pip` to install packages to run the code above. "
- # i_say += 'For instance. `pip install -r opencv-python scipy numpy`'
- # installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
- # inputs=i_say, inputs_show_user=i_say,
- # llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
- # sys_prompt= r"You are a programmer."
- # )
- installation_advance = ""
-
- return code_to_return, installation_advance, txt, file_type, llm_kwargs, chatbot, history
-
-def make_module(code):
- module_file = 'gpt_fn_' + gen_time_str().replace('-','_')
- with open(f'gpt_log/{module_file}.py', 'w', encoding='utf8') as f:
- f.write(code)
-
- def get_class_name(class_string):
- import re
- # Use regex to extract the class name
- class_name = re.search(r'class (\w+)\(', class_string).group(1)
- return class_name
-
- class_name = get_class_name(code)
- return f"gpt_log.{module_file}->{class_name}"
-
-def init_module_instance(module):
- import importlib
- module_, class_ = module.split('->')
- init_f = getattr(importlib.import_module(module_), class_)
- return init_f()
-
-def for_immediate_show_off_when_possible(file_type, fp, chatbot):
- if file_type in ['png', 'jpg']:
- image_path = os.path.abspath(fp)
- chatbot.append(['这是一张图片, 展示如下:',
- f'本地文件地址: `{image_path}` '+
- f'本地文件预览: '
- ])
- return chatbot
-
-def subprocess_worker(instance, file_path, return_dict):
- return_dict['result'] = instance.run(file_path)
-
-def have_any_recent_upload_files(chatbot):
- _5min = 5 * 60
- if not chatbot: return False # chatbot is None
- most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
- if not most_recent_uploaded: return False # most_recent_uploaded is None
- if time.time() - most_recent_uploaded["time"] < _5min: return True # most_recent_uploaded is new
- else: return False # most_recent_uploaded is too old
-
-def get_recent_file_prompt_support(chatbot):
- most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
- path = most_recent_uploaded['path']
- return path
-
-@CatchException
-def 虚空终端CodeInterpreter(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- """
-    txt             text entered by the user in the input box, e.g. a passage to translate or a path containing files to process
-    llm_kwargs      GPT model parameters such as temperature and top_p; usually passed through unchanged
-    plugin_kwargs   parameters for the plugin; currently unused
-    chatbot         handle of the chat display box, used to show output to the user
-    history         chat history, i.e. the preceding context
-    system_prompt   silent system prompt passed to GPT
-    web_port        port on which the application is currently running
- """
- raise NotImplementedError
-
-    # clear the history to prevent input overflow
- history = []; clear_file_downloadzone(chatbot)
-
-    # basic info: feature description and contributors
- chatbot.append([
- "函数插件功能?",
- "CodeInterpreter开源版, 此插件处于开发阶段, 建议暂时不要使用, 插件初始化中 ..."
- ])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- if have_any_recent_upload_files(chatbot):
- file_path = get_recent_file_prompt_support(chatbot)
- else:
- chatbot.append(["文件检索", "没有发现任何近期上传的文件。"])
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # read the file
- if ("recently_uploaded_files" in plugin_kwargs) and (plugin_kwargs["recently_uploaded_files"] == ""): plugin_kwargs.pop("recently_uploaded_files")
- recently_uploaded_files = plugin_kwargs.get("recently_uploaded_files", None)
- file_path = recently_uploaded_files[-1]
- file_type = file_path.split('.')[-1]
-
-    # sanity check on the user input
- if 'private_upload' in txt:
- chatbot.append([
- "...",
- f"请在输入框内填写需求,然后再次点击该插件(文件路径 {file_path} 已经被记忆)"
- ])
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # start the real work
-    for j in range(5):  # retry at most 5 times
- try:
- code, installation_advance, txt, file_type, llm_kwargs, chatbot, history = \
- yield from gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history)
- code = get_code_block(code)
- res = make_module(code)
- instance = init_module_instance(res)
- break
- except Exception as e:
- chatbot.append([f"第{j}次代码生成尝试,失败了", f"错误追踪\n```\n{trimmed_format_exc()}\n```\n"])
-            yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # code generation finished; start executing it
- try:
- import multiprocessing
- manager = multiprocessing.Manager()
- return_dict = manager.dict()
-
- p = multiprocessing.Process(target=subprocess_worker, args=(instance, file_path, return_dict))
- # only has 10 seconds to run
- p.start(); p.join(timeout=10)
- if p.is_alive(): p.terminate(); p.join()
- p.close()
- res = return_dict['result']
- # res = instance.run(file_path)
- except Exception as e:
- chatbot.append(["执行失败了", f"错误追踪\n```\n{trimmed_format_exc()}\n```\n"])
- # chatbot.append(["如果是缺乏依赖,请参考以下建议", installation_advance])
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # finished successfully; wrap up
- res = str(res)
- if os.path.exists(res):
- chatbot.append(["执行成功了,结果是一个有效文件", "结果:" + res])
- new_file_path = promote_file_to_downloadzone(res, chatbot=chatbot)
- chatbot = for_immediate_show_off_when_possible(file_type, new_file_path, chatbot)
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- else:
- chatbot.append(["执行成功了,结果是一个字符串", "结果:" + res])
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-"""
-测试:
- 裁剪图像,保留下半部分
- 交换图像的蓝色通道和红色通道
- 将图像转为灰度图像
- 将csv文件转excel表格
-"""
\ No newline at end of file
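
The plugin above hinges on `get_code_block`, which pulls the fenced code block out of the model reply with a regex. A standalone sketch of that extraction step on a made-up reply string; it uses `str.removeprefix` (Python 3.9+) instead of the original's `strip('python')`, which strips characters rather than a prefix:

```python
# Sketch of the fenced-code-block extraction performed by get_code_block above.
import re

FENCE = "`" * 3  # build the triple backtick at runtime so this example stays one block
reply = (
    "Here is the rewritten function:\n"
    f"{FENCE}python\n"
    "class TerminalFunction(object):\n"
    "    def run(self, path):\n"
    "        return path.upper()\n"
    f"{FENCE}\n"
)

pattern = rf"{FENCE}([\s\S]*?){FENCE}"  # non-greedy match between the two fences
code = re.findall(pattern, reply)[0]
code = code.removeprefix("python")      # drop the language tag (Python 3.9+)
print(code.strip())
```
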
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_datasets/synthtext.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_datasets/synthtext.py
deleted file mode 100644
index fb9a44b3422dae5a9788d39b0901335dfc6076a9..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_datasets/synthtext.py
+++ /dev/null
@@ -1,18 +0,0 @@
-dataset_type = 'TextDetDataset'
-data_root = 'data/synthtext'
-
-train = dict(
- type=dataset_type,
- ann_file=f'{data_root}/instances_training.lmdb',
- loader=dict(
- type='AnnFileLoader',
- repeat=1,
- file_format='lmdb',
- parser=dict(
- type='LineJsonParser',
- keys=['file_name', 'height', 'width', 'annotations'])),
- img_prefix=f'{data_root}/imgs',
- pipeline=None)
-
-train_list = [train]
-test_list = [train]
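
Configs like the one above are plain Python files that downstream mmocr/mmdet tooling loads through mmcv's Config loader; the `{data_root}` f-strings are resolved at load time. A minimal sketch of reading the file back, assuming the mmcv 1.x series used by this generation of mmocr is installed and the file above is saved locally as `synthtext.py`:

```python
# Sketch: loading the dataset config above and inspecting the resolved fields.
from mmcv import Config  # mmcv 1.x config loader

cfg = Config.fromfile("synthtext.py")  # assumes the file above was saved locally
print(cfg.train.type)       # 'TextDetDataset'
print(cfg.train.ann_file)   # 'data/synthtext/instances_training.lmdb'
print(len(cfg.train_list), len(cfg.test_list))  # both lists reference the same dict
```
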
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/ocr_mask_rcnn_r50_fpn_ohem.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/ocr_mask_rcnn_r50_fpn_ohem.py
deleted file mode 100644
index 843fd36fc60682706503120f16866ba511cf7310..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/ocr_mask_rcnn_r50_fpn_ohem.py
+++ /dev/null
@@ -1,126 +0,0 @@
-# model settings
-model = dict(
- type='OCRMaskRCNN',
- backbone=dict(
- type='mmdet.ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='mmdet.FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- num_outs=5),
- rpn_head=dict(
- type='RPNHead',
- in_channels=256,
- feat_channels=256,
- anchor_generator=dict(
- type='AnchorGenerator',
- scales=[4],
- ratios=[0.17, 0.44, 1.13, 2.90, 7.46],
- strides=[4, 8, 16, 32, 64]),
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
- loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
- roi_head=dict(
- type='StandardRoIHead',
- bbox_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32]),
- bbox_head=dict(
- type='Shared2FCBBoxHead',
- in_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=1,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.1, 0.1, 0.2, 0.2]),
- reg_class_agnostic=False,
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
- mask_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32]),
- mask_head=dict(
- type='FCNMaskHead',
- num_convs=4,
- in_channels=256,
- conv_out_channels=256,
- num_classes=1,
- loss_mask=dict(
- type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))),
-
- # model training and testing settings
- train_cfg=dict(
- rpn=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.7,
- neg_iou_thr=0.3,
- min_pos_iou=0.3,
- match_low_quality=True,
- ignore_iof_thr=-1,
- gpu_assign_thr=50),
- sampler=dict(
- type='RandomSampler',
- num=256,
- pos_fraction=0.5,
- neg_pos_ub=-1,
- add_gt_as_proposals=False),
- allowed_border=-1,
- pos_weight=-1,
- debug=False),
- rpn_proposal=dict(
- nms_across_levels=False,
- nms_pre=2000,
- nms_post=1000,
- max_per_img=1000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.5,
- min_pos_iou=0.5,
- match_low_quality=True,
- ignore_iof_thr=-1),
- sampler=dict(
- type='OHEMSampler',
- num=512,
- pos_fraction=0.25,
- neg_pos_ub=-1,
- add_gt_as_proposals=True),
- mask_size=28,
- pos_weight=-1,
- debug=False)),
- test_cfg=dict(
- rpn=dict(
- nms_across_levels=False,
- nms_pre=1000,
- nms_post=1000,
- max_per_img=1000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.5),
- max_per_img=100,
- mask_thr_binary=0.5)))
diff --git a/spaces/LucasCodeBreak/MusicGen/audiocraft/utils/autocast.py b/spaces/LucasCodeBreak/MusicGen/audiocraft/utils/autocast.py
deleted file mode 100644
index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000
--- a/spaces/LucasCodeBreak/MusicGen/audiocraft/utils/autocast.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-
-class TorchAutocast:
- """TorchAutocast utility class.
-    Allows you to enable and disable autocast. This is especially useful
- when dealing with different architectures and clusters with different
- levels of support.
-
- Args:
- enabled (bool): Whether to enable torch.autocast or not.
- args: Additional args for torch.autocast.
- kwargs: Additional kwargs for torch.autocast
- """
- def __init__(self, enabled: bool, *args, **kwargs):
- self.autocast = torch.autocast(*args, **kwargs) if enabled else None
-
- def __enter__(self):
- if self.autocast is None:
- return
- try:
- self.autocast.__enter__()
- except RuntimeError:
- device = self.autocast.device
- dtype = self.autocast.fast_dtype
- raise RuntimeError(
- f"There was an error autocasting with dtype={dtype} device={device}\n"
- "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16"
- )
-
- def __exit__(self, *args, **kwargs):
- if self.autocast is None:
- return
- self.autocast.__exit__(*args, **kwargs)
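
A short usage sketch for the `TorchAutocast` wrapper above, assuming `audiocraft` is installed so the class can be imported from its original location; CPU with bfloat16 is used only because it runs on machines without a GPU:

```python
# Usage sketch for the TorchAutocast helper defined above.
import torch
from audiocraft.utils.autocast import TorchAutocast  # original module path

x = torch.randn(8, 8)
y = torch.randn(8, 8)

# Enabled: the matmul runs under CPU autocast and comes back in bfloat16.
with TorchAutocast(enabled=True, device_type="cpu", dtype=torch.bfloat16):
    z = x @ y
print(z.dtype)  # torch.bfloat16

# Disabled: the wrapper is a no-op and the matmul stays in float32.
with TorchAutocast(enabled=False, device_type="cpu", dtype=torch.bfloat16):
    z = x @ y
print(z.dtype)  # torch.float32
```
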
diff --git a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/monotonic_align/core.c b/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/monotonic_align/core.c
deleted file mode 100644
index c7a79724ca39aa96370928183b0c14015468dff6..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/monotonic_align/core.c
+++ /dev/null
@@ -1,21608 +0,0 @@
-/* Generated by Cython 0.29.33 */
-
-/* BEGIN: Cython Metadata
-{
- "distutils": {
- "name": "monotonic_align.core",
- "sources": [
- "core.pyx"
- ]
- },
- "module_name": "monotonic_align.core"
-}
-END: Cython Metadata */
-
-#ifndef PY_SSIZE_T_CLEAN
-#define PY_SSIZE_T_CLEAN
-#endif /* PY_SSIZE_T_CLEAN */
-#include "Python.h"
-#ifndef Py_PYTHON_H
- #error Python headers needed to compile C extensions, please install development version of Python.
-#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000)
- #error Cython requires Python 2.6+ or Python 3.3+.
-#else
-#define CYTHON_ABI "0_29_33"
-#define CYTHON_HEX_VERSION 0x001D21F0
-#define CYTHON_FUTURE_DIVISION 0
-#include <stddef.h>
-#ifndef offsetof
- #define offsetof(type, member) ( (size_t) & ((type*)0) -> member )
-#endif
-#if !defined(WIN32) && !defined(MS_WINDOWS)
- #ifndef __stdcall
- #define __stdcall
- #endif
- #ifndef __cdecl
- #define __cdecl
- #endif
- #ifndef __fastcall
- #define __fastcall
- #endif
-#endif
-#ifndef DL_IMPORT
- #define DL_IMPORT(t) t
-#endif
-#ifndef DL_EXPORT
- #define DL_EXPORT(t) t
-#endif
-#define __PYX_COMMA ,
-#ifndef HAVE_LONG_LONG
- #if PY_VERSION_HEX >= 0x02070000
- #define HAVE_LONG_LONG
- #endif
-#endif
-#ifndef PY_LONG_LONG
- #define PY_LONG_LONG LONG_LONG
-#endif
-#ifndef Py_HUGE_VAL
- #define Py_HUGE_VAL HUGE_VAL
-#endif
-#ifdef PYPY_VERSION
- #define CYTHON_COMPILING_IN_PYPY 1
- #define CYTHON_COMPILING_IN_PYSTON 0
- #define CYTHON_COMPILING_IN_CPYTHON 0
- #define CYTHON_COMPILING_IN_NOGIL 0
- #undef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 0
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #if PY_VERSION_HEX < 0x03050000
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #elif !defined(CYTHON_USE_ASYNC_SLOTS)
- #define CYTHON_USE_ASYNC_SLOTS 1
- #endif
- #undef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 0
- #undef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 0
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #undef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 1
- #undef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 0
- #undef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 0
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #undef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL 0
- #undef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT 0
- #undef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE 0
- #undef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS 0
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
- #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC
- #define CYTHON_UPDATE_DESCRIPTOR_DOC 0
- #endif
-#elif defined(PYSTON_VERSION)
- #define CYTHON_COMPILING_IN_PYPY 0
- #define CYTHON_COMPILING_IN_PYSTON 1
- #define CYTHON_COMPILING_IN_CPYTHON 0
- #define CYTHON_COMPILING_IN_NOGIL 0
- #ifndef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 1
- #endif
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #undef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 0
- #ifndef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 1
- #endif
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #ifndef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 0
- #endif
- #ifndef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 1
- #endif
- #ifndef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 1
- #endif
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #undef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL 0
- #undef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT 0
- #undef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE 0
- #undef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS 0
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
- #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC
- #define CYTHON_UPDATE_DESCRIPTOR_DOC 0
- #endif
-#elif defined(PY_NOGIL)
- #define CYTHON_COMPILING_IN_PYPY 0
- #define CYTHON_COMPILING_IN_PYSTON 0
- #define CYTHON_COMPILING_IN_CPYTHON 0
- #define CYTHON_COMPILING_IN_NOGIL 1
- #ifndef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 1
- #endif
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #ifndef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 1
- #endif
- #undef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 0
- #ifndef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 1
- #endif
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #ifndef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 0
- #endif
- #ifndef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 1
- #endif
- #ifndef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 1
- #endif
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #undef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL 0
- #ifndef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT 1
- #endif
- #ifndef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE 1
- #endif
- #undef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS 0
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
-#else
- #define CYTHON_COMPILING_IN_PYPY 0
- #define CYTHON_COMPILING_IN_PYSTON 0
- #define CYTHON_COMPILING_IN_CPYTHON 1
- #define CYTHON_COMPILING_IN_NOGIL 0
- #ifndef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 1
- #endif
- #if PY_VERSION_HEX < 0x02070000
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #elif !defined(CYTHON_USE_PYTYPE_LOOKUP)
- #define CYTHON_USE_PYTYPE_LOOKUP 1
- #endif
- #if PY_MAJOR_VERSION < 3
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #elif !defined(CYTHON_USE_ASYNC_SLOTS)
- #define CYTHON_USE_ASYNC_SLOTS 1
- #endif
- #if PY_VERSION_HEX < 0x02070000
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #elif !defined(CYTHON_USE_PYLONG_INTERNALS)
- #define CYTHON_USE_PYLONG_INTERNALS 1
- #endif
- #ifndef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 1
- #endif
- #ifndef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 1
- #endif
- #if PY_VERSION_HEX < 0x030300F0 || PY_VERSION_HEX >= 0x030B00A2
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #elif !defined(CYTHON_USE_UNICODE_WRITER)
- #define CYTHON_USE_UNICODE_WRITER 1
- #endif
- #ifndef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 0
- #endif
- #ifndef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 1
- #endif
- #ifndef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 1
- #endif
- #if PY_VERSION_HEX >= 0x030B00A4
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #elif !defined(CYTHON_FAST_THREAD_STATE)
- #define CYTHON_FAST_THREAD_STATE 1
- #endif
- #ifndef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL (PY_VERSION_HEX < 0x030A0000)
- #endif
- #ifndef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000)
- #endif
- #ifndef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1)
- #endif
- #ifndef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1)
- #endif
- #if PY_VERSION_HEX >= 0x030B00A4
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
- #elif !defined(CYTHON_USE_EXC_INFO_STACK)
- #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3)
- #endif
- #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC
- #define CYTHON_UPDATE_DESCRIPTOR_DOC 1
- #endif
-#endif
-#if !defined(CYTHON_FAST_PYCCALL)
-#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1)
-#endif
-#if CYTHON_USE_PYLONG_INTERNALS
- #if PY_MAJOR_VERSION < 3
- #include "longintrepr.h"
- #endif
- #undef SHIFT
- #undef BASE
- #undef MASK
- #ifdef SIZEOF_VOID_P
- enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) };
- #endif
-#endif
-#ifndef __has_attribute
- #define __has_attribute(x) 0
-#endif
-#ifndef __has_cpp_attribute
- #define __has_cpp_attribute(x) 0
-#endif
-#ifndef CYTHON_RESTRICT
- #if defined(__GNUC__)
- #define CYTHON_RESTRICT __restrict__
- #elif defined(_MSC_VER) && _MSC_VER >= 1400
- #define CYTHON_RESTRICT __restrict
- #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define CYTHON_RESTRICT restrict
- #else
- #define CYTHON_RESTRICT
- #endif
-#endif
-#ifndef CYTHON_UNUSED
-# if defined(__GNUC__)
-# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))
-# define CYTHON_UNUSED __attribute__ ((__unused__))
-# else
-# define CYTHON_UNUSED
-# endif
-# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))
-# define CYTHON_UNUSED __attribute__ ((__unused__))
-# else
-# define CYTHON_UNUSED
-# endif
-#endif
-#ifndef CYTHON_MAYBE_UNUSED_VAR
-# if defined(__cplusplus)
-     template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }
-# else
-# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)
-# endif
-#endif
-#ifndef CYTHON_NCP_UNUSED
-# if CYTHON_COMPILING_IN_CPYTHON
-# define CYTHON_NCP_UNUSED
-# else
-# define CYTHON_NCP_UNUSED CYTHON_UNUSED
-# endif
-#endif
-#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)
-#ifdef _MSC_VER
- #ifndef _MSC_STDINT_H_
- #if _MSC_VER < 1300
- typedef unsigned char uint8_t;
- typedef unsigned int uint32_t;
- #else
- typedef unsigned __int8 uint8_t;
- typedef unsigned __int32 uint32_t;
- #endif
- #endif
-#else
-    #include <stdint.h>
-#endif
-#ifndef CYTHON_FALLTHROUGH
- #if defined(__cplusplus) && __cplusplus >= 201103L
- #if __has_cpp_attribute(fallthrough)
- #define CYTHON_FALLTHROUGH [[fallthrough]]
- #elif __has_cpp_attribute(clang::fallthrough)
- #define CYTHON_FALLTHROUGH [[clang::fallthrough]]
- #elif __has_cpp_attribute(gnu::fallthrough)
- #define CYTHON_FALLTHROUGH [[gnu::fallthrough]]
- #endif
- #endif
- #ifndef CYTHON_FALLTHROUGH
- #if __has_attribute(fallthrough)
- #define CYTHON_FALLTHROUGH __attribute__((fallthrough))
- #else
- #define CYTHON_FALLTHROUGH
- #endif
- #endif
- #if defined(__clang__ ) && defined(__apple_build_version__)
- #if __apple_build_version__ < 7000000
- #undef CYTHON_FALLTHROUGH
- #define CYTHON_FALLTHROUGH
- #endif
- #endif
-#endif
-
-#ifndef CYTHON_INLINE
- #if defined(__clang__)
- #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))
- #elif defined(__GNUC__)
- #define CYTHON_INLINE __inline__
- #elif defined(_MSC_VER)
- #define CYTHON_INLINE __inline
- #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define CYTHON_INLINE inline
- #else
- #define CYTHON_INLINE
- #endif
-#endif
-
-#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag)
- #define Py_OptimizeFlag 0
-#endif
-#define __PYX_BUILD_PY_SSIZE_T "n"
-#define CYTHON_FORMAT_SSIZE_T "z"
-#if PY_MAJOR_VERSION < 3
- #define __Pyx_BUILTIN_MODULE_NAME "__builtin__"
- #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
- PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
- #define __Pyx_DefaultClassType PyClass_Type
-#else
- #define __Pyx_BUILTIN_MODULE_NAME "builtins"
- #define __Pyx_DefaultClassType PyType_Type
-#if PY_VERSION_HEX >= 0x030B00A1
- static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int k, int l, int s, int f,
- PyObject *code, PyObject *c, PyObject* n, PyObject *v,
- PyObject *fv, PyObject *cell, PyObject* fn,
- PyObject *name, int fline, PyObject *lnos) {
- PyObject *kwds=NULL, *argcount=NULL, *posonlyargcount=NULL, *kwonlyargcount=NULL;
- PyObject *nlocals=NULL, *stacksize=NULL, *flags=NULL, *replace=NULL, *call_result=NULL, *empty=NULL;
- const char *fn_cstr=NULL;
- const char *name_cstr=NULL;
- PyCodeObject* co=NULL;
- PyObject *type, *value, *traceback;
- PyErr_Fetch(&type, &value, &traceback);
- if (!(kwds=PyDict_New())) goto end;
- if (!(argcount=PyLong_FromLong(a))) goto end;
- if (PyDict_SetItemString(kwds, "co_argcount", argcount) != 0) goto end;
- if (!(posonlyargcount=PyLong_FromLong(0))) goto end;
- if (PyDict_SetItemString(kwds, "co_posonlyargcount", posonlyargcount) != 0) goto end;
- if (!(kwonlyargcount=PyLong_FromLong(k))) goto end;
- if (PyDict_SetItemString(kwds, "co_kwonlyargcount", kwonlyargcount) != 0) goto end;
- if (!(nlocals=PyLong_FromLong(l))) goto end;
- if (PyDict_SetItemString(kwds, "co_nlocals", nlocals) != 0) goto end;
- if (!(stacksize=PyLong_FromLong(s))) goto end;
- if (PyDict_SetItemString(kwds, "co_stacksize", stacksize) != 0) goto end;
- if (!(flags=PyLong_FromLong(f))) goto end;
- if (PyDict_SetItemString(kwds, "co_flags", flags) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_code", code) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_consts", c) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_names", n) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_varnames", v) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_freevars", fv) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_cellvars", cell) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_linetable", lnos) != 0) goto end;
- if (!(fn_cstr=PyUnicode_AsUTF8AndSize(fn, NULL))) goto end;
- if (!(name_cstr=PyUnicode_AsUTF8AndSize(name, NULL))) goto end;
- if (!(co = PyCode_NewEmpty(fn_cstr, name_cstr, fline))) goto end;
- if (!(replace = PyObject_GetAttrString((PyObject*)co, "replace"))) goto cleanup_code_too;
- if (!(empty = PyTuple_New(0))) goto cleanup_code_too; // unfortunately __pyx_empty_tuple isn't available here
- if (!(call_result = PyObject_Call(replace, empty, kwds))) goto cleanup_code_too;
- Py_XDECREF((PyObject*)co);
- co = (PyCodeObject*)call_result;
- call_result = NULL;
- if (0) {
- cleanup_code_too:
- Py_XDECREF((PyObject*)co);
- co = NULL;
- }
- end:
- Py_XDECREF(kwds);
- Py_XDECREF(argcount);
- Py_XDECREF(posonlyargcount);
- Py_XDECREF(kwonlyargcount);
- Py_XDECREF(nlocals);
- Py_XDECREF(stacksize);
- Py_XDECREF(replace);
- Py_XDECREF(call_result);
- Py_XDECREF(empty);
- if (type) {
- PyErr_Restore(type, value, traceback);
- }
- return co;
- }
-#else
- #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
- PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
-#endif
- #define __Pyx_DefaultClassType PyType_Type
-#endif
-#ifndef Py_TPFLAGS_CHECKTYPES
- #define Py_TPFLAGS_CHECKTYPES 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_INDEX
- #define Py_TPFLAGS_HAVE_INDEX 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_NEWBUFFER
- #define Py_TPFLAGS_HAVE_NEWBUFFER 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_FINALIZE
- #define Py_TPFLAGS_HAVE_FINALIZE 0
-#endif
-#ifndef METH_STACKLESS
- #define METH_STACKLESS 0
-#endif
-#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL)
- #ifndef METH_FASTCALL
- #define METH_FASTCALL 0x80
- #endif
- typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs);
- typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args,
- Py_ssize_t nargs, PyObject *kwnames);
-#else
- #define __Pyx_PyCFunctionFast _PyCFunctionFast
- #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords
-#endif
-#if CYTHON_FAST_PYCCALL
-#define __Pyx_PyFastCFunction_Check(func)\
- ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS)))))
-#else
-#define __Pyx_PyFastCFunction_Check(func) 0
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)
- #define PyObject_Malloc(s) PyMem_Malloc(s)
- #define PyObject_Free(p) PyMem_Free(p)
- #define PyObject_Realloc(p) PyMem_Realloc(p)
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1
- #define PyMem_RawMalloc(n) PyMem_Malloc(n)
- #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n)
- #define PyMem_RawFree(p) PyMem_Free(p)
-#endif
-#if CYTHON_COMPILING_IN_PYSTON
- #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co)
- #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)
-#else
- #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0)
- #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
-#endif
-#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000
- #define __Pyx_PyThreadState_Current PyThreadState_GET()
-#elif PY_VERSION_HEX >= 0x03060000
- #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet()
-#elif PY_VERSION_HEX >= 0x03000000
- #define __Pyx_PyThreadState_Current PyThreadState_GET()
-#else
- #define __Pyx_PyThreadState_Current _PyThreadState_Current
-#endif
-#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT)
-#include "pythread.h"
-#define Py_tss_NEEDS_INIT 0
-typedef int Py_tss_t;
-static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) {
- *key = PyThread_create_key();
- return 0;
-}
-static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) {
- Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t));
- *key = Py_tss_NEEDS_INIT;
- return key;
-}
-static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) {
- PyObject_Free(key);
-}
-static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) {
- return *key != Py_tss_NEEDS_INIT;
-}
-static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) {
- PyThread_delete_key(*key);
- *key = Py_tss_NEEDS_INIT;
-}
-static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) {
- return PyThread_set_key_value(*key, value);
-}
-static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) {
- return PyThread_get_key_value(*key);
-}
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized)
-#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n))
-#else
-#define __Pyx_PyDict_NewPresized(n) PyDict_New()
-#endif
-#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION
- #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y)
- #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y)
-#else
- #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y)
- #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y)
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS
-#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash)
-#else
-#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name)
-#endif
-#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND)
- #define CYTHON_PEP393_ENABLED 1
- #if PY_VERSION_HEX >= 0x030C0000
- #define __Pyx_PyUnicode_READY(op) (0)
- #else
- #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\
- 0 : _PyUnicode_Ready((PyObject *)(op)))
- #endif
- #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u)
- #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i)
- #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u)
- #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u)
- #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u)
- #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i)
- #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch)
- #if PY_VERSION_HEX >= 0x030C0000
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u))
- #else
- #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length))
- #else
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u)))
- #endif
- #endif
-#else
- #define CYTHON_PEP393_ENABLED 0
- #define PyUnicode_1BYTE_KIND 1
- #define PyUnicode_2BYTE_KIND 2
- #define PyUnicode_4BYTE_KIND 4
- #define __Pyx_PyUnicode_READY(op) (0)
- #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u)
- #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i]))
- #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111)
- #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE))
- #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u))
- #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i]))
- #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch)
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u))
-#endif
-#if CYTHON_COMPILING_IN_PYPY
- #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b)
- #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b)
-#else
- #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b)
- #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\
- PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b))
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains)
- #define PyUnicode_Contains(u, s) PySequence_Contains(u, s)
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check)
- #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type)
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format)
- #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt)
-#endif
-#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))
-#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b)
-#else
- #define __Pyx_PyString_Format(a, b) PyString_Format(a, b)
-#endif
-#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII)
- #define PyObject_ASCII(o) PyObject_Repr(o)
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define PyBaseString_Type PyUnicode_Type
- #define PyStringObject PyUnicodeObject
- #define PyString_Type PyUnicode_Type
- #define PyString_Check PyUnicode_Check
- #define PyString_CheckExact PyUnicode_CheckExact
-#ifndef PyObject_Unicode
- #define PyObject_Unicode PyObject_Str
-#endif
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj)
- #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj)
-#else
- #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj))
- #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj))
-#endif
-#ifndef PySet_CheckExact
- #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type)
-#endif
-#if PY_VERSION_HEX >= 0x030900A4
- #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt)
- #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size)
-#else
- #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt)
- #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size)
-#endif
-#if CYTHON_ASSUME_SAFE_MACROS
- #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq)
-#else
- #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq)
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define PyIntObject PyLongObject
- #define PyInt_Type PyLong_Type
- #define PyInt_Check(op) PyLong_Check(op)
- #define PyInt_CheckExact(op) PyLong_CheckExact(op)
- #define PyInt_FromString PyLong_FromString
- #define PyInt_FromUnicode PyLong_FromUnicode
- #define PyInt_FromLong PyLong_FromLong
- #define PyInt_FromSize_t PyLong_FromSize_t
- #define PyInt_FromSsize_t PyLong_FromSsize_t
- #define PyInt_AsLong PyLong_AsLong
- #define PyInt_AS_LONG PyLong_AS_LONG
- #define PyInt_AsSsize_t PyLong_AsSsize_t
- #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask
- #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask
- #define PyNumber_Int PyNumber_Long
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define PyBoolObject PyLongObject
-#endif
-#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY
- #ifndef PyUnicode_InternFromString
- #define PyUnicode_InternFromString(s) PyUnicode_FromString(s)
- #endif
-#endif
-#if PY_VERSION_HEX < 0x030200A4
- typedef long Py_hash_t;
- #define __Pyx_PyInt_FromHash_t PyInt_FromLong
- #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsHash_t
-#else
- #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t
- #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsSsize_t
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyMethod_New(func, self, klass) ((self) ? ((void)(klass), PyMethod_New(func, self)) : __Pyx_NewRef(func))
-#else
- #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)
-#endif
-#if CYTHON_USE_ASYNC_SLOTS
- #if PY_VERSION_HEX >= 0x030500B1
- #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods
- #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)
- #else
- #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))
- #endif
-#else
- #define __Pyx_PyType_AsAsync(obj) NULL
-#endif
-#ifndef __Pyx_PyAsyncMethodsStruct
- typedef struct {
- unaryfunc am_await;
- unaryfunc am_aiter;
- unaryfunc am_anext;
- } __Pyx_PyAsyncMethodsStruct;
-#endif
-
-#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS)
- #if !defined(_USE_MATH_DEFINES)
- #define _USE_MATH_DEFINES
- #endif
-#endif
-#include <math.h>
-#ifdef NAN
-#define __PYX_NAN() ((float) NAN)
-#else
-static CYTHON_INLINE float __PYX_NAN() {
- float value;
- memset(&value, 0xFF, sizeof(value));
- return value;
-}
-#endif
-#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)
-#define __Pyx_truncl trunc
-#else
-#define __Pyx_truncl truncl
-#endif
-
-#define __PYX_MARK_ERR_POS(f_index, lineno) \
- { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; }
-#define __PYX_ERR(f_index, lineno, Ln_error) \
- { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; }
-
-#ifndef __PYX_EXTERN_C
- #ifdef __cplusplus
- #define __PYX_EXTERN_C extern "C"
- #else
- #define __PYX_EXTERN_C extern
- #endif
-#endif
-
-#define __PYX_HAVE__monotonic_align__core
-#define __PYX_HAVE_API__monotonic_align__core
-/* Early includes */
-#include "pythread.h"
-#include <string.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include "pystate.h"
-#ifdef _OPENMP
-#include <omp.h>
-#endif /* _OPENMP */
-
-#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS)
-#define CYTHON_WITHOUT_ASSERTIONS
-#endif
-
-typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding;
- const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry;
-
-#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0
-#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0
-#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8)
-#define __PYX_DEFAULT_STRING_ENCODING ""
-#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString
-#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
-#define __Pyx_uchar_cast(c) ((unsigned char)c)
-#define __Pyx_long_cast(x) ((long)x)
-#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\
- (sizeof(type) < sizeof(Py_ssize_t)) ||\
- (sizeof(type) > sizeof(Py_ssize_t) &&\
- likely(v < (type)PY_SSIZE_T_MAX ||\
- v == (type)PY_SSIZE_T_MAX) &&\
- (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\
- v == (type)PY_SSIZE_T_MIN))) ||\
- (sizeof(type) == sizeof(Py_ssize_t) &&\
- (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\
- v == (type)PY_SSIZE_T_MAX))) )
-static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) {
- return (size_t) i < (size_t) limit;
-}
-#if defined (__cplusplus) && __cplusplus >= 201103L
-    #include <cstdlib>
- #define __Pyx_sst_abs(value) std::abs(value)
-#elif SIZEOF_INT >= SIZEOF_SIZE_T
- #define __Pyx_sst_abs(value) abs(value)
-#elif SIZEOF_LONG >= SIZEOF_SIZE_T
- #define __Pyx_sst_abs(value) labs(value)
-#elif defined (_MSC_VER)
- #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value))
-#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define __Pyx_sst_abs(value) llabs(value)
-#elif defined (__GNUC__)
- #define __Pyx_sst_abs(value) __builtin_llabs(value)
-#else
- #define __Pyx_sst_abs(value) ((value<0) ? -value : value)
-#endif
-static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*);
-static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length);
-#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s))
-#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l)
-#define __Pyx_PyBytes_FromString PyBytes_FromString
-#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize
-static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*);
-#if PY_MAJOR_VERSION < 3
- #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString
- #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
-#else
- #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString
- #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize
-#endif
-#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s)
-#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s)
-#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s)
-#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s)
-#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s)
-static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {
- const Py_UNICODE *u_end = u;
- while (*u_end++) ;
- return (size_t)(u_end - u - 1);
-}
-#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u))
-#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode
-#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode
-#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj)
-#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None)
-static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b);
-static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*);
-static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*);
-static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x);
-#define __Pyx_PySequence_Tuple(obj)\
- (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj))
-static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*);
-static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t);
-static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*);
-#if CYTHON_ASSUME_SAFE_MACROS
-#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x))
-#else
-#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x)
-#endif
-#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x))
-#if PY_MAJOR_VERSION >= 3
-#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x))
-#else
-#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x))
-#endif
-#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x))
-#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
-static int __Pyx_sys_getdefaultencoding_not_ascii;
-static int __Pyx_init_sys_getdefaultencoding_params(void) {
- PyObject* sys;
- PyObject* default_encoding = NULL;
- PyObject* ascii_chars_u = NULL;
- PyObject* ascii_chars_b = NULL;
- const char* default_encoding_c;
- sys = PyImport_ImportModule("sys");
- if (!sys) goto bad;
- default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL);
- Py_DECREF(sys);
- if (!default_encoding) goto bad;
- default_encoding_c = PyBytes_AsString(default_encoding);
- if (!default_encoding_c) goto bad;
- if (strcmp(default_encoding_c, "ascii") == 0) {
- __Pyx_sys_getdefaultencoding_not_ascii = 0;
- } else {
- char ascii_chars[128];
- int c;
- for (c = 0; c < 128; c++) {
- ascii_chars[c] = c;
- }
- __Pyx_sys_getdefaultencoding_not_ascii = 1;
- ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL);
- if (!ascii_chars_u) goto bad;
- ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL);
- if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) {
- PyErr_Format(
- PyExc_ValueError,
- "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.",
- default_encoding_c);
- goto bad;
- }
- Py_DECREF(ascii_chars_u);
- Py_DECREF(ascii_chars_b);
- }
- Py_DECREF(default_encoding);
- return 0;
-bad:
- Py_XDECREF(default_encoding);
- Py_XDECREF(ascii_chars_u);
- Py_XDECREF(ascii_chars_b);
- return -1;
-}
-#endif
-#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3
-#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL)
-#else
-#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL)
-#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT
-static char* __PYX_DEFAULT_STRING_ENCODING;
-static int __Pyx_init_sys_getdefaultencoding_params(void) {
- PyObject* sys;
- PyObject* default_encoding = NULL;
- char* default_encoding_c;
- sys = PyImport_ImportModule("sys");
- if (!sys) goto bad;
- default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL);
- Py_DECREF(sys);
- if (!default_encoding) goto bad;
- default_encoding_c = PyBytes_AsString(default_encoding);
- if (!default_encoding_c) goto bad;
- __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1);
- if (!__PYX_DEFAULT_STRING_ENCODING) goto bad;
- strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c);
- Py_DECREF(default_encoding);
- return 0;
-bad:
- Py_XDECREF(default_encoding);
- return -1;
-}
-#endif
-#endif
-
-
-/* Test for GCC > 2.95 */
-#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))
- #define likely(x) __builtin_expect(!!(x), 1)
- #define unlikely(x) __builtin_expect(!!(x), 0)
-#else /* !__GNUC__ or GCC < 2.95 */
- #define likely(x) (x)
- #define unlikely(x) (x)
-#endif /* __GNUC__ */
-static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; }
-
-static PyObject *__pyx_m = NULL;
-static PyObject *__pyx_d;
-static PyObject *__pyx_b;
-static PyObject *__pyx_cython_runtime = NULL;
-static PyObject *__pyx_empty_tuple;
-static PyObject *__pyx_empty_bytes;
-static PyObject *__pyx_empty_unicode;
-static int __pyx_lineno;
-static int __pyx_clineno = 0;
-static const char * __pyx_cfilenm= __FILE__;
-static const char *__pyx_filename;
-
-
-static const char *__pyx_f[] = {
- "core.pyx",
- "stringsource",
-};
-/* NoFastGil.proto */
-#define __Pyx_PyGILState_Ensure PyGILState_Ensure
-#define __Pyx_PyGILState_Release PyGILState_Release
-#define __Pyx_FastGIL_Remember()
-#define __Pyx_FastGIL_Forget()
-#define __Pyx_FastGilFuncInit()
-
-/* MemviewSliceStruct.proto */
-struct __pyx_memoryview_obj;
-typedef struct {
- struct __pyx_memoryview_obj *memview;
- char *data;
- Py_ssize_t shape[8];
- Py_ssize_t strides[8];
- Py_ssize_t suboffsets[8];
-} __Pyx_memviewslice;
-#define __Pyx_MemoryView_Len(m) (m.shape[0])
-
-/* Atomics.proto */
-#include <pythread.h>
-#ifndef CYTHON_ATOMICS
- #define CYTHON_ATOMICS 1
-#endif
-#define __PYX_CYTHON_ATOMICS_ENABLED() CYTHON_ATOMICS
-#define __pyx_atomic_int_type int
-#if CYTHON_ATOMICS && (__GNUC__ >= 5 || (__GNUC__ == 4 &&\
- (__GNUC_MINOR__ > 1 ||\
- (__GNUC_MINOR__ == 1 && __GNUC_PATCHLEVEL__ >= 2))))
- #define __pyx_atomic_incr_aligned(value) __sync_fetch_and_add(value, 1)
- #define __pyx_atomic_decr_aligned(value) __sync_fetch_and_sub(value, 1)
- #ifdef __PYX_DEBUG_ATOMICS
- #warning "Using GNU atomics"
- #endif
-#elif CYTHON_ATOMICS && defined(_MSC_VER) && CYTHON_COMPILING_IN_NOGIL
-    #include <intrin.h>
- #undef __pyx_atomic_int_type
- #define __pyx_atomic_int_type long
- #pragma intrinsic (_InterlockedExchangeAdd)
- #define __pyx_atomic_incr_aligned(value) _InterlockedExchangeAdd(value, 1)
- #define __pyx_atomic_decr_aligned(value) _InterlockedExchangeAdd(value, -1)
- #ifdef __PYX_DEBUG_ATOMICS
- #pragma message ("Using MSVC atomics")
- #endif
-#else
- #undef CYTHON_ATOMICS
- #define CYTHON_ATOMICS 0
- #ifdef __PYX_DEBUG_ATOMICS
- #warning "Not using atomics"
- #endif
-#endif
-typedef volatile __pyx_atomic_int_type __pyx_atomic_int;
-#if CYTHON_ATOMICS
- #define __pyx_add_acquisition_count(memview)\
- __pyx_atomic_incr_aligned(__pyx_get_slice_count_pointer(memview))
- #define __pyx_sub_acquisition_count(memview)\
- __pyx_atomic_decr_aligned(__pyx_get_slice_count_pointer(memview))
-#else
- #define __pyx_add_acquisition_count(memview)\
- __pyx_add_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock)
- #define __pyx_sub_acquisition_count(memview)\
- __pyx_sub_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock)
-#endif
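-/* Illustrative sketch (not generated code): the acquisition count records how
- * many live slices still reference a memoryview. With atomics available it is
- * bumped lock-free through the macros above; otherwise the locked helpers take
- * the per-view PyThread lock. The locked fallback is roughly shaped like this
- * hypothetical version (the real helpers are declared later in this file):
- */
-#if 0
-static int __pyx_example_add_count_locked(__pyx_atomic_int *count, PyThread_type_lock lock) {
- int old;
- PyThread_acquire_lock(lock, 1); /* 1 == WAIT_LOCK */
- old = (*count)++;
- PyThread_release_lock(lock);
- return old;
-}
-#endif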
-
-/* ForceInitThreads.proto */
-#ifndef __PYX_FORCE_INIT_THREADS
- #define __PYX_FORCE_INIT_THREADS 0
-#endif
-
-/* BufferFormatStructs.proto */
-#define IS_UNSIGNED(type) (((type) -1) > 0)
-struct __Pyx_StructField_;
-#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0)
-typedef struct {
- const char* name;
- struct __Pyx_StructField_* fields;
- size_t size;
- size_t arraysize[8];
- int ndim;
- char typegroup;
- char is_unsigned;
- int flags;
-} __Pyx_TypeInfo;
-typedef struct __Pyx_StructField_ {
- __Pyx_TypeInfo* type;
- const char* name;
- size_t offset;
-} __Pyx_StructField;
-typedef struct {
- __Pyx_StructField* field;
- size_t parent_offset;
-} __Pyx_BufFmt_StackElem;
-typedef struct {
- __Pyx_StructField root;
- __Pyx_BufFmt_StackElem* head;
- size_t fmt_offset;
- size_t new_count, enc_count;
- size_t struct_alignment;
- int is_complex;
- char enc_type;
- char new_packmode;
- char enc_packmode;
- char is_valid_array;
-} __Pyx_BufFmt_Context;
-
-
-/*--- Type declarations ---*/
-struct __pyx_array_obj;
-struct __pyx_MemviewEnum_obj;
-struct __pyx_memoryview_obj;
-struct __pyx_memoryviewslice_obj;
-struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each;
-
-/* "monotonic_align/core.pyx":7
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<<
- * cdef int x
- * cdef int y
- */
-struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each {
- int __pyx_n;
- float max_neg_val;
-};
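-/* Illustrative sketch (not generated code): Cython passes default arguments of
- * nogil cdef functions through this struct; __pyx_n counts how many optional
- * values the caller actually supplied. A hypothetical call that overrides
- * max_neg_val would look roughly like the following (guarded out; the target
- * function is declared further down in this file):
- */
-#if 0
-static void __pyx_example_call_with_default(__Pyx_memviewslice path, __Pyx_memviewslice value, int t_y, int t_x) {
- struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each opts;
- opts.__pyx_n = 1; /* one optional argument supplied */
- opts.max_neg_val = -1e9f;
- __pyx_f_15monotonic_align_4core_maximum_path_each(path, value, t_y, t_x, &opts);
-}
-#endif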
-
-/* "View.MemoryView":106
- *
- * @cname("__pyx_array")
- * cdef class array: # <<<<<<<<<<<<<<
- *
- * cdef:
- */
-struct __pyx_array_obj {
- PyObject_HEAD
- struct __pyx_vtabstruct_array *__pyx_vtab;
- char *data;
- Py_ssize_t len;
- char *format;
- int ndim;
- Py_ssize_t *_shape;
- Py_ssize_t *_strides;
- Py_ssize_t itemsize;
- PyObject *mode;
- PyObject *_format;
- void (*callback_free_data)(void *);
- int free_data;
- int dtype_is_object;
-};
-
-
-/* "View.MemoryView":280
- *
- * @cname('__pyx_MemviewEnum')
- * cdef class Enum(object): # <<<<<<<<<<<<<<
- * cdef object name
- * def __init__(self, name):
- */
-struct __pyx_MemviewEnum_obj {
- PyObject_HEAD
- PyObject *name;
-};
-
-
-/* "View.MemoryView":331
- *
- * @cname('__pyx_memoryview')
- * cdef class memoryview(object): # <<<<<<<<<<<<<<
- *
- * cdef object obj
- */
-struct __pyx_memoryview_obj {
- PyObject_HEAD
- struct __pyx_vtabstruct_memoryview *__pyx_vtab;
- PyObject *obj;
- PyObject *_size;
- PyObject *_array_interface;
- PyThread_type_lock lock;
- __pyx_atomic_int acquisition_count[2];
- __pyx_atomic_int *acquisition_count_aligned_p;
- Py_buffer view;
- int flags;
- int dtype_is_object;
- __Pyx_TypeInfo *typeinfo;
-};
-
-
-/* "View.MemoryView":967
- *
- * @cname('__pyx_memoryviewslice')
- * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<<
- * "Internal class for passing memoryview slices to Python"
- *
- */
-struct __pyx_memoryviewslice_obj {
- struct __pyx_memoryview_obj __pyx_base;
- __Pyx_memviewslice from_slice;
- PyObject *from_object;
- PyObject *(*to_object_func)(char *);
- int (*to_dtype_func)(char *, PyObject *);
-};
-
-
-
-/* "View.MemoryView":106
- *
- * @cname("__pyx_array")
- * cdef class array: # <<<<<<<<<<<<<<
- *
- * cdef:
- */
-
-struct __pyx_vtabstruct_array {
- PyObject *(*get_memview)(struct __pyx_array_obj *);
-};
-static struct __pyx_vtabstruct_array *__pyx_vtabptr_array;
-
-
-/* "View.MemoryView":331
- *
- * @cname('__pyx_memoryview')
- * cdef class memoryview(object): # <<<<<<<<<<<<<<
- *
- * cdef object obj
- */
-
-struct __pyx_vtabstruct_memoryview {
- char *(*get_item_pointer)(struct __pyx_memoryview_obj *, PyObject *);
- PyObject *(*is_slice)(struct __pyx_memoryview_obj *, PyObject *);
- PyObject *(*setitem_slice_assignment)(struct __pyx_memoryview_obj *, PyObject *, PyObject *);
- PyObject *(*setitem_slice_assign_scalar)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *);
- PyObject *(*setitem_indexed)(struct __pyx_memoryview_obj *, PyObject *, PyObject *);
- PyObject *(*convert_item_to_object)(struct __pyx_memoryview_obj *, char *);
- PyObject *(*assign_item_from_object)(struct __pyx_memoryview_obj *, char *, PyObject *);
-};
-static struct __pyx_vtabstruct_memoryview *__pyx_vtabptr_memoryview;
-
-
-/* "View.MemoryView":967
- *
- * @cname('__pyx_memoryviewslice')
- * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<<
- * "Internal class for passing memoryview slices to Python"
- *
- */
-
-struct __pyx_vtabstruct__memoryviewslice {
- struct __pyx_vtabstruct_memoryview __pyx_base;
-};
-static struct __pyx_vtabstruct__memoryviewslice *__pyx_vtabptr__memoryviewslice;
-
-/* --- Runtime support code (head) --- */
-/* Refnanny.proto */
-#ifndef CYTHON_REFNANNY
- #define CYTHON_REFNANNY 0
-#endif
-#if CYTHON_REFNANNY
- typedef struct {
- void (*INCREF)(void*, PyObject*, int);
- void (*DECREF)(void*, PyObject*, int);
- void (*GOTREF)(void*, PyObject*, int);
- void (*GIVEREF)(void*, PyObject*, int);
- void* (*SetupContext)(const char*, int, const char*);
- void (*FinishContext)(void**);
- } __Pyx_RefNannyAPIStruct;
- static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL;
- static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname);
- #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL;
-#ifdef WITH_THREAD
- #define __Pyx_RefNannySetupContext(name, acquire_gil)\
- if (acquire_gil) {\
- PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\
- PyGILState_Release(__pyx_gilstate_save);\
- } else {\
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\
- }
-#else
- #define __Pyx_RefNannySetupContext(name, acquire_gil)\
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__)
-#endif
- #define __Pyx_RefNannyFinishContext()\
- __Pyx_RefNanny->FinishContext(&__pyx_refnanny)
- #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0)
- #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0)
- #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0)
- #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0)
-#else
- #define __Pyx_RefNannyDeclarations
- #define __Pyx_RefNannySetupContext(name, acquire_gil)
- #define __Pyx_RefNannyFinishContext()
- #define __Pyx_INCREF(r) Py_INCREF(r)
- #define __Pyx_DECREF(r) Py_DECREF(r)
- #define __Pyx_GOTREF(r)
- #define __Pyx_GIVEREF(r)
- #define __Pyx_XINCREF(r) Py_XINCREF(r)
- #define __Pyx_XDECREF(r) Py_XDECREF(r)
- #define __Pyx_XGOTREF(r)
- #define __Pyx_XGIVEREF(r)
-#endif
-#define __Pyx_XDECREF_SET(r, v) do {\
- PyObject *tmp = (PyObject *) r;\
- r = v; __Pyx_XDECREF(tmp);\
- } while (0)
-#define __Pyx_DECREF_SET(r, v) do {\
- PyObject *tmp = (PyObject *) r;\
- r = v; __Pyx_DECREF(tmp);\
- } while (0)
-#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0)
-#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0)
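-/* Illustrative sketch (not generated code): when CYTHON_REFNANNY is enabled the
- * macros above log every INCREF/DECREF against a per-call context so reference
- * leaks can be traced; otherwise they collapse to the plain CPython calls. The
- * usual shape of a generated function body, shown hypothetically and guarded out:
- */
-#if 0
-static PyObject *__pyx_example_identity(PyObject *obj) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("example_identity", 0);
- __Pyx_INCREF(obj);
- __Pyx_RefNannyFinishContext();
- return obj;
-}
-#endif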
-
-/* PyObjectGetAttrStr.proto */
-#if CYTHON_USE_TYPE_SLOTS
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name);
-#else
-#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n)
-#endif
-
-/* GetBuiltinName.proto */
-static PyObject *__Pyx_GetBuiltinName(PyObject *name);
-
-/* MemviewSliceInit.proto */
-#define __Pyx_BUF_MAX_NDIMS 8
-#define __Pyx_MEMVIEW_DIRECT 1
-#define __Pyx_MEMVIEW_PTR 2
-#define __Pyx_MEMVIEW_FULL 4
-#define __Pyx_MEMVIEW_CONTIG 8
-#define __Pyx_MEMVIEW_STRIDED 16
-#define __Pyx_MEMVIEW_FOLLOW 32
-#define __Pyx_IS_C_CONTIG 1
-#define __Pyx_IS_F_CONTIG 2
-static int __Pyx_init_memviewslice(
- struct __pyx_memoryview_obj *memview,
- int ndim,
- __Pyx_memviewslice *memviewslice,
- int memview_is_new_reference);
-static CYTHON_INLINE int __pyx_add_acquisition_count_locked(
- __pyx_atomic_int *acquisition_count, PyThread_type_lock lock);
-static CYTHON_INLINE int __pyx_sub_acquisition_count_locked(
- __pyx_atomic_int *acquisition_count, PyThread_type_lock lock);
-#define __pyx_get_slice_count_pointer(memview) (memview->acquisition_count_aligned_p)
-#define __pyx_get_slice_count(memview) (*__pyx_get_slice_count_pointer(memview))
-#define __PYX_INC_MEMVIEW(slice, have_gil) __Pyx_INC_MEMVIEW(slice, have_gil, __LINE__)
-#define __PYX_XDEC_MEMVIEW(slice, have_gil) __Pyx_XDEC_MEMVIEW(slice, have_gil, __LINE__)
-static CYTHON_INLINE void __Pyx_INC_MEMVIEW(__Pyx_memviewslice *, int, int);
-static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *, int, int);
-
-/* RaiseArgTupleInvalid.proto */
-static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact,
- Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found);
-
-/* RaiseDoubleKeywords.proto */
-static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name);
-
-/* ParseKeywords.proto */
-static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\
- PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\
- const char* function_name);
-
-/* None.proto */
-static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname);
-
-/* ArgTypeTest.proto */
-#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\
- ((likely((Py_TYPE(obj) == type) | (none_allowed && (obj == Py_None)))) ? 1 :\
- __Pyx__ArgTypeTest(obj, type, name, exact))
-static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact);
-
-/* PyObjectCall.proto */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw);
-#else
-#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw)
-#endif
-
-/* PyThreadStateGet.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate;
-#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current;
-#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type
-#else
-#define __Pyx_PyThreadState_declare
-#define __Pyx_PyThreadState_assign
-#define __Pyx_PyErr_Occurred() PyErr_Occurred()
-#endif
-
-/* PyErrFetchRestore.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL)
-#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb)
-#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb)
-#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb)
-#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
-static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#if CYTHON_COMPILING_IN_CPYTHON
-#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL))
-#else
-#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
-#endif
-#else
-#define __Pyx_PyErr_Clear() PyErr_Clear()
-#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
-#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb)
-#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb)
-#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb)
-#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb)
-#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb)
-#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb)
-#endif
-
-/* RaiseException.proto */
-static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause);
-
-/* PyCFunctionFastCall.proto */
-#if CYTHON_FAST_PYCCALL
-static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs);
-#else
-#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL)
-#endif
-
-/* PyFunctionFastCall.proto */
-#if CYTHON_FAST_PYCALL
-#define __Pyx_PyFunction_FastCall(func, args, nargs)\
- __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL)
-#if 1 || PY_VERSION_HEX < 0x030600B1
-static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs);
-#else
-#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs)
-#endif
-#define __Pyx_BUILD_ASSERT_EXPR(cond)\
- (sizeof(char [1 - 2*!(cond)]) - 1)
-#ifndef Py_MEMBER_SIZE
-#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member)
-#endif
-#if CYTHON_FAST_PYCALL
- static size_t __pyx_pyframe_localsplus_offset = 0;
- #include "frameobject.h"
-#if PY_VERSION_HEX >= 0x030b00a6
- #ifndef Py_BUILD_CORE
- #define Py_BUILD_CORE 1
- #endif
- #include "internal/pycore_frame.h"
-#endif
- #define __Pxy_PyFrame_Initialize_Offsets()\
- ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\
- (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus)))
- #define __Pyx_PyFrame_GetLocalsplus(frame)\
- (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset))
-#endif // CYTHON_FAST_PYCALL
-#endif
-
-/* PyObjectCall2Args.proto */
-static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2);
-
-/* PyObjectCallMethO.proto */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg);
-#endif
-
-/* PyObjectCallOneArg.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg);
-
-/* IncludeStringH.proto */
-#include <string.h>
-
-/* BytesEquals.proto */
-static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals);
-
-/* UnicodeEquals.proto */
-static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals);
-
-/* StrEquals.proto */
-#if PY_MAJOR_VERSION >= 3
-#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals
-#else
-#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals
-#endif
-
-/* DivInt[Py_ssize_t].proto */
-static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t);
-
-/* UnaryNegOverflows.proto */
-#define UNARY_NEG_WOULD_OVERFLOW(x)\
- (((x) < 0) & ((unsigned long)(x) == 0-(unsigned long)(x)))
-
-static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
-static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *); /*proto*/
-/* GetAttr.proto */
-static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *);
-
-/* GetItemInt.proto */
-#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
- (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
- __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\
- (is_list ? (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\
- __Pyx_GetItemInt_Generic(o, to_py_func(i))))
-#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
- (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
- __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\
- (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL))
-static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i,
- int wraparound, int boundscheck);
-#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
- (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
- __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\
- (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL))
-static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i,
- int wraparound, int boundscheck);
-static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j);
-static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i,
- int is_list, int wraparound, int boundscheck);
-
-/* ObjectGetItem.proto */
-#if CYTHON_USE_TYPE_SLOTS
-static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key);
-#else
-#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key)
-#endif
-
-/* decode_c_string_utf16.proto */
-static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s, Py_ssize_t size, const char *errors) {
- int byteorder = 0;
- return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
-}
-static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char *s, Py_ssize_t size, const char *errors) {
- int byteorder = -1;
- return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
-}
-static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char *s, Py_ssize_t size, const char *errors) {
- int byteorder = 1;
- return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
-}
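-/* Illustrative sketch (not generated code): the three helpers above pin the
- * UTF-16 byte order through the byteorder argument of PyUnicode_DecodeUTF16
- * (0 = detect from BOM, -1 = little-endian, 1 = big-endian). A hypothetical,
- * guarded-out use of the little-endian variant:
- */
-#if 0
-static PyObject *__pyx_example_decode_utf16le(const char *buf, Py_ssize_t nbytes) {
- return __Pyx_PyUnicode_DecodeUTF16LE(buf, nbytes, "strict");
-}
-#endif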
-
-/* decode_c_string.proto */
-static CYTHON_INLINE PyObject* __Pyx_decode_c_string(
- const char* cstring, Py_ssize_t start, Py_ssize_t stop,
- const char* encoding, const char* errors,
- PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors));
-
-/* PyErrExceptionMatches.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err)
-static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err);
-#else
-#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err)
-#endif
-
-/* GetAttr3.proto */
-static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *, PyObject *, PyObject *);
-
-/* PyDictVersioning.proto */
-#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS
-#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1)
-#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
-#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\
- (version_var) = __PYX_GET_DICT_VERSION(dict);\
- (cache_var) = (value);
-#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\
- static PY_UINT64_T __pyx_dict_version = 0;\
- static PyObject *__pyx_dict_cached_value = NULL;\
- if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\
- (VAR) = __pyx_dict_cached_value;\
- } else {\
- (VAR) = __pyx_dict_cached_value = (LOOKUP);\
- __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\
- }\
-}
-static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj);
-static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj);
-static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version);
-#else
-#define __PYX_GET_DICT_VERSION(dict) (0)
-#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)
-#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP);
-#endif
-
-/* GetModuleGlobalName.proto */
-#if CYTHON_USE_DICT_VERSIONS
-#define __Pyx_GetModuleGlobalName(var, name) do {\
- static PY_UINT64_T __pyx_dict_version = 0;\
- static PyObject *__pyx_dict_cached_value = NULL;\
- (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\
- (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\
- __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\
-} while(0)
-#define __Pyx_GetModuleGlobalNameUncached(var, name) do {\
- PY_UINT64_T __pyx_dict_version;\
- PyObject *__pyx_dict_cached_value;\
- (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\
-} while(0)
-static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value);
-#else
-#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name)
-#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name)
-static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name);
-#endif
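-/* Illustrative sketch (not generated code): with dict versioning enabled,
- * __Pyx_GetModuleGlobalName caches the looked-up object and only re-reads the
- * module dict when CPython's ma_version_tag shows it has been mutated. A
- * hypothetical lookup of the builtin `range`, assuming the interned name object
- * __pyx_n_s_range (declared further down) is in scope; guarded out:
- */
-#if 0
-static PyObject *__pyx_example_lookup_range(void) {
- PyObject *fn;
- __Pyx_GetModuleGlobalName(fn, __pyx_n_s_range); /* yields a new reference or NULL */
- return fn;
-}
-#endif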
-
-/* RaiseTooManyValuesToUnpack.proto */
-static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected);
-
-/* RaiseNeedMoreValuesToUnpack.proto */
-static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index);
-
-/* RaiseNoneIterError.proto */
-static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void);
-
-/* ExtTypeTest.proto */
-static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type);
-
-/* GetTopmostException.proto */
-#if CYTHON_USE_EXC_INFO_STACK
-static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate);
-#endif
-
-/* SaveResetException.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
-#else
-#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb)
-#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb)
-#endif
-
-/* GetException.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb)
-static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#else
-static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb);
-#endif
-
-/* SwapException.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#else
-static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb);
-#endif
-
-/* Import.proto */
-static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level);
-
-/* FastTypeChecks.proto */
-#if CYTHON_COMPILING_IN_CPYTHON
-#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type)
-static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b);
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type);
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2);
-#else
-#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)
-#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type)
-#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2))
-#endif
-#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)
-
-static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
-/* ListCompAppend.proto */
-#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS
-static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) {
- PyListObject* L = (PyListObject*) list;
- Py_ssize_t len = Py_SIZE(list);
- if (likely(L->allocated > len)) {
- Py_INCREF(x);
- PyList_SET_ITEM(list, len, x);
- __Pyx_SET_SIZE(list, len + 1);
- return 0;
- }
- return PyList_Append(list, x);
-}
-#else
-#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x)
-#endif
-
-/* PyIntBinop.proto */
-#if !CYTHON_COMPILING_IN_PYPY
-static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check);
-#else
-#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\
- (inplace ? PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2))
-#endif
-
-/* ListExtend.proto */
-static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) {
-#if CYTHON_COMPILING_IN_CPYTHON
- PyObject* none = _PyList_Extend((PyListObject*)L, v);
- if (unlikely(!none))
- return -1;
- Py_DECREF(none);
- return 0;
-#else
- return PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v);
-#endif
-}
-
-/* ListAppend.proto */
-#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS
-static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) {
- PyListObject* L = (PyListObject*) list;
- Py_ssize_t len = Py_SIZE(list);
- if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) {
- Py_INCREF(x);
- PyList_SET_ITEM(list, len, x);
- __Pyx_SET_SIZE(list, len + 1);
- return 0;
- }
- return PyList_Append(list, x);
-}
-#else
-#define __Pyx_PyList_Append(L,x) PyList_Append(L,x)
-#endif
-
-/* DivInt[long].proto */
-static CYTHON_INLINE long __Pyx_div_long(long, long);
-
-/* PySequenceContains.proto */
-static CYTHON_INLINE int __Pyx_PySequence_ContainsTF(PyObject* item, PyObject* seq, int eq) {
- int result = PySequence_Contains(seq, item);
- return unlikely(result < 0) ? result : (result == (eq == Py_EQ));
-}
-
-/* ImportFrom.proto */
-static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name);
-
-/* HasAttr.proto */
-static CYTHON_INLINE int __Pyx_HasAttr(PyObject *, PyObject *);
-
-/* PyObject_GenericGetAttrNoDict.proto */
-#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name);
-#else
-#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr
-#endif
-
-/* PyObject_GenericGetAttr.proto */
-#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
-static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name);
-#else
-#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr
-#endif
-
-/* SetVTable.proto */
-static int __Pyx_SetVtable(PyObject *dict, void *vtable);
-
-/* PyObjectGetAttrStrNoError.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name);
-
-/* SetupReduce.proto */
-static int __Pyx_setup_reduce(PyObject* type_obj);
-
-/* CLineInTraceback.proto */
-#ifdef CYTHON_CLINE_IN_TRACEBACK
-#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0)
-#else
-static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line);
-#endif
-
-/* CodeObjectCache.proto */
-typedef struct {
- PyCodeObject* code_object;
- int code_line;
-} __Pyx_CodeObjectCacheEntry;
-struct __Pyx_CodeObjectCache {
- int count;
- int max_count;
- __Pyx_CodeObjectCacheEntry* entries;
-};
-static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL};
-static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line);
-static PyCodeObject *__pyx_find_code_object(int code_line);
-static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object);
-
-/* AddTraceback.proto */
-static void __Pyx_AddTraceback(const char *funcname, int c_line,
- int py_line, const char *filename);
-
-#if PY_MAJOR_VERSION < 3
- static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags);
- static void __Pyx_ReleaseBuffer(Py_buffer *view);
-#else
- #define __Pyx_GetBuffer PyObject_GetBuffer
- #define __Pyx_ReleaseBuffer PyBuffer_Release
-#endif
-
-
-/* BufferStructDeclare.proto */
-typedef struct {
- Py_ssize_t shape, strides, suboffsets;
-} __Pyx_Buf_DimInfo;
-typedef struct {
- size_t refcount;
- Py_buffer pybuffer;
-} __Pyx_Buffer;
-typedef struct {
- __Pyx_Buffer *rcbuffer;
- char *data;
- __Pyx_Buf_DimInfo diminfo[8];
-} __Pyx_LocalBuf_ND;
-
-/* MemviewSliceIsContig.proto */
-static int __pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim);
-
-/* OverlappingSlices.proto */
-static int __pyx_slices_overlap(__Pyx_memviewslice *slice1,
- __Pyx_memviewslice *slice2,
- int ndim, size_t itemsize);
-
-/* Capsule.proto */
-static CYTHON_INLINE PyObject *__pyx_capsule_create(void *p, const char *sig);
-
-/* IsLittleEndian.proto */
-static CYTHON_INLINE int __Pyx_Is_Little_Endian(void);
-
-/* BufferFormatCheck.proto */
-static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts);
-static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,
- __Pyx_BufFmt_StackElem* stack,
- __Pyx_TypeInfo* type);
-
-/* TypeInfoCompare.proto */
-static int __pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b);
-
-/* MemviewSliceValidateAndInit.proto */
-static int __Pyx_ValidateAndInit_memviewslice(
- int *axes_specs,
- int c_or_f_flag,
- int buf_flags,
- int ndim,
- __Pyx_TypeInfo *dtype,
- __Pyx_BufFmt_StackElem stack[],
- __Pyx_memviewslice *memviewslice,
- PyObject *original_obj);
-
-/* ObjectToMemviewSlice.proto */
-static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *, int writable_flag);
-
-/* ObjectToMemviewSlice.proto */
-static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *, int writable_flag);
-
-/* ObjectToMemviewSlice.proto */
-static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *, int writable_flag);
-
-/* GCCDiagnostics.proto */
-#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6))
-#define __Pyx_HAS_GCC_DIAGNOSTIC
-#endif
-
-/* MemviewSliceCopyTemplate.proto */
-static __Pyx_memviewslice
-__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs,
- const char *mode, int ndim,
- size_t sizeof_dtype, int contig_flag,
- int dtype_is_object);
-
-/* CIntToPy.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value);
-
-/* CIntFromPy.proto */
-static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *);
-
-/* CIntToPy.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value);
-
-/* CIntFromPy.proto */
-static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *);
-
-/* CIntFromPy.proto */
-static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *);
-
-/* CheckBinaryVersion.proto */
-static int __Pyx_check_binary_version(void);
-
-/* InitStrings.proto */
-static int __Pyx_InitStrings(__Pyx_StringTabEntry *t);
-
-static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self); /* proto*/
-static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto*/
-static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj); /* proto*/
-static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src); /* proto*/
-static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value); /* proto*/
-static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto*/
-static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/
-static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/
-static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/
-static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/
-
-/* Module declarations from 'cython.view' */
-
-/* Module declarations from 'cython' */
-
-/* Module declarations from 'monotonic_align.core' */
-static PyTypeObject *__pyx_array_type = 0;
-static PyTypeObject *__pyx_MemviewEnum_type = 0;
-static PyTypeObject *__pyx_memoryview_type = 0;
-static PyTypeObject *__pyx_memoryviewslice_type = 0;
-static PyObject *generic = 0;
-static PyObject *strided = 0;
-static PyObject *indirect = 0;
-static PyObject *contiguous = 0;
-static PyObject *indirect_contiguous = 0;
-static int __pyx_memoryview_thread_locks_used;
-static PyThread_type_lock __pyx_memoryview_thread_locks[8];
-static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice, __Pyx_memviewslice, int, int, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args); /*proto*/
-static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, int __pyx_skip_dispatch); /*proto*/
-static struct __pyx_array_obj *__pyx_array_new(PyObject *, Py_ssize_t, char *, char *, char *); /*proto*/
-static void *__pyx_align_pointer(void *, size_t); /*proto*/
-static PyObject *__pyx_memoryview_new(PyObject *, int, int, __Pyx_TypeInfo *); /*proto*/
-static CYTHON_INLINE int __pyx_memoryview_check(PyObject *); /*proto*/
-static PyObject *_unellipsify(PyObject *, int); /*proto*/
-static PyObject *assert_direct_dimensions(Py_ssize_t *, int); /*proto*/
-static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *, PyObject *); /*proto*/
-static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int, int); /*proto*/
-static char *__pyx_pybuffer_index(Py_buffer *, char *, Py_ssize_t, Py_ssize_t); /*proto*/
-static int __pyx_memslice_transpose(__Pyx_memviewslice *); /*proto*/
-static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice, int, PyObject *(*)(char *), int (*)(char *, PyObject *), int); /*proto*/
-static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/
-static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/
-static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *); /*proto*/
-static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/
-static Py_ssize_t abs_py_ssize_t(Py_ssize_t); /*proto*/
-static char __pyx_get_best_slice_order(__Pyx_memviewslice *, int); /*proto*/
-static void _copy_strided_to_strided(char *, Py_ssize_t *, char *, Py_ssize_t *, Py_ssize_t *, Py_ssize_t *, int, size_t); /*proto*/
-static void copy_strided_to_strided(__Pyx_memviewslice *, __Pyx_memviewslice *, int, size_t); /*proto*/
-static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *, int); /*proto*/
-static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *, Py_ssize_t *, Py_ssize_t, int, char); /*proto*/
-static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *, __Pyx_memviewslice *, char, int); /*proto*/
-static int __pyx_memoryview_err_extents(int, Py_ssize_t, Py_ssize_t); /*proto*/
-static int __pyx_memoryview_err_dim(PyObject *, char *, int); /*proto*/
-static int __pyx_memoryview_err(PyObject *, char *); /*proto*/
-static int __pyx_memoryview_copy_contents(__Pyx_memviewslice, __Pyx_memviewslice, int, int, int); /*proto*/
-static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *, int, int); /*proto*/
-static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *, int, int, int); /*proto*/
-static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/
-static void __pyx_memoryview_refcount_objects_in_slice(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/
-static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *, int, size_t, void *, int); /*proto*/
-static void __pyx_memoryview__slice_assign_scalar(char *, Py_ssize_t *, Py_ssize_t *, int, size_t, void *); /*proto*/
-static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *, PyObject *); /*proto*/
-static __Pyx_TypeInfo __Pyx_TypeInfo_int = { "int", NULL, sizeof(int), { 0 }, 0, IS_UNSIGNED(int) ? 'U' : 'I', IS_UNSIGNED(int), 0 };
-static __Pyx_TypeInfo __Pyx_TypeInfo_float = { "float", NULL, sizeof(float), { 0 }, 0, 'R', 0, 0 };
-#define __Pyx_MODULE_NAME "monotonic_align.core"
-extern int __pyx_module_is_main_monotonic_align__core;
-int __pyx_module_is_main_monotonic_align__core = 0;
-
-/* Implementation of 'monotonic_align.core' */
-static PyObject *__pyx_builtin_range;
-static PyObject *__pyx_builtin_ValueError;
-static PyObject *__pyx_builtin_MemoryError;
-static PyObject *__pyx_builtin_enumerate;
-static PyObject *__pyx_builtin_TypeError;
-static PyObject *__pyx_builtin_Ellipsis;
-static PyObject *__pyx_builtin_id;
-static PyObject *__pyx_builtin_IndexError;
-static const char __pyx_k_O[] = "O";
-static const char __pyx_k_c[] = "c";
-static const char __pyx_k_id[] = "id";
-static const char __pyx_k_new[] = "__new__";
-static const char __pyx_k_obj[] = "obj";
-static const char __pyx_k_base[] = "base";
-static const char __pyx_k_dict[] = "__dict__";
-static const char __pyx_k_main[] = "__main__";
-static const char __pyx_k_mode[] = "mode";
-static const char __pyx_k_name[] = "name";
-static const char __pyx_k_ndim[] = "ndim";
-static const char __pyx_k_pack[] = "pack";
-static const char __pyx_k_size[] = "size";
-static const char __pyx_k_step[] = "step";
-static const char __pyx_k_stop[] = "stop";
-static const char __pyx_k_t_xs[] = "t_xs";
-static const char __pyx_k_t_ys[] = "t_ys";
-static const char __pyx_k_test[] = "__test__";
-static const char __pyx_k_ASCII[] = "ASCII";
-static const char __pyx_k_class[] = "__class__";
-static const char __pyx_k_error[] = "error";
-static const char __pyx_k_flags[] = "flags";
-static const char __pyx_k_paths[] = "paths";
-static const char __pyx_k_range[] = "range";
-static const char __pyx_k_shape[] = "shape";
-static const char __pyx_k_start[] = "start";
-static const char __pyx_k_encode[] = "encode";
-static const char __pyx_k_format[] = "format";
-static const char __pyx_k_import[] = "__import__";
-static const char __pyx_k_name_2[] = "__name__";
-static const char __pyx_k_pickle[] = "pickle";
-static const char __pyx_k_reduce[] = "__reduce__";
-static const char __pyx_k_struct[] = "struct";
-static const char __pyx_k_unpack[] = "unpack";
-static const char __pyx_k_update[] = "update";
-static const char __pyx_k_values[] = "values";
-static const char __pyx_k_fortran[] = "fortran";
-static const char __pyx_k_memview[] = "memview";
-static const char __pyx_k_Ellipsis[] = "Ellipsis";
-static const char __pyx_k_getstate[] = "__getstate__";
-static const char __pyx_k_itemsize[] = "itemsize";
-static const char __pyx_k_pyx_type[] = "__pyx_type";
-static const char __pyx_k_setstate[] = "__setstate__";
-static const char __pyx_k_TypeError[] = "TypeError";
-static const char __pyx_k_enumerate[] = "enumerate";
-static const char __pyx_k_pyx_state[] = "__pyx_state";
-static const char __pyx_k_reduce_ex[] = "__reduce_ex__";
-static const char __pyx_k_IndexError[] = "IndexError";
-static const char __pyx_k_ValueError[] = "ValueError";
-static const char __pyx_k_pyx_result[] = "__pyx_result";
-static const char __pyx_k_pyx_vtable[] = "__pyx_vtable__";
-static const char __pyx_k_MemoryError[] = "MemoryError";
-static const char __pyx_k_PickleError[] = "PickleError";
-static const char __pyx_k_pyx_checksum[] = "__pyx_checksum";
-static const char __pyx_k_stringsource[] = "stringsource";
-static const char __pyx_k_pyx_getbuffer[] = "__pyx_getbuffer";
-static const char __pyx_k_reduce_cython[] = "__reduce_cython__";
-static const char __pyx_k_View_MemoryView[] = "View.MemoryView";
-static const char __pyx_k_allocate_buffer[] = "allocate_buffer";
-static const char __pyx_k_dtype_is_object[] = "dtype_is_object";
-static const char __pyx_k_pyx_PickleError[] = "__pyx_PickleError";
-static const char __pyx_k_setstate_cython[] = "__setstate_cython__";
-static const char __pyx_k_pyx_unpickle_Enum[] = "__pyx_unpickle_Enum";
-static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback";
-static const char __pyx_k_strided_and_direct[] = "<strided and direct>";
-static const char __pyx_k_strided_and_indirect[] = "<strided and indirect>";
-static const char __pyx_k_contiguous_and_direct[] = "<contiguous and direct>";
-static const char __pyx_k_MemoryView_of_r_object[] = "<MemoryView of %r object>";
-static const char __pyx_k_MemoryView_of_r_at_0x_x[] = "<MemoryView of %r at 0x%x>";
-static const char __pyx_k_contiguous_and_indirect[] = "<contiguous and indirect>";
-static const char __pyx_k_Cannot_index_with_type_s[] = "Cannot index with type '%s'";
-static const char __pyx_k_Invalid_shape_in_axis_d_d[] = "Invalid shape in axis %d: %d.";
-static const char __pyx_k_itemsize_0_for_cython_array[] = "itemsize <= 0 for cython.array";
-static const char __pyx_k_unable_to_allocate_array_data[] = "unable to allocate array data.";
-static const char __pyx_k_strided_and_direct_or_indirect[] = "<strided and direct or indirect>";
-static const char __pyx_k_Buffer_view_does_not_expose_stri[] = "Buffer view does not expose strides";
-static const char __pyx_k_Can_only_create_a_buffer_that_is[] = "Can only create a buffer that is contiguous in memory.";
-static const char __pyx_k_Cannot_assign_to_read_only_memor[] = "Cannot assign to read-only memoryview";
-static const char __pyx_k_Cannot_create_writable_memory_vi[] = "Cannot create writable memory view from read-only memoryview";
-static const char __pyx_k_Empty_shape_tuple_for_cython_arr[] = "Empty shape tuple for cython.array";
-static const char __pyx_k_Incompatible_checksums_0x_x_vs_0[] = "Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))";
-static const char __pyx_k_Indirect_dimensions_not_supporte[] = "Indirect dimensions not supported";
-static const char __pyx_k_Invalid_mode_expected_c_or_fortr[] = "Invalid mode, expected 'c' or 'fortran', got %s";
-static const char __pyx_k_Out_of_bounds_on_buffer_access_a[] = "Out of bounds on buffer access (axis %d)";
-static const char __pyx_k_Unable_to_convert_item_to_object[] = "Unable to convert item to object";
-static const char __pyx_k_got_differing_extents_in_dimensi[] = "got differing extents in dimension %d (got %d and %d)";
-static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__";
-static const char __pyx_k_unable_to_allocate_shape_and_str[] = "unable to allocate shape and strides.";
-static PyObject *__pyx_n_s_ASCII;
-static PyObject *__pyx_kp_s_Buffer_view_does_not_expose_stri;
-static PyObject *__pyx_kp_s_Can_only_create_a_buffer_that_is;
-static PyObject *__pyx_kp_s_Cannot_assign_to_read_only_memor;
-static PyObject *__pyx_kp_s_Cannot_create_writable_memory_vi;
-static PyObject *__pyx_kp_s_Cannot_index_with_type_s;
-static PyObject *__pyx_n_s_Ellipsis;
-static PyObject *__pyx_kp_s_Empty_shape_tuple_for_cython_arr;
-static PyObject *__pyx_kp_s_Incompatible_checksums_0x_x_vs_0;
-static PyObject *__pyx_n_s_IndexError;
-static PyObject *__pyx_kp_s_Indirect_dimensions_not_supporte;
-static PyObject *__pyx_kp_s_Invalid_mode_expected_c_or_fortr;
-static PyObject *__pyx_kp_s_Invalid_shape_in_axis_d_d;
-static PyObject *__pyx_n_s_MemoryError;
-static PyObject *__pyx_kp_s_MemoryView_of_r_at_0x_x;
-static PyObject *__pyx_kp_s_MemoryView_of_r_object;
-static PyObject *__pyx_n_b_O;
-static PyObject *__pyx_kp_s_Out_of_bounds_on_buffer_access_a;
-static PyObject *__pyx_n_s_PickleError;
-static PyObject *__pyx_n_s_TypeError;
-static PyObject *__pyx_kp_s_Unable_to_convert_item_to_object;
-static PyObject *__pyx_n_s_ValueError;
-static PyObject *__pyx_n_s_View_MemoryView;
-static PyObject *__pyx_n_s_allocate_buffer;
-static PyObject *__pyx_n_s_base;
-static PyObject *__pyx_n_s_c;
-static PyObject *__pyx_n_u_c;
-static PyObject *__pyx_n_s_class;
-static PyObject *__pyx_n_s_cline_in_traceback;
-static PyObject *__pyx_kp_s_contiguous_and_direct;
-static PyObject *__pyx_kp_s_contiguous_and_indirect;
-static PyObject *__pyx_n_s_dict;
-static PyObject *__pyx_n_s_dtype_is_object;
-static PyObject *__pyx_n_s_encode;
-static PyObject *__pyx_n_s_enumerate;
-static PyObject *__pyx_n_s_error;
-static PyObject *__pyx_n_s_flags;
-static PyObject *__pyx_n_s_format;
-static PyObject *__pyx_n_s_fortran;
-static PyObject *__pyx_n_u_fortran;
-static PyObject *__pyx_n_s_getstate;
-static PyObject *__pyx_kp_s_got_differing_extents_in_dimensi;
-static PyObject *__pyx_n_s_id;
-static PyObject *__pyx_n_s_import;
-static PyObject *__pyx_n_s_itemsize;
-static PyObject *__pyx_kp_s_itemsize_0_for_cython_array;
-static PyObject *__pyx_n_s_main;
-static PyObject *__pyx_n_s_memview;
-static PyObject *__pyx_n_s_mode;
-static PyObject *__pyx_n_s_name;
-static PyObject *__pyx_n_s_name_2;
-static PyObject *__pyx_n_s_ndim;
-static PyObject *__pyx_n_s_new;
-static PyObject *__pyx_kp_s_no_default___reduce___due_to_non;
-static PyObject *__pyx_n_s_obj;
-static PyObject *__pyx_n_s_pack;
-static PyObject *__pyx_n_s_paths;
-static PyObject *__pyx_n_s_pickle;
-static PyObject *__pyx_n_s_pyx_PickleError;
-static PyObject *__pyx_n_s_pyx_checksum;
-static PyObject *__pyx_n_s_pyx_getbuffer;
-static PyObject *__pyx_n_s_pyx_result;
-static PyObject *__pyx_n_s_pyx_state;
-static PyObject *__pyx_n_s_pyx_type;
-static PyObject *__pyx_n_s_pyx_unpickle_Enum;
-static PyObject *__pyx_n_s_pyx_vtable;
-static PyObject *__pyx_n_s_range;
-static PyObject *__pyx_n_s_reduce;
-static PyObject *__pyx_n_s_reduce_cython;
-static PyObject *__pyx_n_s_reduce_ex;
-static PyObject *__pyx_n_s_setstate;
-static PyObject *__pyx_n_s_setstate_cython;
-static PyObject *__pyx_n_s_shape;
-static PyObject *__pyx_n_s_size;
-static PyObject *__pyx_n_s_start;
-static PyObject *__pyx_n_s_step;
-static PyObject *__pyx_n_s_stop;
-static PyObject *__pyx_kp_s_strided_and_direct;
-static PyObject *__pyx_kp_s_strided_and_direct_or_indirect;
-static PyObject *__pyx_kp_s_strided_and_indirect;
-static PyObject *__pyx_kp_s_stringsource;
-static PyObject *__pyx_n_s_struct;
-static PyObject *__pyx_n_s_t_xs;
-static PyObject *__pyx_n_s_t_ys;
-static PyObject *__pyx_n_s_test;
-static PyObject *__pyx_kp_s_unable_to_allocate_array_data;
-static PyObject *__pyx_kp_s_unable_to_allocate_shape_and_str;
-static PyObject *__pyx_n_s_unpack;
-static PyObject *__pyx_n_s_update;
-static PyObject *__pyx_n_s_values;
-static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs); /* proto */
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer); /* proto */
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */
-static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self); /* proto */
-static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr); /* proto */
-static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item); /* proto */
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /* proto */
-static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */
-static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name); /* proto */
-static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state); /* proto */
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object); /* proto */
-static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto */
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto */
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */
-static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */
-static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state); /* proto */
-static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
-static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
-static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
-static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
-static PyObject *__pyx_int_0;
-static PyObject *__pyx_int_1;
-static PyObject *__pyx_int_112105877;
-static PyObject *__pyx_int_136983863;
-static PyObject *__pyx_int_184977713;
-static PyObject *__pyx_int_neg_1;
-static float __pyx_k_;
-static PyObject *__pyx_tuple__2;
-static PyObject *__pyx_tuple__3;
-static PyObject *__pyx_tuple__4;
-static PyObject *__pyx_tuple__5;
-static PyObject *__pyx_tuple__6;
-static PyObject *__pyx_tuple__7;
-static PyObject *__pyx_tuple__8;
-static PyObject *__pyx_tuple__9;
-static PyObject *__pyx_slice__16;
-static PyObject *__pyx_tuple__10;
-static PyObject *__pyx_tuple__11;
-static PyObject *__pyx_tuple__12;
-static PyObject *__pyx_tuple__13;
-static PyObject *__pyx_tuple__14;
-static PyObject *__pyx_tuple__15;
-static PyObject *__pyx_tuple__17;
-static PyObject *__pyx_tuple__18;
-static PyObject *__pyx_tuple__19;
-static PyObject *__pyx_tuple__20;
-static PyObject *__pyx_tuple__21;
-static PyObject *__pyx_tuple__22;
-static PyObject *__pyx_tuple__23;
-static PyObject *__pyx_tuple__24;
-static PyObject *__pyx_tuple__25;
-static PyObject *__pyx_tuple__26;
-static PyObject *__pyx_codeobj__27;
-/* Late includes */
-
-/* "monotonic_align/core.pyx":7
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<<
- * cdef int x
- * cdef int y
- */
-
-static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice __pyx_v_path, __Pyx_memviewslice __pyx_v_value, int __pyx_v_t_y, int __pyx_v_t_x, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args) {
- float __pyx_v_max_neg_val = __pyx_k_;
- int __pyx_v_x;
- int __pyx_v_y;
- float __pyx_v_v_prev;
- float __pyx_v_v_cur;
- int __pyx_v_index;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- long __pyx_t_4;
- int __pyx_t_5;
- long __pyx_t_6;
- long __pyx_t_7;
- int __pyx_t_8;
- Py_ssize_t __pyx_t_9;
- Py_ssize_t __pyx_t_10;
- float __pyx_t_11;
- float __pyx_t_12;
- float __pyx_t_13;
- int __pyx_t_14;
- Py_ssize_t __pyx_t_15;
- Py_ssize_t __pyx_t_16;
- if (__pyx_optional_args) {
- if (__pyx_optional_args->__pyx_n > 0) {
- __pyx_v_max_neg_val = __pyx_optional_args->max_neg_val;
- }
- }
-
- /* "monotonic_align/core.pyx":13
- * cdef float v_cur
- * cdef float tmp
- * cdef int index = t_x - 1 # <<<<<<<<<<<<<<
- *
- * for y in range(t_y):
- */
- __pyx_v_index = (__pyx_v_t_x - 1);
-
- /* "monotonic_align/core.pyx":15
- * cdef int index = t_x - 1
- *
- * for y in range(t_y): # <<<<<<<<<<<<<<
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- * if x == y:
- */
- __pyx_t_1 = __pyx_v_t_y;
- __pyx_t_2 = __pyx_t_1;
- for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) {
- __pyx_v_y = __pyx_t_3;
-
- /* "monotonic_align/core.pyx":16
- *
- * for y in range(t_y):
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): # <<<<<<<<<<<<<<
- * if x == y:
- * v_cur = max_neg_val
- */
- __pyx_t_4 = (__pyx_v_y + 1);
- __pyx_t_5 = __pyx_v_t_x;
- if (((__pyx_t_4 < __pyx_t_5) != 0)) {
- __pyx_t_6 = __pyx_t_4;
- } else {
- __pyx_t_6 = __pyx_t_5;
- }
- __pyx_t_4 = __pyx_t_6;
- __pyx_t_5 = ((__pyx_v_t_x + __pyx_v_y) - __pyx_v_t_y);
- __pyx_t_6 = 0;
- if (((__pyx_t_5 > __pyx_t_6) != 0)) {
- __pyx_t_7 = __pyx_t_5;
- } else {
- __pyx_t_7 = __pyx_t_6;
- }
- __pyx_t_6 = __pyx_t_4;
- for (__pyx_t_5 = __pyx_t_7; __pyx_t_5 < __pyx_t_6; __pyx_t_5+=1) {
- __pyx_v_x = __pyx_t_5;
-
- /* "monotonic_align/core.pyx":17
- * for y in range(t_y):
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- * if x == y: # <<<<<<<<<<<<<<
- * v_cur = max_neg_val
- * else:
- */
- __pyx_t_8 = ((__pyx_v_x == __pyx_v_y) != 0);
- if (__pyx_t_8) {
-
- /* "monotonic_align/core.pyx":18
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- * if x == y:
- * v_cur = max_neg_val # <<<<<<<<<<<<<<
- * else:
- * v_cur = value[y-1, x]
- */
- __pyx_v_v_cur = __pyx_v_max_neg_val;
-
- /* "monotonic_align/core.pyx":17
- * for y in range(t_y):
- * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- * if x == y: # <<<<<<<<<<<<<<
- * v_cur = max_neg_val
- * else:
- */
- goto __pyx_L7;
- }
-
- /* "monotonic_align/core.pyx":20
- * v_cur = max_neg_val
- * else:
- * v_cur = value[y-1, x] # <<<<<<<<<<<<<<
- * if x == 0:
- * if y == 0:
- */
- /*else*/ {
- __pyx_t_9 = (__pyx_v_y - 1);
- __pyx_t_10 = __pyx_v_x;
- __pyx_v_v_cur = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) )));
- }
- __pyx_L7:;
-
- /* "monotonic_align/core.pyx":21
- * else:
- * v_cur = value[y-1, x]
- * if x == 0: # <<<<<<<<<<<<<<
- * if y == 0:
- * v_prev = 0.
- */
- __pyx_t_8 = ((__pyx_v_x == 0) != 0);
- if (__pyx_t_8) {
-
- /* "monotonic_align/core.pyx":22
- * v_cur = value[y-1, x]
- * if x == 0:
- * if y == 0: # <<<<<<<<<<<<<<
- * v_prev = 0.
- * else:
- */
- __pyx_t_8 = ((__pyx_v_y == 0) != 0);
- if (__pyx_t_8) {
-
- /* "monotonic_align/core.pyx":23
- * if x == 0:
- * if y == 0:
- * v_prev = 0. # <<<<<<<<<<<<<<
- * else:
- * v_prev = max_neg_val
- */
- __pyx_v_v_prev = 0.;
-
- /* "monotonic_align/core.pyx":22
- * v_cur = value[y-1, x]
- * if x == 0:
- * if y == 0: # <<<<<<<<<<<<<<
- * v_prev = 0.
- * else:
- */
- goto __pyx_L9;
- }
-
- /* "monotonic_align/core.pyx":25
- * v_prev = 0.
- * else:
- * v_prev = max_neg_val # <<<<<<<<<<<<<<
- * else:
- * v_prev = value[y-1, x-1]
- */
- /*else*/ {
- __pyx_v_v_prev = __pyx_v_max_neg_val;
- }
- __pyx_L9:;
-
- /* "monotonic_align/core.pyx":21
- * else:
- * v_cur = value[y-1, x]
- * if x == 0: # <<<<<<<<<<<<<<
- * if y == 0:
- * v_prev = 0.
- */
- goto __pyx_L8;
- }
-
- /* "monotonic_align/core.pyx":27
- * v_prev = max_neg_val
- * else:
- * v_prev = value[y-1, x-1] # <<<<<<<<<<<<<<
- * value[y, x] += max(v_prev, v_cur)
- *
- */
- /*else*/ {
- __pyx_t_10 = (__pyx_v_y - 1);
- __pyx_t_9 = (__pyx_v_x - 1);
- __pyx_v_v_prev = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_10 * __pyx_v_value.strides[0]) )) + __pyx_t_9)) )));
- }
- __pyx_L8:;
-
- /* "monotonic_align/core.pyx":28
- * else:
- * v_prev = value[y-1, x-1]
- * value[y, x] += max(v_prev, v_cur) # <<<<<<<<<<<<<<
- *
- * for y in range(t_y - 1, -1, -1):
- */
- __pyx_t_11 = __pyx_v_v_cur;
- __pyx_t_12 = __pyx_v_v_prev;
- if (((__pyx_t_11 > __pyx_t_12) != 0)) {
- __pyx_t_13 = __pyx_t_11;
- } else {
- __pyx_t_13 = __pyx_t_12;
- }
- __pyx_t_9 = __pyx_v_y;
- __pyx_t_10 = __pyx_v_x;
- *((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) )) += __pyx_t_13;
- }
- }
-
- /* "monotonic_align/core.pyx":30
- * value[y, x] += max(v_prev, v_cur)
- *
- * for y in range(t_y - 1, -1, -1): # <<<<<<<<<<<<<<
- * path[y, index] = 1
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
- */
- for (__pyx_t_1 = (__pyx_v_t_y - 1); __pyx_t_1 > -1; __pyx_t_1-=1) {
- __pyx_v_y = __pyx_t_1;
-
- /* "monotonic_align/core.pyx":31
- *
- * for y in range(t_y - 1, -1, -1):
- * path[y, index] = 1 # <<<<<<<<<<<<<<
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
- * index = index - 1
- */
- __pyx_t_10 = __pyx_v_y;
- __pyx_t_9 = __pyx_v_index;
- *((int *) ( /* dim=1 */ ((char *) (((int *) ( /* dim=0 */ (__pyx_v_path.data + __pyx_t_10 * __pyx_v_path.strides[0]) )) + __pyx_t_9)) )) = 1;
-
- /* "monotonic_align/core.pyx":32
- * for y in range(t_y - 1, -1, -1):
- * path[y, index] = 1
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<<
- * index = index - 1
- *
- */
- __pyx_t_14 = ((__pyx_v_index != 0) != 0);
- if (__pyx_t_14) {
- } else {
- __pyx_t_8 = __pyx_t_14;
- goto __pyx_L13_bool_binop_done;
- }
- __pyx_t_14 = ((__pyx_v_index == __pyx_v_y) != 0);
- if (!__pyx_t_14) {
- } else {
- __pyx_t_8 = __pyx_t_14;
- goto __pyx_L13_bool_binop_done;
- }
- __pyx_t_9 = (__pyx_v_y - 1);
- __pyx_t_10 = __pyx_v_index;
- __pyx_t_15 = (__pyx_v_y - 1);
- __pyx_t_16 = (__pyx_v_index - 1);
- __pyx_t_14 = (((*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))) < (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_15 * __pyx_v_value.strides[0]) )) + __pyx_t_16)) )))) != 0);
- __pyx_t_8 = __pyx_t_14;
- __pyx_L13_bool_binop_done:;
- if (__pyx_t_8) {
-
- /* "monotonic_align/core.pyx":33
- * path[y, index] = 1
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
- * index = index - 1 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_index = (__pyx_v_index - 1);
-
- /* "monotonic_align/core.pyx":32
- * for y in range(t_y - 1, -1, -1):
- * path[y, index] = 1
- * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<<
- * index = index - 1
- *
- */
- }
- }
-
- /* "monotonic_align/core.pyx":7
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<<
- * cdef int x
- * cdef int y
- */
-
- /* function exit code */
-}
-
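For readability, here is the `maximum_path_each` routine reassembled from the `monotonic_align/core.pyx` snippets echoed in the comments above (the `cdef float v_prev` declaration is inferred from the C locals; everything else appears verbatim in the echoed source). The first double loop accumulates the best cumulative score into `value`, and the reverse loop backtracks the arg-max decision into `path`; this is the dynamic program behind monotonic alignment search.

```cython
@cython.boundscheck(False)
@cython.wraparound(False)
cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil:
  cdef int x
  cdef int y
  cdef float v_prev
  cdef float v_cur
  cdef float tmp
  cdef int index = t_x - 1

  # Forward pass: value[y, x] becomes the best cumulative score of any
  # monotonic path ending at cell (y, x).
  for y in range(t_y):
    for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
      if x == y:
        v_cur = max_neg_val
      else:
        v_cur = value[y-1, x]
      if x == 0:
        if y == 0:
          v_prev = 0.
        else:
          v_prev = max_neg_val
      else:
        v_prev = value[y-1, x-1]
      value[y, x] += max(v_prev, v_cur)

  # Backward pass: walk from the last row up to the first, marking the chosen
  # column in `path` and stepping left whenever the diagonal scored higher.
  for y in range(t_y - 1, -1, -1):
    path[y, index] = 1
    if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
      index = index - 1
```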
-/* "monotonic_align/core.pyx":38
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<<
- * cdef int b = paths.shape[0]
- * cdef int i
- */
-
-static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs, CYTHON_UNUSED int __pyx_skip_dispatch) {
- CYTHON_UNUSED int __pyx_v_b;
- int __pyx_v_i;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- __Pyx_memviewslice __pyx_t_4 = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_memviewslice __pyx_t_5 = { 0, 0, { 0 }, { 0 }, { 0 } };
- Py_ssize_t __pyx_t_6;
- Py_ssize_t __pyx_t_7;
-
- /* "monotonic_align/core.pyx":39
- * @cython.wraparound(False)
- * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil:
- * cdef int b = paths.shape[0] # <<<<<<<<<<<<<<
- * cdef int i
- * for i in prange(b, nogil=True):
- */
- __pyx_v_b = (__pyx_v_paths.shape[0]);
-
- /* "monotonic_align/core.pyx":41
- * cdef int b = paths.shape[0]
- * cdef int i
- * for i in prange(b, nogil=True): # <<<<<<<<<<<<<<
- * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i])
- */
- {
- #ifdef WITH_THREAD
- PyThreadState *_save;
- Py_UNBLOCK_THREADS
- __Pyx_FastGIL_Remember();
- #endif
- /*try:*/ {
- __pyx_t_1 = __pyx_v_b;
- if ((1 == 0)) abort();
- {
- #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))))
- #undef likely
- #undef unlikely
- #define likely(x) (x)
- #define unlikely(x) (x)
- #endif
- __pyx_t_3 = (__pyx_t_1 - 0 + 1 - 1/abs(1)) / 1;
- if (__pyx_t_3 > 0)
- {
- #ifdef _OPENMP
- #pragma omp parallel private(__pyx_t_6, __pyx_t_7) firstprivate(__pyx_t_4, __pyx_t_5)
- #endif /* _OPENMP */
- {
- #ifdef _OPENMP
- #pragma omp for firstprivate(__pyx_v_i) lastprivate(__pyx_v_i)
- #endif /* _OPENMP */
- for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_3; __pyx_t_2++){
- {
- __pyx_v_i = (int)(0 + 1 * __pyx_t_2);
-
- /* "monotonic_align/core.pyx":42
- * cdef int i
- * for i in prange(b, nogil=True):
- * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) # <<<<<<<<<<<<<<
- */
-          __pyx_t_4.data = __pyx_v_paths.data;
-          __pyx_t_4.memview = __pyx_v_paths.memview;
-          __PYX_INC_MEMVIEW(&__pyx_t_4, 0);
-          {
-            Py_ssize_t __pyx_tmp_idx = __pyx_v_i;
-            Py_ssize_t __pyx_tmp_stride = __pyx_v_paths.strides[0];
-            __pyx_t_4.data += __pyx_tmp_idx * __pyx_tmp_stride;
-          }
-          __pyx_t_4.shape[0] = __pyx_v_paths.shape[1];
-          __pyx_t_4.strides[0] = __pyx_v_paths.strides[1];
-          __pyx_t_4.suboffsets[0] = -1;
-          __pyx_t_4.shape[1] = __pyx_v_paths.shape[2];
-          __pyx_t_4.strides[1] = __pyx_v_paths.strides[2];
-          __pyx_t_4.suboffsets[1] = -1;
-          __pyx_t_5.data = __pyx_v_values.data;
-          __pyx_t_5.memview = __pyx_v_values.memview;
-          __PYX_INC_MEMVIEW(&__pyx_t_5, 0);
-          {
-            Py_ssize_t __pyx_tmp_idx = __pyx_v_i;
-            Py_ssize_t __pyx_tmp_stride = __pyx_v_values.strides[0];
-            __pyx_t_5.data += __pyx_tmp_idx * __pyx_tmp_stride;
-          }
-          __pyx_t_5.shape[0] = __pyx_v_values.shape[1];
-          __pyx_t_5.strides[0] = __pyx_v_values.strides[1];
-          __pyx_t_5.suboffsets[0] = -1;
-          __pyx_t_5.shape[1] = __pyx_v_values.shape[2];
-          __pyx_t_5.strides[1] = __pyx_v_values.strides[2];
-          __pyx_t_5.suboffsets[1] = -1;
-          __pyx_t_6 = __pyx_v_i;
-          __pyx_t_7 = __pyx_v_i;
-          __pyx_f_15monotonic_align_4core_maximum_path_each(__pyx_t_4, __pyx_t_5, (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_ys.data) + __pyx_t_6)) ))), (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_xs.data) + __pyx_t_7)) ))), NULL);
-          __PYX_XDEC_MEMVIEW(&__pyx_t_4, 0);
-          __pyx_t_4.memview = NULL;
-          __pyx_t_4.data = NULL;
-          __PYX_XDEC_MEMVIEW(&__pyx_t_5, 0);
-          __pyx_t_5.memview = NULL;
-          __pyx_t_5.data = NULL;
- }
- }
- }
- }
- }
- #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))))
- #undef likely
- #undef unlikely
- #define likely(x) __builtin_expect(!!(x), 1)
- #define unlikely(x) __builtin_expect(!!(x), 0)
- #endif
- }
-
- /* "monotonic_align/core.pyx":41
- * cdef int b = paths.shape[0]
- * cdef int i
- * for i in prange(b, nogil=True): # <<<<<<<<<<<<<<
- * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i])
- */
- /*finally:*/ {
- /*normal exit:*/{
- #ifdef WITH_THREAD
- __Pyx_FastGIL_Forget();
- Py_BLOCK_THREADS
- #endif
- goto __pyx_L5;
- }
- __pyx_L5:;
- }
- }
-
- /* "monotonic_align/core.pyx":38
- * @cython.boundscheck(False)
- * @cython.wraparound(False)
- * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<<
- * cdef int b = paths.shape[0]
- * cdef int i
- */
-
- /* function exit code */
-}
-
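Similarly, here is the `cpdef` entry point compiled above, reconstructed from the echoed source comments (the `from cython.parallel import prange` import is assumed; it is not visible in this excerpt). The `prange` loop is what produces the `#pragma omp parallel` / `#pragma omp for` blocks in the generated C, so each batch element is processed in parallel with the GIL released.

```cython
@cython.boundscheck(False)
@cython.wraparound(False)
cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil:
  cdef int b = paths.shape[0]
  cdef int i
  # One independent DP per batch element; the slices paths[i] / values[i] are
  # taken without the GIL, which is why the generated C builds the memoryview
  # slices by hand inside the OpenMP loop.
  for i in prange(b, nogil=True):
    maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i])
```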
-/* Python wrapper */
-static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- __Pyx_memviewslice __pyx_v_paths = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_memviewslice __pyx_v_values = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_memviewslice __pyx_v_t_ys = { 0, 0, { 0 }, { 0 }, { 0 } };
- __Pyx_memviewslice __pyx_v_t_xs = { 0, 0, { 0 }, { 0 }, { 0 } };
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("maximum_path_c (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_paths,&__pyx_n_s_values,&__pyx_n_s_t_ys,&__pyx_n_s_t_xs,0};
- PyObject* values[4] = {0,0,0,0};
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
- CYTHON_FALLTHROUGH;
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_paths)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_values)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 1); __PYX_ERR(0, 38, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 2:
- if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_ys)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 2); __PYX_ERR(0, 38, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 3:
- if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_xs)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 3); __PYX_ERR(0, 38, __pyx_L3_error)
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "maximum_path_c") < 0)) __PYX_ERR(0, 38, __pyx_L3_error)
- }
- } else if (PyTuple_GET_SIZE(__pyx_args) != 4) {
- goto __pyx_L5_argtuple_error;
- } else {
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
- }
- __pyx_v_paths = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(values[0], PyBUF_WRITABLE); if (unlikely(!__pyx_v_paths.memview)) __PYX_ERR(0, 38, __pyx_L3_error)
- __pyx_v_values = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(values[1], PyBUF_WRITABLE); if (unlikely(!__pyx_v_values.memview)) __PYX_ERR(0, 38, __pyx_L3_error)
- __pyx_v_t_ys = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[2], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_ys.memview)) __PYX_ERR(0, 38, __pyx_L3_error)
- __pyx_v_t_xs = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[3], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_xs.memview)) __PYX_ERR(0, 38, __pyx_L3_error)
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 38, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return NULL;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_pf_15monotonic_align_4core_maximum_path_c(__pyx_self, __pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("maximum_path_c", 0);
- __Pyx_XDECREF(__pyx_r);
- if (unlikely(!__pyx_v_paths.memview)) { __Pyx_RaiseUnboundLocalError("paths"); __PYX_ERR(0, 38, __pyx_L1_error) }
- if (unlikely(!__pyx_v_values.memview)) { __Pyx_RaiseUnboundLocalError("values"); __PYX_ERR(0, 38, __pyx_L1_error) }
- if (unlikely(!__pyx_v_t_ys.memview)) { __Pyx_RaiseUnboundLocalError("t_ys"); __PYX_ERR(0, 38, __pyx_L1_error) }
- if (unlikely(!__pyx_v_t_xs.memview)) { __Pyx_RaiseUnboundLocalError("t_xs"); __PYX_ERR(0, 38, __pyx_L1_error) }
- __pyx_t_1 = __Pyx_void_to_None(__pyx_f_15monotonic_align_4core_maximum_path_c(__pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs, 0)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 38, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __PYX_XDEC_MEMVIEW(&__pyx_v_paths, 1);
- __PYX_XDEC_MEMVIEW(&__pyx_v_values, 1);
- __PYX_XDEC_MEMVIEW(&__pyx_v_t_ys, 1);
- __PYX_XDEC_MEMVIEW(&__pyx_v_t_xs, 1);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
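From Python, the wrapper above accepts writable C-contiguous buffers: C `int` (int32 on the usual platforms) for `paths`, `t_ys`, and `t_xs`, and float32 for `values`, as the `__Pyx_PyObject_to_MemoryviewSlice_*` conversions with `PyBUF_WRITABLE` show. A minimal calling sketch, assuming the extension is built and importable as `monotonic_align.core` and using NumPy arrays purely for illustration:

```cython
import numpy as np
from monotonic_align.core import maximum_path_c   # compiled extension module assumed

b, t_y, t_x = 2, 6, 4                                      # illustrative batch and grid sizes
values = np.random.randn(b, t_y, t_x).astype(np.float32)   # score grid; updated in place
paths  = np.zeros((b, t_y, t_x), dtype=np.int32)           # output alignment mask; filled in place
t_ys   = np.full(b, t_y, dtype=np.int32)                   # valid lengths along the y axis
t_xs   = np.full(b, t_x, dtype=np.int32)                   # valid lengths along the x axis

maximum_path_c(paths, values, t_ys, t_xs)                  # writes a 0/1 monotonic path per batch item
```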
-/* "View.MemoryView":123
- * cdef bint dtype_is_object
- *
- * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<<
- * mode="c", bint allocate_buffer=True):
- *
- */
-
-/* Python wrapper */
-static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_shape = 0;
- Py_ssize_t __pyx_v_itemsize;
- PyObject *__pyx_v_format = 0;
- PyObject *__pyx_v_mode = 0;
- int __pyx_v_allocate_buffer;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_shape,&__pyx_n_s_itemsize,&__pyx_n_s_format,&__pyx_n_s_mode,&__pyx_n_s_allocate_buffer,0};
- PyObject* values[5] = {0,0,0,0,0};
- values[3] = ((PyObject *)__pyx_n_s_c);
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
- CYTHON_FALLTHROUGH;
- case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
- CYTHON_FALLTHROUGH;
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_shape)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_itemsize)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 1); __PYX_ERR(1, 123, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 2:
- if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_format)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 2); __PYX_ERR(1, 123, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 3:
- if (kw_args > 0) {
- PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_mode);
- if (value) { values[3] = value; kw_args--; }
- }
- CYTHON_FALLTHROUGH;
- case 4:
- if (kw_args > 0) {
- PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_allocate_buffer);
- if (value) { values[4] = value; kw_args--; }
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 123, __pyx_L3_error)
- }
- } else {
- switch (PyTuple_GET_SIZE(__pyx_args)) {
- case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
- CYTHON_FALLTHROUGH;
- case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
- CYTHON_FALLTHROUGH;
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- break;
- default: goto __pyx_L5_argtuple_error;
- }
- }
- __pyx_v_shape = ((PyObject*)values[0]);
- __pyx_v_itemsize = __Pyx_PyIndex_AsSsize_t(values[1]); if (unlikely((__pyx_v_itemsize == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 123, __pyx_L3_error)
- __pyx_v_format = values[2];
- __pyx_v_mode = values[3];
- if (values[4]) {
- __pyx_v_allocate_buffer = __Pyx_PyObject_IsTrue(values[4]); if (unlikely((__pyx_v_allocate_buffer == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 124, __pyx_L3_error)
- } else {
-
- /* "View.MemoryView":124
- *
- * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None,
- * mode="c", bint allocate_buffer=True): # <<<<<<<<<<<<<<
- *
- * cdef int idx
- */
- __pyx_v_allocate_buffer = ((int)1);
- }
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 123, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return -1;
- __pyx_L4_argument_unpacking_done:;
- if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_shape), (&PyTuple_Type), 1, "shape", 1))) __PYX_ERR(1, 123, __pyx_L1_error)
- if (unlikely(((PyObject *)__pyx_v_format) == Py_None)) {
- PyErr_Format(PyExc_TypeError, "Argument '%.200s' must not be None", "format"); __PYX_ERR(1, 123, __pyx_L1_error)
- }
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(((struct __pyx_array_obj *)__pyx_v_self), __pyx_v_shape, __pyx_v_itemsize, __pyx_v_format, __pyx_v_mode, __pyx_v_allocate_buffer);
-
- /* "View.MemoryView":123
- * cdef bint dtype_is_object
- *
- * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<<
- * mode="c", bint allocate_buffer=True):
- *
- */
-
- /* function exit code */
- goto __pyx_L0;
- __pyx_L1_error:;
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer) {
- int __pyx_v_idx;
- Py_ssize_t __pyx_v_i;
- Py_ssize_t __pyx_v_dim;
- PyObject **__pyx_v_p;
- char __pyx_v_order;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- Py_ssize_t __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_t_4;
- PyObject *__pyx_t_5 = NULL;
- PyObject *__pyx_t_6 = NULL;
- char *__pyx_t_7;
- int __pyx_t_8;
- Py_ssize_t __pyx_t_9;
- PyObject *__pyx_t_10 = NULL;
- Py_ssize_t __pyx_t_11;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__cinit__", 0);
- __Pyx_INCREF(__pyx_v_format);
-
- /* "View.MemoryView":130
- * cdef PyObject **p
- *
- * self.ndim = len(shape) # <<<<<<<<<<<<<<
- * self.itemsize = itemsize
- *
- */
- if (unlikely(__pyx_v_shape == Py_None)) {
- PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()");
- __PYX_ERR(1, 130, __pyx_L1_error)
- }
- __pyx_t_1 = PyTuple_GET_SIZE(__pyx_v_shape); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(1, 130, __pyx_L1_error)
- __pyx_v_self->ndim = ((int)__pyx_t_1);
-
- /* "View.MemoryView":131
- *
- * self.ndim = len(shape)
- * self.itemsize = itemsize # <<<<<<<<<<<<<<
- *
- * if not self.ndim:
- */
- __pyx_v_self->itemsize = __pyx_v_itemsize;
-
- /* "View.MemoryView":133
- * self.itemsize = itemsize
- *
- * if not self.ndim: # <<<<<<<<<<<<<<
- * raise ValueError("Empty shape tuple for cython.array")
- *
- */
- __pyx_t_2 = ((!(__pyx_v_self->ndim != 0)) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":134
- *
- * if not self.ndim:
- * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<<
- *
- * if itemsize <= 0:
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 134, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 134, __pyx_L1_error)
-
- /* "View.MemoryView":133
- * self.itemsize = itemsize
- *
- * if not self.ndim: # <<<<<<<<<<<<<<
- * raise ValueError("Empty shape tuple for cython.array")
- *
- */
- }
-
- /* "View.MemoryView":136
- * raise ValueError("Empty shape tuple for cython.array")
- *
- * if itemsize <= 0: # <<<<<<<<<<<<<<
- * raise ValueError("itemsize <= 0 for cython.array")
- *
- */
- __pyx_t_2 = ((__pyx_v_itemsize <= 0) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":137
- *
- * if itemsize <= 0:
- * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<<
- *
- * if not isinstance(format, bytes):
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 137, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 137, __pyx_L1_error)
-
- /* "View.MemoryView":136
- * raise ValueError("Empty shape tuple for cython.array")
- *
- * if itemsize <= 0: # <<<<<<<<<<<<<<
- * raise ValueError("itemsize <= 0 for cython.array")
- *
- */
- }
-
- /* "View.MemoryView":139
- * raise ValueError("itemsize <= 0 for cython.array")
- *
- * if not isinstance(format, bytes): # <<<<<<<<<<<<<<
- * format = format.encode('ASCII')
- * self._format = format # keep a reference to the byte string
- */
- __pyx_t_2 = PyBytes_Check(__pyx_v_format);
- __pyx_t_4 = ((!(__pyx_t_2 != 0)) != 0);
- if (__pyx_t_4) {
-
- /* "View.MemoryView":140
- *
- * if not isinstance(format, bytes):
- * format = format.encode('ASCII') # <<<<<<<<<<<<<<
- * self._format = format # keep a reference to the byte string
- * self.format = self._format
- */
- __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_format, __pyx_n_s_encode); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 140, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_6 = NULL;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) {
- __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5);
- if (likely(__pyx_t_6)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);
- __Pyx_INCREF(__pyx_t_6);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_5, function);
- }
- }
- __pyx_t_3 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_6, __pyx_n_s_ASCII) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_n_s_ASCII);
- __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
- if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 140, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_DECREF_SET(__pyx_v_format, __pyx_t_3);
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":139
- * raise ValueError("itemsize <= 0 for cython.array")
- *
- * if not isinstance(format, bytes): # <<<<<<<<<<<<<<
- * format = format.encode('ASCII')
- * self._format = format # keep a reference to the byte string
- */
- }
-
- /* "View.MemoryView":141
- * if not isinstance(format, bytes):
- * format = format.encode('ASCII')
- * self._format = format # keep a reference to the byte string # <<<<<<<<<<<<<<
- * self.format = self._format
- *
- */
- if (!(likely(PyBytes_CheckExact(__pyx_v_format))||((__pyx_v_format) == Py_None)||((void)PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_v_format)->tp_name), 0))) __PYX_ERR(1, 141, __pyx_L1_error)
- __pyx_t_3 = __pyx_v_format;
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_GIVEREF(__pyx_t_3);
- __Pyx_GOTREF(__pyx_v_self->_format);
- __Pyx_DECREF(__pyx_v_self->_format);
- __pyx_v_self->_format = ((PyObject*)__pyx_t_3);
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":142
- * format = format.encode('ASCII')
- * self._format = format # keep a reference to the byte string
- * self.format = self._format # <<<<<<<<<<<<<<
- *
- *
- */
- if (unlikely(__pyx_v_self->_format == Py_None)) {
- PyErr_SetString(PyExc_TypeError, "expected bytes, NoneType found");
- __PYX_ERR(1, 142, __pyx_L1_error)
- }
- __pyx_t_7 = __Pyx_PyBytes_AsWritableString(__pyx_v_self->_format); if (unlikely((!__pyx_t_7) && PyErr_Occurred())) __PYX_ERR(1, 142, __pyx_L1_error)
- __pyx_v_self->format = __pyx_t_7;
-
- /* "View.MemoryView":145
- *
- *
- * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) # <<<<<<<<<<<<<<
- * self._strides = self._shape + self.ndim
- *
- */
- __pyx_v_self->_shape = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * __pyx_v_self->ndim) * 2)));
-
- /* "View.MemoryView":146
- *
- * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2)
- * self._strides = self._shape + self.ndim # <<<<<<<<<<<<<<
- *
- * if not self._shape:
- */
- __pyx_v_self->_strides = (__pyx_v_self->_shape + __pyx_v_self->ndim);
-
- /* "View.MemoryView":148
- * self._strides = self._shape + self.ndim
- *
- * if not self._shape: # <<<<<<<<<<<<<<
- * raise MemoryError("unable to allocate shape and strides.")
- *
- */
- __pyx_t_4 = ((!(__pyx_v_self->_shape != 0)) != 0);
- if (unlikely(__pyx_t_4)) {
-
- /* "View.MemoryView":149
- *
- * if not self._shape:
- * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 149, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 149, __pyx_L1_error)
-
- /* "View.MemoryView":148
- * self._strides = self._shape + self.ndim
- *
- * if not self._shape: # <<<<<<<<<<<<<<
- * raise MemoryError("unable to allocate shape and strides.")
- *
- */
- }
-
- /* "View.MemoryView":152
- *
- *
- * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<<
- * if dim <= 0:
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- */
- __pyx_t_8 = 0;
- __pyx_t_3 = __pyx_v_shape; __Pyx_INCREF(__pyx_t_3); __pyx_t_1 = 0;
- for (;;) {
- if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_3)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(1, 152, __pyx_L1_error)
- #else
- __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 152, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- #endif
- __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_5); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 152, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __pyx_v_dim = __pyx_t_9;
- __pyx_v_idx = __pyx_t_8;
- __pyx_t_8 = (__pyx_t_8 + 1);
-
- /* "View.MemoryView":153
- *
- * for idx, dim in enumerate(shape):
- * if dim <= 0: # <<<<<<<<<<<<<<
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- * self._shape[idx] = dim
- */
- __pyx_t_4 = ((__pyx_v_dim <= 0) != 0);
- if (unlikely(__pyx_t_4)) {
-
- /* "View.MemoryView":154
- * for idx, dim in enumerate(shape):
- * if dim <= 0:
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) # <<<<<<<<<<<<<<
- * self._shape[idx] = dim
- *
- */
- __pyx_t_5 = __Pyx_PyInt_From_int(__pyx_v_idx); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 154, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 154, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_10 = PyTuple_New(2); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 154, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_10);
- __Pyx_GIVEREF(__pyx_t_5);
- PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_6);
- PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_t_6);
- __pyx_t_5 = 0;
- __pyx_t_6 = 0;
- __pyx_t_6 = __Pyx_PyString_Format(__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_t_10); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 154, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_6); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 154, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_10);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_Raise(__pyx_t_10, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __PYX_ERR(1, 154, __pyx_L1_error)
-
- /* "View.MemoryView":153
- *
- * for idx, dim in enumerate(shape):
- * if dim <= 0: # <<<<<<<<<<<<<<
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- * self._shape[idx] = dim
- */
- }
-
- /* "View.MemoryView":155
- * if dim <= 0:
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- * self._shape[idx] = dim # <<<<<<<<<<<<<<
- *
- * cdef char order
- */
- (__pyx_v_self->_shape[__pyx_v_idx]) = __pyx_v_dim;
-
- /* "View.MemoryView":152
- *
- *
- * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<<
- * if dim <= 0:
- * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim))
- */
- }
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":158
- *
- * cdef char order
- * if mode == 'fortran': # <<<<<<<<<<<<<<
- * order = b'F'
- * self.mode = u'fortran'
- */
- __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_fortran, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 158, __pyx_L1_error)
- if (__pyx_t_4) {
-
- /* "View.MemoryView":159
- * cdef char order
- * if mode == 'fortran':
- * order = b'F' # <<<<<<<<<<<<<<
- * self.mode = u'fortran'
- * elif mode == 'c':
- */
- __pyx_v_order = 'F';
-
- /* "View.MemoryView":160
- * if mode == 'fortran':
- * order = b'F'
- * self.mode = u'fortran' # <<<<<<<<<<<<<<
- * elif mode == 'c':
- * order = b'C'
- */
- __Pyx_INCREF(__pyx_n_u_fortran);
- __Pyx_GIVEREF(__pyx_n_u_fortran);
- __Pyx_GOTREF(__pyx_v_self->mode);
- __Pyx_DECREF(__pyx_v_self->mode);
- __pyx_v_self->mode = __pyx_n_u_fortran;
-
- /* "View.MemoryView":158
- *
- * cdef char order
- * if mode == 'fortran': # <<<<<<<<<<<<<<
- * order = b'F'
- * self.mode = u'fortran'
- */
- goto __pyx_L10;
- }
-
- /* "View.MemoryView":161
- * order = b'F'
- * self.mode = u'fortran'
- * elif mode == 'c': # <<<<<<<<<<<<<<
- * order = b'C'
- * self.mode = u'c'
- */
- __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_c, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 161, __pyx_L1_error)
- if (likely(__pyx_t_4)) {
-
- /* "View.MemoryView":162
- * self.mode = u'fortran'
- * elif mode == 'c':
- * order = b'C' # <<<<<<<<<<<<<<
- * self.mode = u'c'
- * else:
- */
- __pyx_v_order = 'C';
-
- /* "View.MemoryView":163
- * elif mode == 'c':
- * order = b'C'
- * self.mode = u'c' # <<<<<<<<<<<<<<
- * else:
- * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode)
- */
- __Pyx_INCREF(__pyx_n_u_c);
- __Pyx_GIVEREF(__pyx_n_u_c);
- __Pyx_GOTREF(__pyx_v_self->mode);
- __Pyx_DECREF(__pyx_v_self->mode);
- __pyx_v_self->mode = __pyx_n_u_c;
-
- /* "View.MemoryView":161
- * order = b'F'
- * self.mode = u'fortran'
- * elif mode == 'c': # <<<<<<<<<<<<<<
- * order = b'C'
- * self.mode = u'c'
- */
- goto __pyx_L10;
- }
-
- /* "View.MemoryView":165
- * self.mode = u'c'
- * else:
- * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) # <<<<<<<<<<<<<<
- *
- * self.len = fill_contig_strides_array(self._shape, self._strides,
- */
- /*else*/ {
- __pyx_t_3 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_v_mode); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 165, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 165, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_10);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_Raise(__pyx_t_10, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __PYX_ERR(1, 165, __pyx_L1_error)
- }
- __pyx_L10:;
-
- /* "View.MemoryView":167
- * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode)
- *
- * self.len = fill_contig_strides_array(self._shape, self._strides, # <<<<<<<<<<<<<<
- * itemsize, self.ndim, order)
- *
- */
- __pyx_v_self->len = __pyx_fill_contig_strides_array(__pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_itemsize, __pyx_v_self->ndim, __pyx_v_order);
-
- /* "View.MemoryView":170
- * itemsize, self.ndim, order)
- *
- * self.free_data = allocate_buffer # <<<<<<<<<<<<<<
- * self.dtype_is_object = format == b'O'
- * if allocate_buffer:
- */
- __pyx_v_self->free_data = __pyx_v_allocate_buffer;
-
- /* "View.MemoryView":171
- *
- * self.free_data = allocate_buffer
- * self.dtype_is_object = format == b'O' # <<<<<<<<<<<<<<
- * if allocate_buffer:
- *
- */
- __pyx_t_10 = PyObject_RichCompare(__pyx_v_format, __pyx_n_b_O, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 171, __pyx_L1_error)
- __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 171, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __pyx_v_self->dtype_is_object = __pyx_t_4;
-
- /* "View.MemoryView":172
- * self.free_data = allocate_buffer
- * self.dtype_is_object = format == b'O'
- * if allocate_buffer: # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_4 = (__pyx_v_allocate_buffer != 0);
- if (__pyx_t_4) {
-
- /* "View.MemoryView":175
- *
- *
- * self.data = malloc(self.len) # <<<<<<<<<<<<<<
- * if not self.data:
- * raise MemoryError("unable to allocate array data.")
- */
- __pyx_v_self->data = ((char *)malloc(__pyx_v_self->len));
-
- /* "View.MemoryView":176
- *
- * self.data = malloc(self.len)
- * if not self.data: # <<<<<<<<<<<<<<
- * raise MemoryError("unable to allocate array data.")
- *
- */
- __pyx_t_4 = ((!(__pyx_v_self->data != 0)) != 0);
- if (unlikely(__pyx_t_4)) {
-
- /* "View.MemoryView":177
- * self.data = malloc(self.len)
- * if not self.data:
- * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<<
- *
- * if self.dtype_is_object:
- */
- __pyx_t_10 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 177, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_10);
- __Pyx_Raise(__pyx_t_10, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
- __PYX_ERR(1, 177, __pyx_L1_error)
-
- /* "View.MemoryView":176
- *
- * self.data = malloc(self.len)
- * if not self.data: # <<<<<<<<<<<<<<
- * raise MemoryError("unable to allocate array data.")
- *
- */
- }
-
- /* "View.MemoryView":179
- * raise MemoryError("unable to allocate array data.")
- *
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * p = self.data
- * for i in range(self.len / itemsize):
- */
- __pyx_t_4 = (__pyx_v_self->dtype_is_object != 0);
- if (__pyx_t_4) {
-
- /* "View.MemoryView":180
- *
- * if self.dtype_is_object:
- * p = self.data # <<<<<<<<<<<<<<
- * for i in range(self.len / itemsize):
- * p[i] = Py_None
- */
- __pyx_v_p = ((PyObject **)__pyx_v_self->data);
-
- /* "View.MemoryView":181
- * if self.dtype_is_object:
- * p = self.data
- * for i in range(self.len / itemsize): # <<<<<<<<<<<<<<
- * p[i] = Py_None
- * Py_INCREF(Py_None)
- */
- if (unlikely(__pyx_v_itemsize == 0)) {
- PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero");
- __PYX_ERR(1, 181, __pyx_L1_error)
- }
- else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_self->len))) {
- PyErr_SetString(PyExc_OverflowError, "value too large to perform division");
- __PYX_ERR(1, 181, __pyx_L1_error)
- }
- __pyx_t_1 = __Pyx_div_Py_ssize_t(__pyx_v_self->len, __pyx_v_itemsize);
- __pyx_t_9 = __pyx_t_1;
- for (__pyx_t_11 = 0; __pyx_t_11 < __pyx_t_9; __pyx_t_11+=1) {
- __pyx_v_i = __pyx_t_11;
-
- /* "View.MemoryView":182
- * p = self.data
- * for i in range(self.len / itemsize):
- * p[i] = Py_None # <<<<<<<<<<<<<<
- * Py_INCREF(Py_None)
- *
- */
- (__pyx_v_p[__pyx_v_i]) = Py_None;
-
- /* "View.MemoryView":183
- * for i in range(self.len / itemsize):
- * p[i] = Py_None
- * Py_INCREF(Py_None) # <<<<<<<<<<<<<<
- *
- * @cname('getbuffer')
- */
- Py_INCREF(Py_None);
- }
-
- /* "View.MemoryView":179
- * raise MemoryError("unable to allocate array data.")
- *
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * p = self.data
- * for i in range(self.len / itemsize):
- */
- }
-
- /* "View.MemoryView":172
- * self.free_data = allocate_buffer
- * self.dtype_is_object = format == b'O'
- * if allocate_buffer: # <<<<<<<<<<<<<<
- *
- *
- */
- }
-
- /* "View.MemoryView":123
- * cdef bint dtype_is_object
- *
- * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<<
- * mode="c", bint allocate_buffer=True):
- *
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_XDECREF(__pyx_t_10);
- __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_format);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
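The `__cinit__` above is the constructor of Cython's internal `cython.view.array` class: the echoed source validates `shape`, `itemsize`, and `format`, allocates the combined `_shape`/`_strides` block, and, when `allocate_buffer` is true, mallocs the data buffer (filling it with `Py_None` for object dtypes). A minimal sketch of how this class is normally instantiated from user Cython code; the call site itself is not part of this excerpt, only the public `cython.view` API is assumed:

```cython
from cython cimport view

def make_buffer():
    cdef int[:, ::1] mv
    # A 3 x 4, C-contiguous block of C ints; format "i" is the struct-style
    # code that __cinit__ keeps in self.format.
    arr = view.array(shape=(3, 4), itemsize=sizeof(int), format="i",
                     mode="c", allocate_buffer=True)
    mv = arr          # buffer acquisition goes through the __getbuffer__ defined below
    mv[:, :] = 0
    return arr
```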
-/* "View.MemoryView":186
- *
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<<
- * cdef int bufmode = -1
- * if self.mode == u"c":
- */
-
-/* Python wrapper */
-static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
-static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(((struct __pyx_array_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
- int __pyx_v_bufmode;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- char *__pyx_t_4;
- Py_ssize_t __pyx_t_5;
- int __pyx_t_6;
- Py_ssize_t *__pyx_t_7;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- if (__pyx_v_info == NULL) {
- PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete");
- return -1;
- }
- __Pyx_RefNannySetupContext("__getbuffer__", 0);
- __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None);
- __Pyx_GIVEREF(__pyx_v_info->obj);
-
- /* "View.MemoryView":187
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * cdef int bufmode = -1 # <<<<<<<<<<<<<<
- * if self.mode == u"c":
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- */
- __pyx_v_bufmode = -1;
-
- /* "View.MemoryView":188
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * cdef int bufmode = -1
- * if self.mode == u"c": # <<<<<<<<<<<<<<
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran":
- */
- __pyx_t_1 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_c, Py_EQ)); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 188, __pyx_L1_error)
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":189
- * cdef int bufmode = -1
- * if self.mode == u"c":
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<<
- * elif self.mode == u"fortran":
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- */
- __pyx_v_bufmode = (PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS);
-
- /* "View.MemoryView":188
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * cdef int bufmode = -1
- * if self.mode == u"c": # <<<<<<<<<<<<<<
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran":
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":190
- * if self.mode == u"c":
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran": # <<<<<<<<<<<<<<
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode):
- */
- __pyx_t_2 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_fortran, Py_EQ)); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 190, __pyx_L1_error)
- __pyx_t_1 = (__pyx_t_2 != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":191
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran":
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<<
- * if not (flags & bufmode):
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- */
- __pyx_v_bufmode = (PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS);
-
- /* "View.MemoryView":190
- * if self.mode == u"c":
- * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * elif self.mode == u"fortran": # <<<<<<<<<<<<<<
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode):
- */
- }
- __pyx_L3:;
-
- /* "View.MemoryView":192
- * elif self.mode == u"fortran":
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode): # <<<<<<<<<<<<<<
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- * info.buf = self.data
- */
- __pyx_t_1 = ((!((__pyx_v_flags & __pyx_v_bufmode) != 0)) != 0);
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":193
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode):
- * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<<
- * info.buf = self.data
- * info.len = self.len
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 193, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 193, __pyx_L1_error)
-
- /* "View.MemoryView":192
- * elif self.mode == u"fortran":
- * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS
- * if not (flags & bufmode): # <<<<<<<<<<<<<<
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- * info.buf = self.data
- */
- }
-
- /* "View.MemoryView":194
- * if not (flags & bufmode):
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- * info.buf = self.data # <<<<<<<<<<<<<<
- * info.len = self.len
- * info.ndim = self.ndim
- */
- __pyx_t_4 = __pyx_v_self->data;
- __pyx_v_info->buf = __pyx_t_4;
-
- /* "View.MemoryView":195
- * raise ValueError("Can only create a buffer that is contiguous in memory.")
- * info.buf = self.data
- * info.len = self.len # <<<<<<<<<<<<<<
- * info.ndim = self.ndim
- * info.shape = self._shape
- */
- __pyx_t_5 = __pyx_v_self->len;
- __pyx_v_info->len = __pyx_t_5;
-
- /* "View.MemoryView":196
- * info.buf = self.data
- * info.len = self.len
- * info.ndim = self.ndim # <<<<<<<<<<<<<<
- * info.shape = self._shape
- * info.strides = self._strides
- */
- __pyx_t_6 = __pyx_v_self->ndim;
- __pyx_v_info->ndim = __pyx_t_6;
-
- /* "View.MemoryView":197
- * info.len = self.len
- * info.ndim = self.ndim
- * info.shape = self._shape # <<<<<<<<<<<<<<
- * info.strides = self._strides
- * info.suboffsets = NULL
- */
- __pyx_t_7 = __pyx_v_self->_shape;
- __pyx_v_info->shape = __pyx_t_7;
-
- /* "View.MemoryView":198
- * info.ndim = self.ndim
- * info.shape = self._shape
- * info.strides = self._strides # <<<<<<<<<<<<<<
- * info.suboffsets = NULL
- * info.itemsize = self.itemsize
- */
- __pyx_t_7 = __pyx_v_self->_strides;
- __pyx_v_info->strides = __pyx_t_7;
-
- /* "View.MemoryView":199
- * info.shape = self._shape
- * info.strides = self._strides
- * info.suboffsets = NULL # <<<<<<<<<<<<<<
- * info.itemsize = self.itemsize
- * info.readonly = 0
- */
- __pyx_v_info->suboffsets = NULL;
-
- /* "View.MemoryView":200
- * info.strides = self._strides
- * info.suboffsets = NULL
- * info.itemsize = self.itemsize # <<<<<<<<<<<<<<
- * info.readonly = 0
- *
- */
- __pyx_t_5 = __pyx_v_self->itemsize;
- __pyx_v_info->itemsize = __pyx_t_5;
-
- /* "View.MemoryView":201
- * info.suboffsets = NULL
- * info.itemsize = self.itemsize
- * info.readonly = 0 # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_FORMAT:
- */
- __pyx_v_info->readonly = 0;
-
- /* "View.MemoryView":203
- * info.readonly = 0
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * info.format = self.format
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":204
- *
- * if flags & PyBUF_FORMAT:
- * info.format = self.format # <<<<<<<<<<<<<<
- * else:
- * info.format = NULL
- */
- __pyx_t_4 = __pyx_v_self->format;
- __pyx_v_info->format = __pyx_t_4;
-
- /* "View.MemoryView":203
- * info.readonly = 0
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * info.format = self.format
- * else:
- */
- goto __pyx_L5;
- }
-
- /* "View.MemoryView":206
- * info.format = self.format
- * else:
- * info.format = NULL # <<<<<<<<<<<<<<
- *
- * info.obj = self
- */
- /*else*/ {
- __pyx_v_info->format = NULL;
- }
- __pyx_L5:;
-
- /* "View.MemoryView":208
- * info.format = NULL
- *
- * info.obj = self # <<<<<<<<<<<<<<
- *
- * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)")
- */
- __Pyx_INCREF(((PyObject *)__pyx_v_self));
- __Pyx_GIVEREF(((PyObject *)__pyx_v_self));
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj);
- __pyx_v_info->obj = ((PyObject *)__pyx_v_self);
-
- /* "View.MemoryView":186
- *
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<<
- * cdef int bufmode = -1
- * if self.mode == u"c":
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.array.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- if (__pyx_v_info->obj != NULL) {
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;
- }
- goto __pyx_L2;
- __pyx_L0:;
- if (__pyx_v_info->obj == Py_None) {
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;
- }
- __pyx_L2:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
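Because `__getbuffer__` implements the PEP 3118 buffer protocol for that array class, any buffer consumer can wrap the data without copying. A small illustrative follow-up to the hypothetical `make_buffer()` sketch above:

```cython
import numpy as np

arr = make_buffer()            # cython.view.array from the sketch above (hypothetical helper)
mv = memoryview(arr)           # plain Python memoryview, served by __getbuffer__
np_view = np.asarray(arr)      # zero-copy NumPy view over the same buffer
print(mv.shape, mv.strides, np_view.dtype)   # (3, 4), C-order strides, int32 on typical platforms
```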
-/* "View.MemoryView":212
- * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)")
- *
- * def __dealloc__(array self): # <<<<<<<<<<<<<<
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data)
- */
-
-/* Python wrapper */
-static void __pyx_array___dealloc__(PyObject *__pyx_v_self); /*proto*/
-static void __pyx_array___dealloc__(PyObject *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0);
- __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(((struct __pyx_array_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- __Pyx_RefNannySetupContext("__dealloc__", 0);
-
- /* "View.MemoryView":213
- *
- * def __dealloc__(array self):
- * if self.callback_free_data != NULL: # <<<<<<<<<<<<<<
- * self.callback_free_data(self.data)
- * elif self.free_data:
- */
- __pyx_t_1 = ((__pyx_v_self->callback_free_data != NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":214
- * def __dealloc__(array self):
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data) # <<<<<<<<<<<<<<
- * elif self.free_data:
- * if self.dtype_is_object:
- */
- __pyx_v_self->callback_free_data(__pyx_v_self->data);
-
- /* "View.MemoryView":213
- *
- * def __dealloc__(array self):
- * if self.callback_free_data != NULL: # <<<<<<<<<<<<<<
- * self.callback_free_data(self.data)
- * elif self.free_data:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":215
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data)
- * elif self.free_data: # <<<<<<<<<<<<<<
- * if self.dtype_is_object:
- * refcount_objects_in_slice(self.data, self._shape,
- */
- __pyx_t_1 = (__pyx_v_self->free_data != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":216
- * self.callback_free_data(self.data)
- * elif self.free_data:
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * refcount_objects_in_slice(self.data, self._shape,
- * self._strides, self.ndim, False)
- */
- __pyx_t_1 = (__pyx_v_self->dtype_is_object != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":217
- * elif self.free_data:
- * if self.dtype_is_object:
- * refcount_objects_in_slice(self.data, self._shape, # <<<<<<<<<<<<<<
- * self._strides, self.ndim, False)
- * free(self.data)
- */
- __pyx_memoryview_refcount_objects_in_slice(__pyx_v_self->data, __pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_self->ndim, 0);
-
- /* "View.MemoryView":216
- * self.callback_free_data(self.data)
- * elif self.free_data:
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * refcount_objects_in_slice(self.data, self._shape,
- * self._strides, self.ndim, False)
- */
- }
-
- /* "View.MemoryView":219
- * refcount_objects_in_slice(self.data, self._shape,
- * self._strides, self.ndim, False)
- * free(self.data) # <<<<<<<<<<<<<<
- * PyObject_Free(self._shape)
- *
- */
- free(__pyx_v_self->data);
-
- /* "View.MemoryView":215
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data)
- * elif self.free_data: # <<<<<<<<<<<<<<
- * if self.dtype_is_object:
- * refcount_objects_in_slice(self.data, self._shape,
- */
- }
- __pyx_L3:;
-
- /* "View.MemoryView":220
- * self._strides, self.ndim, False)
- * free(self.data)
- * PyObject_Free(self._shape) # <<<<<<<<<<<<<<
- *
- * @property
- */
- PyObject_Free(__pyx_v_self->_shape);
-
- /* "View.MemoryView":212
- * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)")
- *
- * def __dealloc__(array self): # <<<<<<<<<<<<<<
- * if self.callback_free_data != NULL:
- * self.callback_free_data(self.data)
- */
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-/* "View.MemoryView":223
- *
- * @property
- * def memview(self): # <<<<<<<<<<<<<<
- * return self.get_memview()
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_5array_7memview___get__(((struct __pyx_array_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":224
- * @property
- * def memview(self):
- * return self.get_memview() # <<<<<<<<<<<<<<
- *
- * @cname('get_memview')
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = ((struct __pyx_vtabstruct_array *)__pyx_v_self->__pyx_vtab)->get_memview(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 224, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":223
- *
- * @property
- * def memview(self): # <<<<<<<<<<<<<<
- * return self.get_memview()
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.array.memview.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":227
- *
- * @cname('get_memview')
- * cdef get_memview(self): # <<<<<<<<<<<<<<
- * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE
- * return memoryview(self, flags, self.dtype_is_object)
- */
-
-static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self) {
- int __pyx_v_flags;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("get_memview", 0);
-
- /* "View.MemoryView":228
- * @cname('get_memview')
- * cdef get_memview(self):
- * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE # <<<<<<<<<<<<<<
- * return memoryview(self, flags, self.dtype_is_object)
- *
- */
- __pyx_v_flags = ((PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT) | PyBUF_WRITABLE);
-
- /* "View.MemoryView":229
- * cdef get_memview(self):
- * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE
- * return memoryview(self, flags, self.dtype_is_object) # <<<<<<<<<<<<<<
- *
- * def __len__(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 229, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 229, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 229, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_INCREF(((PyObject *)__pyx_v_self));
- __Pyx_GIVEREF(((PyObject *)__pyx_v_self));
- PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_self));
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2);
- __pyx_t_1 = 0;
- __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 229, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":227
- *
- * @cname('get_memview')
- * cdef get_memview(self): # <<<<<<<<<<<<<<
- * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE
- * return memoryview(self, flags, self.dtype_is_object)
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.array.get_memview", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
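
`get_memview` above requests a writable, format-carrying, any-contiguous view of the array (`PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT | PyBUF_WRITABLE`) and wraps it in the module's own `memoryview` type. A minimal sketch of what that flag combination asks an exporter for, using only the public CPython buffer API; the `zero_fill` helper is hypothetical:

```c
#include <Python.h>
#include <string.h>

/* Hypothetical helper: zero-fill any object that exports a writable,
 * contiguous buffer, requested with the same flags as get_memview(). */
static int zero_fill(PyObject *obj)
{
    Py_buffer view;
    const int flags = PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT | PyBUF_WRITABLE;

    if (PyObject_GetBuffer(obj, &view, flags) < 0)
        return -1;                            /* exporter refused the request */

    memset(view.buf, 0, (size_t)view.len);    /* safe: buffer is contiguous */

    PyBuffer_Release(&view);                  /* hand the export back */
    return 0;
}
```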
-
-/* "View.MemoryView":231
- * return memoryview(self, flags, self.dtype_is_object)
- *
- * def __len__(self): # <<<<<<<<<<<<<<
- * return self._shape[0]
- *
- */
-
-/* Python wrapper */
-static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self); /*proto*/
-static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self) {
- Py_ssize_t __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__len__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(((struct __pyx_array_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self) {
- Py_ssize_t __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__len__", 0);
-
- /* "View.MemoryView":232
- *
- * def __len__(self):
- * return self._shape[0] # <<<<<<<<<<<<<<
- *
- * def __getattr__(self, attr):
- */
- __pyx_r = (__pyx_v_self->_shape[0]);
- goto __pyx_L0;
-
- /* "View.MemoryView":231
- * return memoryview(self, flags, self.dtype_is_object)
- *
- * def __len__(self): # <<<<<<<<<<<<<<
- * return self._shape[0]
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":234
- * return self._shape[0]
- *
- * def __getattr__(self, attr): # <<<<<<<<<<<<<<
- * return getattr(self.memview, attr)
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr); /*proto*/
-static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getattr__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_attr));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__getattr__", 0);
-
- /* "View.MemoryView":235
- *
- * def __getattr__(self, attr):
- * return getattr(self.memview, attr) # <<<<<<<<<<<<<<
- *
- * def __getitem__(self, item):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 235, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_GetAttr(__pyx_t_1, __pyx_v_attr); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 235, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":234
- * return self._shape[0]
- *
- * def __getattr__(self, attr): # <<<<<<<<<<<<<<
- * return getattr(self.memview, attr)
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.array.__getattr__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":237
- * return getattr(self.memview, attr)
- *
- * def __getitem__(self, item): # <<<<<<<<<<<<<<
- * return self.memview[item]
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item); /*proto*/
-static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__getitem__", 0);
-
- /* "View.MemoryView":238
- *
- * def __getitem__(self, item):
- * return self.memview[item] # <<<<<<<<<<<<<<
- *
- * def __setitem__(self, item, value):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 238, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_item); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 238, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":237
- * return getattr(self.memview, attr)
- *
- * def __getitem__(self, item): # <<<<<<<<<<<<<<
- * return self.memview[item]
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.array.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":240
- * return self.memview[item]
- *
- * def __setitem__(self, item, value): # <<<<<<<<<<<<<<
- * self.memview[item] = value
- *
- */
-
-/* Python wrapper */
-static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /*proto*/
-static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0);
- __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item), ((PyObject *)__pyx_v_value));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setitem__", 0);
-
- /* "View.MemoryView":241
- *
- * def __setitem__(self, item, value):
- * self.memview[item] = value # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 241, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (unlikely(PyObject_SetItem(__pyx_t_1, __pyx_v_item, __pyx_v_value) < 0)) __PYX_ERR(1, 241, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-
- /* "View.MemoryView":240
- * return self.memview[item]
- *
- * def __setitem__(self, item, value): # <<<<<<<<<<<<<<
- * self.memview[item] = value
- *
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.array.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
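
`__getattr__`, `__getitem__` and `__setitem__` above all delegate to the `memview` property rather than re-implementing indexing on the array itself. A hedged sketch of that forwarding pattern in plain C-API terms; the `delegated_*` helpers are illustrative, not part of the generated module:

```c
#include <Python.h>

/* Forward item access to the object's "memview" attribute,
 * mirroring __getitem__/__setitem__ above. */
static PyObject *delegated_getitem(PyObject *self, PyObject *item)
{
    PyObject *mv = PyObject_GetAttrString(self, "memview");
    if (mv == NULL)
        return NULL;

    PyObject *result = PyObject_GetItem(mv, item);   /* mv[item] */
    Py_DECREF(mv);
    return result;
}

static int delegated_setitem(PyObject *self, PyObject *item, PyObject *value)
{
    PyObject *mv = PyObject_GetAttrString(self, "memview");
    if (mv == NULL)
        return -1;

    int rc = PyObject_SetItem(mv, item, value);      /* mv[item] = value */
    Py_DECREF(mv);
    return rc;
}
```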
-
-/* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_array___reduce_cython__(((struct __pyx_array_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__reduce_cython__", 0);
-
- /* "(tree fragment)":2
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 2, __pyx_L1_error)
-
- /* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.array.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
-static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_array_2__setstate_cython__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setstate_cython__", 0);
-
- /* "(tree fragment)":4
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 4, __pyx_L1_error)
-
- /* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.array.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":245
- *
- * @cname("__pyx_array_new")
- * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<<
- * char *mode, char *buf):
- * cdef array result
- */
-
-static struct __pyx_array_obj *__pyx_array_new(PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, char *__pyx_v_format, char *__pyx_v_mode, char *__pyx_v_buf) {
- struct __pyx_array_obj *__pyx_v_result = 0;
- struct __pyx_array_obj *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("array_cwrapper", 0);
-
- /* "View.MemoryView":249
- * cdef array result
- *
- * if buf == NULL: # <<<<<<<<<<<<<<
- * result = array(shape, itemsize, format, mode.decode('ASCII'))
- * else:
- */
- __pyx_t_1 = ((__pyx_v_buf == NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":250
- *
- * if buf == NULL:
- * result = array(shape, itemsize, format, mode.decode('ASCII')) # <<<<<<<<<<<<<<
- * else:
- * result = array(shape, itemsize, format, mode.decode('ASCII'),
- */
- __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 250, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 250, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 250, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 250, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_INCREF(__pyx_v_shape);
- __Pyx_GIVEREF(__pyx_v_shape);
- PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_shape);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2);
- __Pyx_GIVEREF(__pyx_t_3);
- PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4);
- __pyx_t_2 = 0;
- __pyx_t_3 = 0;
- __pyx_t_4 = 0;
- __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 250, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_4);
- __pyx_t_4 = 0;
-
- /* "View.MemoryView":249
- * cdef array result
- *
- * if buf == NULL: # <<<<<<<<<<<<<<
- * result = array(shape, itemsize, format, mode.decode('ASCII'))
- * else:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":252
- * result = array(shape, itemsize, format, mode.decode('ASCII'))
- * else:
- * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<<
- * allocate_buffer=False)
- * result.data = buf
- */
- /*else*/ {
- __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 252, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_5 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 252, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 252, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_2 = PyTuple_New(4); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 252, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_INCREF(__pyx_v_shape);
- __Pyx_GIVEREF(__pyx_v_shape);
- PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_shape);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4);
- __Pyx_GIVEREF(__pyx_t_5);
- PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_3);
- PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_t_3);
- __pyx_t_4 = 0;
- __pyx_t_5 = 0;
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":253
- * else:
- * result = array(shape, itemsize, format, mode.decode('ASCII'),
- * allocate_buffer=False) # <<<<<<<<<<<<<<
- * result.data = buf
- *
- */
- __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 253, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_allocate_buffer, Py_False) < 0) __PYX_ERR(1, 253, __pyx_L1_error)
-
- /* "View.MemoryView":252
- * result = array(shape, itemsize, format, mode.decode('ASCII'))
- * else:
- * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<<
- * allocate_buffer=False)
- * result.data = buf
- */
- __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 252, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_5);
- __pyx_t_5 = 0;
-
- /* "View.MemoryView":254
- * result = array(shape, itemsize, format, mode.decode('ASCII'),
- * allocate_buffer=False)
- * result.data = buf # <<<<<<<<<<<<<<
- *
- * return result
- */
- __pyx_v_result->data = __pyx_v_buf;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":256
- * result.data = buf
- *
- * return result # <<<<<<<<<<<<<<
- *
- *
- */
- __Pyx_XDECREF(((PyObject *)__pyx_r));
- __Pyx_INCREF(((PyObject *)__pyx_v_result));
- __pyx_r = __pyx_v_result;
- goto __pyx_L0;
-
- /* "View.MemoryView":245
- *
- * @cname("__pyx_array_new")
- * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<<
- * char *mode, char *buf):
- * cdef array result
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.array_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_result);
- __Pyx_XGIVEREF((PyObject *)__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
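
`array_cwrapper` builds a positional tuple `(shape, itemsize, format, mode)` and, when wrapping an existing buffer, adds the keyword argument `allocate_buffer=False` before calling the `array` type. A compact sketch of the same call shape using `Py_BuildValue` and `PyObject_Call`; the helper name and its argument conversions (bytes for `format`, UTF-8 rather than explicit ASCII decoding for `mode`) are assumptions for illustration:

```c
#include <Python.h>

/* Hypothetical helper mirroring the else-branch of array_cwrapper():
 * call `callable(shape, itemsize, format, mode, allocate_buffer=False)`. */
static PyObject *call_array_type(PyObject *callable, PyObject *shape,
                                 Py_ssize_t itemsize, const char *format,
                                 const char *mode)
{
    /* "O" passes shape through, "n" converts a Py_ssize_t,
     * "y" makes a bytes object, "s" decodes the mode string to str. */
    PyObject *args = Py_BuildValue("(Onys)", shape, itemsize, format, mode);
    if (args == NULL)
        return NULL;

    PyObject *kwargs = Py_BuildValue("{s:O}", "allocate_buffer", Py_False);
    if (kwargs == NULL) {
        Py_DECREF(args);
        return NULL;
    }

    PyObject *result = PyObject_Call(callable, args, kwargs);
    Py_DECREF(args);
    Py_DECREF(kwargs);
    return result;
}
```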
-
-/* "View.MemoryView":282
- * cdef class Enum(object):
- * cdef object name
- * def __init__(self, name): # <<<<<<<<<<<<<<
- * self.name = name
- * def __repr__(self):
- */
-
-/* Python wrapper */
-static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_name = 0;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__init__ (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_name,0};
- PyObject* values[1] = {0};
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_name)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(1, 282, __pyx_L3_error)
- }
- } else if (PyTuple_GET_SIZE(__pyx_args) != 1) {
- goto __pyx_L5_argtuple_error;
- } else {
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- }
- __pyx_v_name = values[0];
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("__init__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 282, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("View.MemoryView.Enum.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return -1;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), __pyx_v_name);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__init__", 0);
-
- /* "View.MemoryView":283
- * cdef object name
- * def __init__(self, name):
- * self.name = name # <<<<<<<<<<<<<<
- * def __repr__(self):
- * return self.name
- */
- __Pyx_INCREF(__pyx_v_name);
- __Pyx_GIVEREF(__pyx_v_name);
- __Pyx_GOTREF(__pyx_v_self->name);
- __Pyx_DECREF(__pyx_v_self->name);
- __pyx_v_self->name = __pyx_v_name;
-
- /* "View.MemoryView":282
- * cdef class Enum(object):
- * cdef object name
- * def __init__(self, name): # <<<<<<<<<<<<<<
- * self.name = name
- * def __repr__(self):
- */
-
- /* function exit code */
- __pyx_r = 0;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
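
The generated `__init__` above stores `name` with the usual owned-attribute dance: take a reference to the incoming object before touching the slot, then drop the old reference. The same idiom, stripped of the RefNanny bookkeeping, looks like this; the `EnumLike` struct and `set_name` helper are hypothetical:

```c
#include <Python.h>

typedef struct {
    PyObject_HEAD
    PyObject *name;   /* owned reference; may be NULL before first assignment */
} EnumLike;

/* Replace an owned PyObject* field safely: INCREF the new value first,
 * then release the old one, so the slot never dangles even if the DECREF
 * triggers arbitrary Python code. */
static void set_name(EnumLike *self, PyObject *name)
{
    PyObject *old = self->name;
    Py_INCREF(name);
    self->name = name;
    Py_XDECREF(old);
}
```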
-
-/* "View.MemoryView":284
- * def __init__(self, name):
- * self.name = name
- * def __repr__(self): # <<<<<<<<<<<<<<
- * return self.name
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0);
- __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__repr__", 0);
-
- /* "View.MemoryView":285
- * self.name = name
- * def __repr__(self):
- * return self.name # <<<<<<<<<<<<<<
- *
- * cdef generic = Enum("")
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_self->name);
- __pyx_r = __pyx_v_self->name;
- goto __pyx_L0;
-
- /* "View.MemoryView":284
- * def __init__(self, name):
- * self.name = name
- * def __repr__(self): # <<<<<<<<<<<<<<
- * return self.name
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * cdef tuple state
- * cdef object _dict
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_MemviewEnum___reduce_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self) {
- PyObject *__pyx_v_state = 0;
- PyObject *__pyx_v__dict = 0;
- int __pyx_v_use_setstate;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_t_2;
- int __pyx_t_3;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__reduce_cython__", 0);
-
- /* "(tree fragment)":5
- * cdef object _dict
- * cdef bint use_setstate
- * state = (self.name,) # <<<<<<<<<<<<<<
- * _dict = getattr(self, '__dict__', None)
- * if _dict is not None:
- */
- __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(__pyx_v_self->name);
- __Pyx_GIVEREF(__pyx_v_self->name);
- PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_self->name);
- __pyx_v_state = ((PyObject*)__pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "(tree fragment)":6
- * cdef bint use_setstate
- * state = (self.name,)
- * _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<<
- * if _dict is not None:
- * state += (_dict,)
- */
- __pyx_t_1 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_v__dict = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "(tree fragment)":7
- * state = (self.name,)
- * _dict = getattr(self, '__dict__', None)
- * if _dict is not None: # <<<<<<<<<<<<<<
- * state += (_dict,)
- * use_setstate = True
- */
- __pyx_t_2 = (__pyx_v__dict != Py_None);
- __pyx_t_3 = (__pyx_t_2 != 0);
- if (__pyx_t_3) {
-
- /* "(tree fragment)":8
- * _dict = getattr(self, '__dict__', None)
- * if _dict is not None:
- * state += (_dict,) # <<<<<<<<<<<<<<
- * use_setstate = True
- * else:
- */
- __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 8, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(__pyx_v__dict);
- __Pyx_GIVEREF(__pyx_v__dict);
- PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v__dict);
- __pyx_t_4 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 8, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_4));
- __pyx_t_4 = 0;
-
- /* "(tree fragment)":9
- * if _dict is not None:
- * state += (_dict,)
- * use_setstate = True # <<<<<<<<<<<<<<
- * else:
- * use_setstate = self.name is not None
- */
- __pyx_v_use_setstate = 1;
-
- /* "(tree fragment)":7
- * state = (self.name,)
- * _dict = getattr(self, '__dict__', None)
- * if _dict is not None: # <<<<<<<<<<<<<<
- * state += (_dict,)
- * use_setstate = True
- */
- goto __pyx_L3;
- }
-
- /* "(tree fragment)":11
- * use_setstate = True
- * else:
- * use_setstate = self.name is not None # <<<<<<<<<<<<<<
- * if use_setstate:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state
- */
- /*else*/ {
- __pyx_t_3 = (__pyx_v_self->name != Py_None);
- __pyx_v_use_setstate = __pyx_t_3;
- }
- __pyx_L3:;
-
- /* "(tree fragment)":12
- * else:
- * use_setstate = self.name is not None
- * if use_setstate: # <<<<<<<<<<<<<<
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state
- * else:
- */
- __pyx_t_3 = (__pyx_v_use_setstate != 0);
- if (__pyx_t_3) {
-
- /* "(tree fragment)":13
- * use_setstate = self.name is not None
- * if use_setstate:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state # <<<<<<<<<<<<<<
- * else:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state)
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- __Pyx_INCREF(__pyx_int_184977713);
- __Pyx_GIVEREF(__pyx_int_184977713);
- PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713);
- __Pyx_INCREF(Py_None);
- __Pyx_GIVEREF(Py_None);
- PyTuple_SET_ITEM(__pyx_t_1, 2, Py_None);
- __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 13, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_1);
- __Pyx_INCREF(__pyx_v_state);
- __Pyx_GIVEREF(__pyx_v_state);
- PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_v_state);
- __pyx_t_4 = 0;
- __pyx_t_1 = 0;
- __pyx_r = __pyx_t_5;
- __pyx_t_5 = 0;
- goto __pyx_L0;
-
- /* "(tree fragment)":12
- * else:
- * use_setstate = self.name is not None
- * if use_setstate: # <<<<<<<<<<<<<<
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state
- * else:
- */
- }
-
- /* "(tree fragment)":15
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state
- * else:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * __pyx_unpickle_Enum__set_state(self, __pyx_state)
- */
- /*else*/ {
- __Pyx_XDECREF(__pyx_r);
- __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 15, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
- __Pyx_INCREF(__pyx_int_184977713);
- __Pyx_GIVEREF(__pyx_int_184977713);
- PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713);
- __Pyx_INCREF(__pyx_v_state);
- __Pyx_GIVEREF(__pyx_v_state);
- PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state);
- __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_GIVEREF(__pyx_t_5);
- PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1);
- __pyx_t_5 = 0;
- __pyx_t_1 = 0;
- __pyx_r = __pyx_t_4;
- __pyx_t_4 = 0;
- goto __pyx_L0;
- }
-
- /* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * cdef tuple state
- * cdef object _dict
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.Enum.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_state);
- __Pyx_XDECREF(__pyx_v__dict);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
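
`__reduce_cython__` above returns the standard pickle reduce value: a rebuild callable (`__pyx_unpickle_Enum`), its argument tuple `(type(self), checksum, ...)`, and, when `__setstate__` is needed, a separate state tuple. A hedged sketch of assembling the same three-element shape with `Py_BuildValue`; the helper name is a placeholder and the checksum constant is copied from the generated code:

```c
#include <Python.h>

/* Hypothetical sketch: build (unpickle_fn, (cls, 0xb068931, None), state),
 * the reduce value returned when use_setstate is true above. */
static PyObject *build_reduce_value(PyObject *unpickle_fn, PyObject *cls,
                                    PyObject *state)
{
    /* Nested parentheses in the format string build the inner args tuple;
     * "i" converts the integer checksum, "O" passes objects through. */
    return Py_BuildValue("(O(OiO)O)", unpickle_fn, cls, 0xb068931,
                         Py_None, state);
}
```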
-
-/* "(tree fragment)":16
- * else:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state)
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * __pyx_unpickle_Enum__set_state(self, __pyx_state)
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
-static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_MemviewEnum_2__setstate_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setstate_cython__", 0);
-
- /* "(tree fragment)":17
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state)
- * def __setstate_cython__(self, __pyx_state):
- * __pyx_unpickle_Enum__set_state(self, __pyx_state) # <<<<<<<<<<<<<<
- */
- if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||((void)PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 17, __pyx_L1_error)
- __pyx_t_1 = __pyx_unpickle_Enum__set_state(__pyx_v_self, ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-
- /* "(tree fragment)":16
- * else:
- * return __pyx_unpickle_Enum, (type(self), 0xb068931, state)
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * __pyx_unpickle_Enum__set_state(self, __pyx_state)
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.Enum.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":299
- *
- * @cname('__pyx_align_pointer')
- * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<<
- * "Align pointer memory on a given boundary"
- * cdef Py_intptr_t aligned_p = memory
- */
-
-static void *__pyx_align_pointer(void *__pyx_v_memory, size_t __pyx_v_alignment) {
- Py_intptr_t __pyx_v_aligned_p;
- size_t __pyx_v_offset;
- void *__pyx_r;
- int __pyx_t_1;
-
- /* "View.MemoryView":301
- * cdef void *align_pointer(void *memory, size_t alignment) nogil:
- * "Align pointer memory on a given boundary"
- * cdef Py_intptr_t aligned_p = memory # <<<<<<<<<<<<<<
- * cdef size_t offset
- *
- */
- __pyx_v_aligned_p = ((Py_intptr_t)__pyx_v_memory);
-
- /* "View.MemoryView":305
- *
- * with cython.cdivision(True):
- * offset = aligned_p % alignment # <<<<<<<<<<<<<<
- *
- * if offset > 0:
- */
- __pyx_v_offset = (__pyx_v_aligned_p % __pyx_v_alignment);
-
- /* "View.MemoryView":307
- * offset = aligned_p % alignment
- *
- * if offset > 0: # <<<<<<<<<<<<<<
- * aligned_p += alignment - offset
- *
- */
- __pyx_t_1 = ((__pyx_v_offset > 0) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":308
- *
- * if offset > 0:
- * aligned_p += alignment - offset # <<<<<<<<<<<<<<
- *
- * return aligned_p
- */
- __pyx_v_aligned_p = (__pyx_v_aligned_p + (__pyx_v_alignment - __pyx_v_offset));
-
- /* "View.MemoryView":307
- * offset = aligned_p % alignment
- *
- * if offset > 0: # <<<<<<<<<<<<<<
- * aligned_p += alignment - offset
- *
- */
- }
-
- /* "View.MemoryView":310
- * aligned_p += alignment - offset
- *
- * return aligned_p # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = ((void *)__pyx_v_aligned_p);
- goto __pyx_L0;
-
- /* "View.MemoryView":299
- *
- * @cname('__pyx_align_pointer')
- * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<<
- * "Align pointer memory on a given boundary"
- * cdef Py_intptr_t aligned_p = memory
- */
-
- /* function exit code */
- __pyx_L0:;
- return __pyx_r;
-}
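
`align_pointer` rounds an address up to the next multiple of `alignment` with plain modulo arithmetic, so it works for any non-zero alignment, not only powers of two. A standalone version of the same round-up idiom; the `round_up` name is illustrative:

```c
#include <stdint.h>
#include <stddef.h>

/* Round `memory` up to the next multiple of `alignment` (alignment != 0),
 * mirroring the arithmetic in align_pointer() above. */
static void *round_up(void *memory, size_t alignment)
{
    uintptr_t p = (uintptr_t)memory;
    size_t offset = (size_t)(p % alignment);

    if (offset > 0)
        p += (uintptr_t)(alignment - offset);

    return (void *)p;
}
```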
-
-/* "View.MemoryView":346
- * cdef __Pyx_TypeInfo *typeinfo
- *
- * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<<
- * self.obj = obj
- * self.flags = flags
- */
-
-/* Python wrapper */
-static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_obj = 0;
- int __pyx_v_flags;
- int __pyx_v_dtype_is_object;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_obj,&__pyx_n_s_flags,&__pyx_n_s_dtype_is_object,0};
- PyObject* values[3] = {0,0,0};
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_obj)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_flags)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, 1); __PYX_ERR(1, 346, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 2:
- if (kw_args > 0) {
- PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dtype_is_object);
- if (value) { values[2] = value; kw_args--; }
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 346, __pyx_L3_error)
- }
- } else {
- switch (PyTuple_GET_SIZE(__pyx_args)) {
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- break;
- default: goto __pyx_L5_argtuple_error;
- }
- }
- __pyx_v_obj = values[0];
- __pyx_v_flags = __Pyx_PyInt_As_int(values[1]); if (unlikely((__pyx_v_flags == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 346, __pyx_L3_error)
- if (values[2]) {
- __pyx_v_dtype_is_object = __Pyx_PyObject_IsTrue(values[2]); if (unlikely((__pyx_v_dtype_is_object == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 346, __pyx_L3_error)
- } else {
- __pyx_v_dtype_is_object = ((int)0);
- }
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 346, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return -1;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_obj, __pyx_v_flags, __pyx_v_dtype_is_object);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_t_4;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__cinit__", 0);
-
- /* "View.MemoryView":347
- *
- * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False):
- * self.obj = obj # <<<<<<<<<<<<<<
- * self.flags = flags
- * if type(self) is memoryview or obj is not None:
- */
- __Pyx_INCREF(__pyx_v_obj);
- __Pyx_GIVEREF(__pyx_v_obj);
- __Pyx_GOTREF(__pyx_v_self->obj);
- __Pyx_DECREF(__pyx_v_self->obj);
- __pyx_v_self->obj = __pyx_v_obj;
-
- /* "View.MemoryView":348
- * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False):
- * self.obj = obj
- * self.flags = flags # <<<<<<<<<<<<<<
- * if type(self) is memoryview or obj is not None:
- * __Pyx_GetBuffer(obj, &self.view, flags)
- */
- __pyx_v_self->flags = __pyx_v_flags;
-
- /* "View.MemoryView":349
- * self.obj = obj
- * self.flags = flags
- * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<<
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL:
- */
- __pyx_t_2 = (((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))) == ((PyObject *)__pyx_memoryview_type));
- __pyx_t_3 = (__pyx_t_2 != 0);
- if (!__pyx_t_3) {
- } else {
- __pyx_t_1 = __pyx_t_3;
- goto __pyx_L4_bool_binop_done;
- }
- __pyx_t_3 = (__pyx_v_obj != Py_None);
- __pyx_t_2 = (__pyx_t_3 != 0);
- __pyx_t_1 = __pyx_t_2;
- __pyx_L4_bool_binop_done:;
- if (__pyx_t_1) {
-
- /* "View.MemoryView":350
- * self.flags = flags
- * if type(self) is memoryview or obj is not None:
- * __Pyx_GetBuffer(obj, &self.view, flags) # <<<<<<<<<<<<<<
- * if self.view.obj == NULL:
- * (<__pyx_buffer *> &self.view).obj = Py_None
- */
- __pyx_t_4 = __Pyx_GetBuffer(__pyx_v_obj, (&__pyx_v_self->view), __pyx_v_flags); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 350, __pyx_L1_error)
-
- /* "View.MemoryView":351
- * if type(self) is memoryview or obj is not None:
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL: # <<<<<<<<<<<<<<
- * (<__pyx_buffer *> &self.view).obj = Py_None
- * Py_INCREF(Py_None)
- */
- __pyx_t_1 = ((((PyObject *)__pyx_v_self->view.obj) == NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":352
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL:
- * (<__pyx_buffer *> &self.view).obj = Py_None # <<<<<<<<<<<<<<
- * Py_INCREF(Py_None)
- *
- */
- ((Py_buffer *)(&__pyx_v_self->view))->obj = Py_None;
-
- /* "View.MemoryView":353
- * if self.view.obj == NULL:
- * (<__pyx_buffer *> &self.view).obj = Py_None
- * Py_INCREF(Py_None) # <<<<<<<<<<<<<<
- *
- * if not __PYX_CYTHON_ATOMICS_ENABLED():
- */
- Py_INCREF(Py_None);
-
- /* "View.MemoryView":351
- * if type(self) is memoryview or obj is not None:
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL: # <<<<<<<<<<<<<<
- * (<__pyx_buffer *> &self.view).obj = Py_None
- * Py_INCREF(Py_None)
- */
- }
-
- /* "View.MemoryView":349
- * self.obj = obj
- * self.flags = flags
- * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<<
- * __Pyx_GetBuffer(obj, &self.view, flags)
- * if self.view.obj == NULL:
- */
- }
-
- /* "View.MemoryView":355
- * Py_INCREF(Py_None)
- *
- * if not __PYX_CYTHON_ATOMICS_ENABLED(): # <<<<<<<<<<<<<<
- * global __pyx_memoryview_thread_locks_used
- * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED:
- */
- __pyx_t_1 = ((!(__PYX_CYTHON_ATOMICS_ENABLED() != 0)) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":357
- * if not __PYX_CYTHON_ATOMICS_ENABLED():
- * global __pyx_memoryview_thread_locks_used
- * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<<
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1
- */
- __pyx_t_1 = ((__pyx_memoryview_thread_locks_used < 8) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":358
- * global __pyx_memoryview_thread_locks_used
- * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED:
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks_used += 1
- * if self.lock is NULL:
- */
- __pyx_v_self->lock = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]);
-
- /* "View.MemoryView":359
- * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED:
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1 # <<<<<<<<<<<<<<
- * if self.lock is NULL:
- * self.lock = PyThread_allocate_lock()
- */
- __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used + 1);
-
- /* "View.MemoryView":357
- * if not __PYX_CYTHON_ATOMICS_ENABLED():
- * global __pyx_memoryview_thread_locks_used
- * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<<
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1
- */
- }
-
- /* "View.MemoryView":360
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1
- * if self.lock is NULL: # <<<<<<<<<<<<<<
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL:
- */
- __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":361
- * __pyx_memoryview_thread_locks_used += 1
- * if self.lock is NULL:
- * self.lock = PyThread_allocate_lock() # <<<<<<<<<<<<<<
- * if self.lock is NULL:
- * raise MemoryError
- */
- __pyx_v_self->lock = PyThread_allocate_lock();
-
- /* "View.MemoryView":362
- * if self.lock is NULL:
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL: # <<<<<<<<<<<<<<
- * raise MemoryError
- *
- */
- __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0);
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":363
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL:
- * raise MemoryError # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_FORMAT:
- */
- PyErr_NoMemory(); __PYX_ERR(1, 363, __pyx_L1_error)
-
- /* "View.MemoryView":362
- * if self.lock is NULL:
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL: # <<<<<<<<<<<<<<
- * raise MemoryError
- *
- */
- }
-
- /* "View.MemoryView":360
- * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]
- * __pyx_memoryview_thread_locks_used += 1
- * if self.lock is NULL: # <<<<<<<<<<<<<<
- * self.lock = PyThread_allocate_lock()
- * if self.lock is NULL:
- */
- }
-
- /* "View.MemoryView":355
- * Py_INCREF(Py_None)
- *
- * if not __PYX_CYTHON_ATOMICS_ENABLED(): # <<<<<<<<<<<<<<
- * global __pyx_memoryview_thread_locks_used
- * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED:
- */
- }
-
- /* "View.MemoryView":365
- * raise MemoryError
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0')
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":366
- *
- * if flags & PyBUF_FORMAT:
- * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') # <<<<<<<<<<<<<<
- * else:
- * self.dtype_is_object = dtype_is_object
- */
- __pyx_t_2 = (((__pyx_v_self->view.format[0]) == 'O') != 0);
- if (__pyx_t_2) {
- } else {
- __pyx_t_1 = __pyx_t_2;
- goto __pyx_L12_bool_binop_done;
- }
- __pyx_t_2 = (((__pyx_v_self->view.format[1]) == '\x00') != 0);
- __pyx_t_1 = __pyx_t_2;
- __pyx_L12_bool_binop_done:;
- __pyx_v_self->dtype_is_object = __pyx_t_1;
-
- /* "View.MemoryView":365
- * raise MemoryError
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0')
- * else:
- */
- goto __pyx_L11;
- }
-
- /* "View.MemoryView":368
- * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0')
- * else:
- * self.dtype_is_object = dtype_is_object # <<<<<<<<<<<<<<
- *
- * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer(
- */
- /*else*/ {
- __pyx_v_self->dtype_is_object = __pyx_v_dtype_is_object;
- }
- __pyx_L11:;
-
- /* "View.MemoryView":370
- * self.dtype_is_object = dtype_is_object
- *
- * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( # <<<<<<<<<<<<<<
- * &self.acquisition_count[0], sizeof(__pyx_atomic_int))
- * self.typeinfo = NULL
- */
- __pyx_v_self->acquisition_count_aligned_p = ((__pyx_atomic_int *)__pyx_align_pointer(((void *)(&(__pyx_v_self->acquisition_count[0]))), (sizeof(__pyx_atomic_int))));
-
- /* "View.MemoryView":372
- * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer(
- * &self.acquisition_count[0], sizeof(__pyx_atomic_int))
- * self.typeinfo = NULL # <<<<<<<<<<<<<<
- *
- * def __dealloc__(memoryview self):
- */
- __pyx_v_self->typeinfo = NULL;
-
- /* "View.MemoryView":346
- * cdef __Pyx_TypeInfo *typeinfo
- *
- * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<<
- * self.obj = obj
- * self.flags = flags
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":374
- * self.typeinfo = NULL
- *
- * def __dealloc__(memoryview self): # <<<<<<<<<<<<<<
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view)
- */
-
-/* Python wrapper */
-static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self); /*proto*/
-static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0);
- __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self) {
- int __pyx_v_i;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_t_4;
- int __pyx_t_5;
- PyThread_type_lock __pyx_t_6;
- PyThread_type_lock __pyx_t_7;
- __Pyx_RefNannySetupContext("__dealloc__", 0);
-
- /* "View.MemoryView":375
- *
- * def __dealloc__(memoryview self):
- * if self.obj is not None: # <<<<<<<<<<<<<<
- * __Pyx_ReleaseBuffer(&self.view)
- * elif (<__pyx_buffer *> &self.view).obj == Py_None:
- */
- __pyx_t_1 = (__pyx_v_self->obj != Py_None);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":376
- * def __dealloc__(memoryview self):
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view) # <<<<<<<<<<<<<<
- * elif (<__pyx_buffer *> &self.view).obj == Py_None:
- *
- */
- __Pyx_ReleaseBuffer((&__pyx_v_self->view));
-
- /* "View.MemoryView":375
- *
- * def __dealloc__(memoryview self):
- * if self.obj is not None: # <<<<<<<<<<<<<<
- * __Pyx_ReleaseBuffer(&self.view)
- * elif (<__pyx_buffer *> &self.view).obj == Py_None:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":377
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view)
- * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<<
- *
- * (<__pyx_buffer *> &self.view).obj = NULL
- */
- __pyx_t_2 = ((((Py_buffer *)(&__pyx_v_self->view))->obj == Py_None) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":379
- * elif (<__pyx_buffer *> &self.view).obj == Py_None:
- *
- * (<__pyx_buffer *> &self.view).obj = NULL # <<<<<<<<<<<<<<
- * Py_DECREF(Py_None)
- *
- */
- ((Py_buffer *)(&__pyx_v_self->view))->obj = NULL;
-
- /* "View.MemoryView":380
- *
- * (<__pyx_buffer *> &self.view).obj = NULL
- * Py_DECREF(Py_None) # <<<<<<<<<<<<<<
- *
- * cdef int i
- */
- Py_DECREF(Py_None);
-
- /* "View.MemoryView":377
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view)
- * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<<
- *
- * (<__pyx_buffer *> &self.view).obj = NULL
- */
- }
- __pyx_L3:;
-
- /* "View.MemoryView":384
- * cdef int i
- * global __pyx_memoryview_thread_locks_used
- * if self.lock != NULL: # <<<<<<<<<<<<<<
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- */
- __pyx_t_2 = ((__pyx_v_self->lock != NULL) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":385
- * global __pyx_memoryview_thread_locks_used
- * if self.lock != NULL:
- * for i in range(__pyx_memoryview_thread_locks_used): # <<<<<<<<<<<<<<
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- * __pyx_memoryview_thread_locks_used -= 1
- */
- __pyx_t_3 = __pyx_memoryview_thread_locks_used;
- __pyx_t_4 = __pyx_t_3;
- for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) {
- __pyx_v_i = __pyx_t_5;
-
- /* "View.MemoryView":386
- * if self.lock != NULL:
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used:
- */
- __pyx_t_2 = (((__pyx_memoryview_thread_locks[__pyx_v_i]) == __pyx_v_self->lock) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":387
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- * __pyx_memoryview_thread_locks_used -= 1 # <<<<<<<<<<<<<<
- * if i != __pyx_memoryview_thread_locks_used:
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- */
- __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used - 1);
-
- /* "View.MemoryView":388
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i])
- */
- __pyx_t_2 = ((__pyx_v_i != __pyx_memoryview_thread_locks_used) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":390
- * if i != __pyx_memoryview_thread_locks_used:
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) # <<<<<<<<<<<<<<
- * break
- * else:
- */
- __pyx_t_6 = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]);
- __pyx_t_7 = (__pyx_memoryview_thread_locks[__pyx_v_i]);
-
- /* "View.MemoryView":389
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used:
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i])
- * break
- */
- (__pyx_memoryview_thread_locks[__pyx_v_i]) = __pyx_t_6;
- (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]) = __pyx_t_7;
-
- /* "View.MemoryView":388
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i])
- */
- }
-
- /* "View.MemoryView":391
- * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = (
- * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i])
- * break # <<<<<<<<<<<<<<
- * else:
- * PyThread_free_lock(self.lock)
- */
- goto __pyx_L6_break;
-
- /* "View.MemoryView":386
- * if self.lock != NULL:
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<<
- * __pyx_memoryview_thread_locks_used -= 1
- * if i != __pyx_memoryview_thread_locks_used:
- */
- }
- }
- /*else*/ {
-
- /* "View.MemoryView":393
- * break
- * else:
- * PyThread_free_lock(self.lock) # <<<<<<<<<<<<<<
- *
- * cdef char *get_item_pointer(memoryview self, object index) except NULL:
- */
- PyThread_free_lock(__pyx_v_self->lock);
- }
- __pyx_L6_break:;
-
- /* "View.MemoryView":384
- * cdef int i
- * global __pyx_memoryview_thread_locks_used
- * if self.lock != NULL: # <<<<<<<<<<<<<<
- * for i in range(__pyx_memoryview_thread_locks_used):
- * if __pyx_memoryview_thread_locks[i] is self.lock:
- */
- }
-
- /* "View.MemoryView":374
- * self.typeinfo = NULL
- *
- * def __dealloc__(memoryview self): # <<<<<<<<<<<<<<
- * if self.obj is not None:
- * __Pyx_ReleaseBuffer(&self.view)
- */
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
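The generated `__dealloc__` above releases the buffer and then returns the per-memoryview lock to a small global cache by swapping it into the last used slot; a lock that is not found in the cache is freed with `PyThread_free_lock`. A minimal Python-level sketch of that swap-with-last bookkeeping (the `locks`/`used` names are illustrative stand-ins, not part of the generated code):

```python
# Illustrative sketch only: the real cache is a fixed C array of PyThread locks.
locks = [object() for _ in range(8)]  # stand-ins for cached thread locks
used = len(locks)

def release_lock(lock):
    """Return `lock` to the cache by swapping it into the last used slot."""
    global used
    for i in range(used):
        if locks[i] is lock:
            used -= 1
            if i != used:
                locks[i], locks[used] = locks[used], locks[i]
            return
    # Not cached: the C code would call PyThread_free_lock(lock) here instead.
```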
-
-/* "View.MemoryView":395
- * PyThread_free_lock(self.lock)
- *
- * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<<
- * cdef Py_ssize_t dim
- * cdef char *itemp = <char *> self.view.buf
- */
-
-static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) {
- Py_ssize_t __pyx_v_dim;
- char *__pyx_v_itemp;
- PyObject *__pyx_v_idx = NULL;
- char *__pyx_r;
- __Pyx_RefNannyDeclarations
- Py_ssize_t __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- Py_ssize_t __pyx_t_3;
- PyObject *(*__pyx_t_4)(PyObject *);
- PyObject *__pyx_t_5 = NULL;
- Py_ssize_t __pyx_t_6;
- char *__pyx_t_7;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("get_item_pointer", 0);
-
- /* "View.MemoryView":397
- * cdef char *get_item_pointer(memoryview self, object index) except NULL:
- * cdef Py_ssize_t dim
- * cdef char *itemp = <char *> self.view.buf # <<<<<<<<<<<<<<
- *
- * for dim, idx in enumerate(index):
- */
- __pyx_v_itemp = ((char *)__pyx_v_self->view.buf);
-
- /* "View.MemoryView":399
- * cdef char *itemp = <char *> self.view.buf
- *
- * for dim, idx in enumerate(index): # <<<<<<<<<<<<<<
- * itemp = pybuffer_index(&self.view, itemp, idx, dim)
- *
- */
- __pyx_t_1 = 0;
- if (likely(PyList_CheckExact(__pyx_v_index)) || PyTuple_CheckExact(__pyx_v_index)) {
- __pyx_t_2 = __pyx_v_index; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0;
- __pyx_t_4 = NULL;
- } else {
- __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 399, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 399, __pyx_L1_error)
- }
- for (;;) {
- if (likely(!__pyx_t_4)) {
- if (likely(PyList_CheckExact(__pyx_t_2))) {
- if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 399, __pyx_L1_error)
- #else
- __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 399, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- #endif
- } else {
- if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 399, __pyx_L1_error)
- #else
- __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 399, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- #endif
- }
- } else {
- __pyx_t_5 = __pyx_t_4(__pyx_t_2);
- if (unlikely(!__pyx_t_5)) {
- PyObject* exc_type = PyErr_Occurred();
- if (exc_type) {
- if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
- else __PYX_ERR(1, 399, __pyx_L1_error)
- }
- break;
- }
- __Pyx_GOTREF(__pyx_t_5);
- }
- __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_5);
- __pyx_t_5 = 0;
- __pyx_v_dim = __pyx_t_1;
- __pyx_t_1 = (__pyx_t_1 + 1);
-
- /* "View.MemoryView":400
- *
- * for dim, idx in enumerate(index):
- * itemp = pybuffer_index(&self.view, itemp, idx, dim) # <<<<<<<<<<<<<<
- *
- * return itemp
- */
- __pyx_t_6 = __Pyx_PyIndex_AsSsize_t(__pyx_v_idx); if (unlikely((__pyx_t_6 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 400, __pyx_L1_error)
- __pyx_t_7 = __pyx_pybuffer_index((&__pyx_v_self->view), __pyx_v_itemp, __pyx_t_6, __pyx_v_dim); if (unlikely(__pyx_t_7 == ((char *)NULL))) __PYX_ERR(1, 400, __pyx_L1_error)
- __pyx_v_itemp = __pyx_t_7;
-
- /* "View.MemoryView":399
- * cdef char *itemp = <char *> self.view.buf
- *
- * for dim, idx in enumerate(index): # <<<<<<<<<<<<<<
- * itemp = pybuffer_index(&self.view, itemp, idx, dim)
- *
- */
- }
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "View.MemoryView":402
- * itemp = pybuffer_index(&self.view, itemp, idx, dim)
- *
- * return itemp # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = __pyx_v_itemp;
- goto __pyx_L0;
-
- /* "View.MemoryView":395
- * PyThread_free_lock(self.lock)
- *
- * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<<
- * cdef Py_ssize_t dim
- * cdef char *itemp = <char *> self.view.buf
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.memoryview.get_item_pointer", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_idx);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
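`get_item_pointer` walks the index tuple and lets `pybuffer_index` advance the raw item pointer by `index * stride` for each dimension (the bounds, negative-index, and suboffset handling lives in that helper and is omitted here). The offset arithmetic can be checked against CPython's built-in `memoryview`, which exposes the same strides; this is only an analogy, not the generated code:

```python
def item_offset(indices, strides):
    # Simplified mirror of the loop above: each index advances the item
    # pointer by index * stride; negative indices and suboffsets are ignored.
    off = 0
    for dim, idx in enumerate(indices):
        off += idx * strides[dim]
    return off

mv = memoryview(bytes(range(12))).cast("B", (3, 4))  # 3x4 view over 12 bytes
print(mv.strides)                       # (4, 1)
print(item_offset((1, 2), mv.strides))  # 6, the byte offset of element [1, 2]
print(mv[1, 2])                         # 6, the byte stored at that offset
```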
-
-/* "View.MemoryView":405
- *
- *
- * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<<
- * if index is Ellipsis:
- * return self
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index); /*proto*/
-static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) {
- PyObject *__pyx_v_have_slices = NULL;
- PyObject *__pyx_v_indices = NULL;
- char *__pyx_v_itemp;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- char *__pyx_t_6;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__getitem__", 0);
-
- /* "View.MemoryView":406
- *
- * def __getitem__(memoryview self, object index):
- * if index is Ellipsis: # <<<<<<<<<<<<<<
- * return self
- *
- */
- __pyx_t_1 = (__pyx_v_index == __pyx_builtin_Ellipsis);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":407
- * def __getitem__(memoryview self, object index):
- * if index is Ellipsis:
- * return self # <<<<<<<<<<<<<<
- *
- * have_slices, indices = _unellipsify(index, self.view.ndim)
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(((PyObject *)__pyx_v_self));
- __pyx_r = ((PyObject *)__pyx_v_self);
- goto __pyx_L0;
-
- /* "View.MemoryView":406
- *
- * def __getitem__(memoryview self, object index):
- * if index is Ellipsis: # <<<<<<<<<<<<<<
- * return self
- *
- */
- }
-
- /* "View.MemoryView":409
- * return self
- *
- * have_slices, indices = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<<
- *
- * cdef char *itemp
- */
- __pyx_t_3 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 409, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- if (likely(__pyx_t_3 != Py_None)) {
- PyObject* sequence = __pyx_t_3;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(1, 409, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_4 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_5 = PyTuple_GET_ITEM(sequence, 1);
- __Pyx_INCREF(__pyx_t_4);
- __Pyx_INCREF(__pyx_t_5);
- #else
- __pyx_t_4 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 409, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_5 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 409, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- #endif
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- } else {
- __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 409, __pyx_L1_error)
- }
- __pyx_v_have_slices = __pyx_t_4;
- __pyx_t_4 = 0;
- __pyx_v_indices = __pyx_t_5;
- __pyx_t_5 = 0;
-
- /* "View.MemoryView":412
- *
- * cdef char *itemp
- * if have_slices: # <<<<<<<<<<<<<<
- * return memview_slice(self, indices)
- * else:
- */
- __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 412, __pyx_L1_error)
- if (__pyx_t_2) {
-
- /* "View.MemoryView":413
- * cdef char *itemp
- * if have_slices:
- * return memview_slice(self, indices) # <<<<<<<<<<<<<<
- * else:
- * itemp = self.get_item_pointer(indices)
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_3 = ((PyObject *)__pyx_memview_slice(__pyx_v_self, __pyx_v_indices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 413, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_r = __pyx_t_3;
- __pyx_t_3 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":412
- *
- * cdef char *itemp
- * if have_slices: # <<<<<<<<<<<<<<
- * return memview_slice(self, indices)
- * else:
- */
- }
-
- /* "View.MemoryView":415
- * return memview_slice(self, indices)
- * else:
- * itemp = self.get_item_pointer(indices) # <<<<<<<<<<<<<<
- * return self.convert_item_to_object(itemp)
- *
- */
- /*else*/ {
- __pyx_t_6 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_indices); if (unlikely(__pyx_t_6 == ((char *)NULL))) __PYX_ERR(1, 415, __pyx_L1_error)
- __pyx_v_itemp = __pyx_t_6;
-
- /* "View.MemoryView":416
- * else:
- * itemp = self.get_item_pointer(indices)
- * return self.convert_item_to_object(itemp) # <<<<<<<<<<<<<<
- *
- * def __setitem__(memoryview self, object index, object value):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->convert_item_to_object(__pyx_v_self, __pyx_v_itemp); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 416, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_r = __pyx_t_3;
- __pyx_t_3 = 0;
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":405
- *
- *
- * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<<
- * if index is Ellipsis:
- * return self
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_have_slices);
- __Pyx_XDECREF(__pyx_v_indices);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
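`__getitem__` returns `self` for `...`, then asks `_unellipsify` whether the index contains slices: slice indices produce a new sliced view via `memview_slice`, while a full tuple of integers resolves to a single item through `get_item_pointer` and `convert_item_to_object`. A simplified sketch of what `_unellipsify` computes (it handles only a single `Ellipsis` and skips the trailing-dimension padding the real helper performs):

```python
def unellipsify(index, ndim):
    """Rough sketch: expand one Ellipsis into full slices and report whether
    any slicing is involved. Names and details are illustrative."""
    if not isinstance(index, tuple):
        index = (index,)
    if Ellipsis in index:
        pos = index.index(Ellipsis)
        fill = (slice(None),) * (ndim - (len(index) - 1))
        index = index[:pos] + fill + index[pos + 1:]
    have_slices = any(isinstance(x, slice) for x in index)
    return have_slices, index

print(unellipsify((..., 0), 3))  # (True, (slice(None, None, None), slice(None, None, None), 0))
print(unellipsify((1, 2), 2))    # (False, (1, 2))
```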
-
-/* "View.MemoryView":418
- * return self.convert_item_to_object(itemp)
- *
- * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<<
- * if self.view.readonly:
- * raise TypeError("Cannot assign to read-only memoryview")
- */
-
-/* Python wrapper */
-static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /*proto*/
-static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index), ((PyObject *)__pyx_v_value));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) {
- PyObject *__pyx_v_have_slices = NULL;
- PyObject *__pyx_v_obj = NULL;
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setitem__", 0);
- __Pyx_INCREF(__pyx_v_index);
-
- /* "View.MemoryView":419
- *
- * def __setitem__(memoryview self, object index, object value):
- * if self.view.readonly: # <<<<<<<<<<<<<<
- * raise TypeError("Cannot assign to read-only memoryview")
- *
- */
- __pyx_t_1 = (__pyx_v_self->view.readonly != 0);
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":420
- * def __setitem__(memoryview self, object index, object value):
- * if self.view.readonly:
- * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<<
- *
- * have_slices, index = _unellipsify(index, self.view.ndim)
- */
- __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 420, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_Raise(__pyx_t_2, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __PYX_ERR(1, 420, __pyx_L1_error)
-
- /* "View.MemoryView":419
- *
- * def __setitem__(memoryview self, object index, object value):
- * if self.view.readonly: # <<<<<<<<<<<<<<
- * raise TypeError("Cannot assign to read-only memoryview")
- *
- */
- }
-
- /* "View.MemoryView":422
- * raise TypeError("Cannot assign to read-only memoryview")
- *
- * have_slices, index = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<<
- *
- * if have_slices:
- */
- __pyx_t_2 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 422, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- if (likely(__pyx_t_2 != Py_None)) {
- PyObject* sequence = __pyx_t_2;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(1, 422, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1);
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_INCREF(__pyx_t_4);
- #else
- __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 422, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 422, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- #endif
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- } else {
- __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 422, __pyx_L1_error)
- }
- __pyx_v_have_slices = __pyx_t_3;
- __pyx_t_3 = 0;
- __Pyx_DECREF_SET(__pyx_v_index, __pyx_t_4);
- __pyx_t_4 = 0;
-
- /* "View.MemoryView":424
- * have_slices, index = _unellipsify(index, self.view.ndim)
- *
- * if have_slices: # <<<<<<<<<<<<<<
- * obj = self.is_slice(value)
- * if obj:
- */
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 424, __pyx_L1_error)
- if (__pyx_t_1) {
-
- /* "View.MemoryView":425
- *
- * if have_slices:
- * obj = self.is_slice(value) # <<<<<<<<<<<<<<
- * if obj:
- * self.setitem_slice_assignment(self[index], obj)
- */
- __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->is_slice(__pyx_v_self, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 425, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_v_obj = __pyx_t_2;
- __pyx_t_2 = 0;
-
- /* "View.MemoryView":426
- * if have_slices:
- * obj = self.is_slice(value)
- * if obj: # <<<<<<<<<<<<<<
- * self.setitem_slice_assignment(self[index], obj)
- * else:
- */
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_obj); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 426, __pyx_L1_error)
- if (__pyx_t_1) {
-
- /* "View.MemoryView":427
- * obj = self.is_slice(value)
- * if obj:
- * self.setitem_slice_assignment(self[index], obj) # <<<<<<<<<<<<<<
- * else:
- * self.setitem_slice_assign_scalar(self[index], value)
- */
- __pyx_t_2 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 427, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_4 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assignment(__pyx_v_self, __pyx_t_2, __pyx_v_obj); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 427, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
-
- /* "View.MemoryView":426
- * if have_slices:
- * obj = self.is_slice(value)
- * if obj: # <<<<<<<<<<<<<<
- * self.setitem_slice_assignment(self[index], obj)
- * else:
- */
- goto __pyx_L5;
- }
-
- /* "View.MemoryView":429
- * self.setitem_slice_assignment(self[index], obj)
- * else:
- * self.setitem_slice_assign_scalar(self[index], value) # <<<<<<<<<<<<<<
- * else:
- * self.setitem_indexed(index, value)
- */
- /*else*/ {
- __pyx_t_4 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 429, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_memoryview_type))))) __PYX_ERR(1, 429, __pyx_L1_error)
- __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assign_scalar(__pyx_v_self, ((struct __pyx_memoryview_obj *)__pyx_t_4), __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 429, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- }
- __pyx_L5:;
-
- /* "View.MemoryView":424
- * have_slices, index = _unellipsify(index, self.view.ndim)
- *
- * if have_slices: # <<<<<<<<<<<<<<
- * obj = self.is_slice(value)
- * if obj:
- */
- goto __pyx_L4;
- }
-
- /* "View.MemoryView":431
- * self.setitem_slice_assign_scalar(self[index], value)
- * else:
- * self.setitem_indexed(index, value) # <<<<<<<<<<<<<<
- *
- * cdef is_slice(self, obj):
- */
- /*else*/ {
- __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_indexed(__pyx_v_self, __pyx_v_index, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 431, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- }
- __pyx_L4:;
-
- /* "View.MemoryView":418
- * return self.convert_item_to_object(itemp)
- *
- * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<<
- * if self.view.readonly:
- * raise TypeError("Cannot assign to read-only memoryview")
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_have_slices);
- __Pyx_XDECREF(__pyx_v_obj);
- __Pyx_XDECREF(__pyx_v_index);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
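`__setitem__` first rejects read-only views, then dispatches: slice indices copy from another buffer (or broadcast a scalar), and plain integer indices go through `setitem_indexed`. CPython's built-in `memoryview` enforces the same read-only rule, which makes the first check easy to demonstrate:

```python
ro = memoryview(b"read-only")        # bytes export a read-only buffer
try:
    ro[0] = 0
except TypeError as exc:
    print("rejected:", exc)          # e.g. "cannot modify read-only memory"

rw = memoryview(bytearray(b"abc"))   # bytearray exports a writable buffer
rw[0] = ord("x")
print(bytes(rw))                     # b'xbc'
```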
-
-/* "View.MemoryView":433
- * self.setitem_indexed(index, value)
- *
- * cdef is_slice(self, obj): # <<<<<<<<<<<<<<
- * if not isinstance(obj, memoryview):
- * try:
- */
-
-static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- PyObject *__pyx_t_6 = NULL;
- PyObject *__pyx_t_7 = NULL;
- PyObject *__pyx_t_8 = NULL;
- int __pyx_t_9;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("is_slice", 0);
- __Pyx_INCREF(__pyx_v_obj);
-
- /* "View.MemoryView":434
- *
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<<
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- */
- __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_obj, __pyx_memoryview_type);
- __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":435
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview):
- * try: # <<<<<<<<<<<<<<
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object)
- */
- {
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_4, &__pyx_t_5);
- __Pyx_XGOTREF(__pyx_t_3);
- __Pyx_XGOTREF(__pyx_t_4);
- __Pyx_XGOTREF(__pyx_t_5);
- /*try:*/ {
-
- /* "View.MemoryView":436
- * if not isinstance(obj, memoryview):
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<<
- * self.dtype_is_object)
- * except TypeError:
- */
- __pyx_t_6 = __Pyx_PyInt_From_int(((__pyx_v_self->flags & (~PyBUF_WRITABLE)) | PyBUF_ANY_CONTIGUOUS)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 436, __pyx_L4_error)
- __Pyx_GOTREF(__pyx_t_6);
-
- /* "View.MemoryView":437
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object) # <<<<<<<<<<<<<<
- * except TypeError:
- * return None
- */
- __pyx_t_7 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 437, __pyx_L4_error)
- __Pyx_GOTREF(__pyx_t_7);
-
- /* "View.MemoryView":436
- * if not isinstance(obj, memoryview):
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<<
- * self.dtype_is_object)
- * except TypeError:
- */
- __pyx_t_8 = PyTuple_New(3); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 436, __pyx_L4_error)
- __Pyx_GOTREF(__pyx_t_8);
- __Pyx_INCREF(__pyx_v_obj);
- __Pyx_GIVEREF(__pyx_v_obj);
- PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_obj);
- __Pyx_GIVEREF(__pyx_t_6);
- PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_t_6);
- __Pyx_GIVEREF(__pyx_t_7);
- PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_t_7);
- __pyx_t_6 = 0;
- __pyx_t_7 = 0;
- __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_8, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 436, __pyx_L4_error)
- __Pyx_GOTREF(__pyx_t_7);
- __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
- __Pyx_DECREF_SET(__pyx_v_obj, __pyx_t_7);
- __pyx_t_7 = 0;
-
- /* "View.MemoryView":435
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview):
- * try: # <<<<<<<<<<<<<<
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object)
- */
- }
- __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- goto __pyx_L9_try_end;
- __pyx_L4_error:;
- __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;
-
- /* "View.MemoryView":438
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object)
- * except TypeError: # <<<<<<<<<<<<<<
- * return None
- *
- */
- __pyx_t_9 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_TypeError);
- if (__pyx_t_9) {
- __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- if (__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_6) < 0) __PYX_ERR(1, 438, __pyx_L6_except_error)
- __Pyx_GOTREF(__pyx_t_7);
- __Pyx_GOTREF(__pyx_t_8);
- __Pyx_GOTREF(__pyx_t_6);
-
- /* "View.MemoryView":439
- * self.dtype_is_object)
- * except TypeError:
- * return None # <<<<<<<<<<<<<<
- *
- * return obj
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
- goto __pyx_L7_except_return;
- }
- goto __pyx_L6_except_error;
- __pyx_L6_except_error:;
-
- /* "View.MemoryView":435
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview):
- * try: # <<<<<<<<<<<<<<
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- * self.dtype_is_object)
- */
- __Pyx_XGIVEREF(__pyx_t_3);
- __Pyx_XGIVEREF(__pyx_t_4);
- __Pyx_XGIVEREF(__pyx_t_5);
- __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5);
- goto __pyx_L1_error;
- __pyx_L7_except_return:;
- __Pyx_XGIVEREF(__pyx_t_3);
- __Pyx_XGIVEREF(__pyx_t_4);
- __Pyx_XGIVEREF(__pyx_t_5);
- __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5);
- goto __pyx_L0;
- __pyx_L9_try_end:;
- }
-
- /* "View.MemoryView":434
- *
- * cdef is_slice(self, obj):
- * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<<
- * try:
- * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
- */
- }
-
- /* "View.MemoryView":441
- * return None
- *
- * return obj # <<<<<<<<<<<<<<
- *
- * cdef setitem_slice_assignment(self, dst, src):
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_obj);
- __pyx_r = __pyx_v_obj;
- goto __pyx_L0;
-
- /* "View.MemoryView":433
- * self.setitem_indexed(index, value)
- *
- * cdef is_slice(self, obj): # <<<<<<<<<<<<<<
- * if not isinstance(obj, memoryview):
- * try:
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_XDECREF(__pyx_t_7);
- __Pyx_XDECREF(__pyx_t_8);
- __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_obj);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
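`is_slice` coerces any buffer-exporting object into a memoryview (requesting an any-contiguous layout with the writable bit cleared) and returns `None` when the object does not support the buffer protocol. The same pattern in plain Python, using the built-in `memoryview` as a stand-in:

```python
def as_view(obj):
    """Return a memoryview over obj, or None if obj is not a buffer exporter."""
    if isinstance(obj, memoryview):
        return obj
    try:
        return memoryview(obj)   # the generated code also passes flags and dtype info
    except TypeError:
        return None

print(as_view(bytearray(4)))  # <memory at 0x...>
print(as_view(42))            # None: ints do not export a buffer
```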
-
-/* "View.MemoryView":443
- * return obj
- *
- * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice dst_slice
- * cdef __Pyx_memviewslice src_slice
- */
-
-static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src) {
- __Pyx_memviewslice __pyx_v_dst_slice;
- __Pyx_memviewslice __pyx_v_src_slice;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice *__pyx_t_1;
- __Pyx_memviewslice *__pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_t_4;
- int __pyx_t_5;
- int __pyx_t_6;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("setitem_slice_assignment", 0);
-
- /* "View.MemoryView":447
- * cdef __Pyx_memviewslice src_slice
- *
- * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<<
- * get_slice_from_memview(dst, &dst_slice)[0],
- * src.ndim, dst.ndim, self.dtype_is_object)
- */
- if (!(likely(((__pyx_v_src) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_src, __pyx_memoryview_type))))) __PYX_ERR(1, 447, __pyx_L1_error)
- __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_src), (&__pyx_v_src_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 447, __pyx_L1_error)
-
- /* "View.MemoryView":448
- *
- * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0],
- * get_slice_from_memview(dst, &dst_slice)[0], # <<<<<<<<<<<<<<
- * src.ndim, dst.ndim, self.dtype_is_object)
- *
- */
- if (!(likely(((__pyx_v_dst) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_dst, __pyx_memoryview_type))))) __PYX_ERR(1, 448, __pyx_L1_error)
- __pyx_t_2 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_dst), (&__pyx_v_dst_slice)); if (unlikely(__pyx_t_2 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 448, __pyx_L1_error)
-
- /* "View.MemoryView":449
- * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0],
- * get_slice_from_memview(dst, &dst_slice)[0],
- * src.ndim, dst.ndim, self.dtype_is_object) # <<<<<<<<<<<<<<
- *
- * cdef setitem_slice_assign_scalar(self, memoryview dst, value):
- */
- __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_src, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 449, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 449, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_dst, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 449, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 449, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":447
- * cdef __Pyx_memviewslice src_slice
- *
- * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<<
- * get_slice_from_memview(dst, &dst_slice)[0],
- * src.ndim, dst.ndim, self.dtype_is_object)
- */
- __pyx_t_6 = __pyx_memoryview_copy_contents((__pyx_t_1[0]), (__pyx_t_2[0]), __pyx_t_4, __pyx_t_5, __pyx_v_self->dtype_is_object); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 447, __pyx_L1_error)
-
- /* "View.MemoryView":443
- * return obj
- *
- * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice dst_slice
- * cdef __Pyx_memviewslice src_slice
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assignment", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
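`setitem_slice_assignment` unwraps both memoryviews into `__Pyx_memviewslice` structs and hands them to `memoryview_copy_contents`, which copies element-wise across arbitrary dimensions and strides. For the 1-D case the behaviour matches slice assignment between built-in memoryviews of equal length and format:

```python
dst = bytearray(b"______")
src = memoryview(b"abcdef")
memoryview(dst)[1:4] = src[0:3]   # copy three items from one view into another
print(dst)                        # bytearray(b'_abc__')
```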
-
-/* "View.MemoryView":451
- * src.ndim, dst.ndim, self.dtype_is_object)
- *
- * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<<
- * cdef int array[128]
- * cdef void *tmp = NULL
- */
-
-static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value) {
- int __pyx_v_array[0x80];
- void *__pyx_v_tmp;
- void *__pyx_v_item;
- __Pyx_memviewslice *__pyx_v_dst_slice;
- __Pyx_memviewslice __pyx_v_tmp_slice;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice *__pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_t_4;
- int __pyx_t_5;
- char const *__pyx_t_6;
- PyObject *__pyx_t_7 = NULL;
- PyObject *__pyx_t_8 = NULL;
- PyObject *__pyx_t_9 = NULL;
- PyObject *__pyx_t_10 = NULL;
- PyObject *__pyx_t_11 = NULL;
- PyObject *__pyx_t_12 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("setitem_slice_assign_scalar", 0);
-
- /* "View.MemoryView":453
- * cdef setitem_slice_assign_scalar(self, memoryview dst, value):
- * cdef int array[128]
- * cdef void *tmp = NULL # <<<<<<<<<<<<<<
- * cdef void *item
- *
- */
- __pyx_v_tmp = NULL;
-
- /* "View.MemoryView":458
- * cdef __Pyx_memviewslice *dst_slice
- * cdef __Pyx_memviewslice tmp_slice
- * dst_slice = get_slice_from_memview(dst, &tmp_slice) # <<<<<<<<<<<<<<
- *
- * if <size_t>self.view.itemsize > sizeof(array):
- */
- __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_dst, (&__pyx_v_tmp_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 458, __pyx_L1_error)
- __pyx_v_dst_slice = __pyx_t_1;
-
- /* "View.MemoryView":460
- * dst_slice = get_slice_from_memview(dst, &tmp_slice)
- *
- * if <size_t>self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<<
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL:
- */
- __pyx_t_2 = ((((size_t)__pyx_v_self->view.itemsize) > (sizeof(__pyx_v_array))) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":461
- *
- * if <size_t>self.view.itemsize > sizeof(array):
- * tmp = PyMem_Malloc(self.view.itemsize) # <<<<<<<<<<<<<<
- * if tmp == NULL:
- * raise MemoryError
- */
- __pyx_v_tmp = PyMem_Malloc(__pyx_v_self->view.itemsize);
-
- /* "View.MemoryView":462
- * if <size_t>self.view.itemsize > sizeof(array):
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL: # <<<<<<<<<<<<<<
- * raise MemoryError
- * item = tmp
- */
- __pyx_t_2 = ((__pyx_v_tmp == NULL) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":463
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL:
- * raise MemoryError # <<<<<<<<<<<<<<
- * item = tmp
- * else:
- */
- PyErr_NoMemory(); __PYX_ERR(1, 463, __pyx_L1_error)
-
- /* "View.MemoryView":462
- * if <size_t>self.view.itemsize > sizeof(array):
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL: # <<<<<<<<<<<<<<
- * raise MemoryError
- * item = tmp
- */
- }
-
- /* "View.MemoryView":464
- * if tmp == NULL:
- * raise MemoryError
- * item = tmp # <<<<<<<<<<<<<<
- * else:
- * item = <void *> array
- */
- __pyx_v_item = __pyx_v_tmp;
-
- /* "View.MemoryView":460
- * dst_slice = get_slice_from_memview(dst, &tmp_slice)
- *
- * if <size_t>self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<<
- * tmp = PyMem_Malloc(self.view.itemsize)
- * if tmp == NULL:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":466
- * item = tmp
- * else:
- * item = <void *> array # <<<<<<<<<<<<<<
- *
- * try:
- */
- /*else*/ {
- __pyx_v_item = ((void *)__pyx_v_array);
- }
- __pyx_L3:;
-
- /* "View.MemoryView":468
- * item = <void *> array
- *
- * try: # <<<<<<<<<<<<<<
- * if self.dtype_is_object:
- * (<PyObject **> item)[0] = value
- */
- /*try:*/ {
-
- /* "View.MemoryView":469
- *
- * try:
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * (<PyObject **> item)[0] = value
- * else:
- */
- __pyx_t_2 = (__pyx_v_self->dtype_is_object != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":470
- * try:
- * if self.dtype_is_object:
- * (<PyObject **> item)[0] = value # <<<<<<<<<<<<<<
- * else:
- * self.assign_item_from_object(<char *> item, value)
- */
- (((PyObject **)__pyx_v_item)[0]) = ((PyObject *)__pyx_v_value);
-
- /* "View.MemoryView":469
- *
- * try:
- * if self.dtype_is_object: # <<<<<<<<<<<<<<
- * (<PyObject **> item)[0] = value
- * else:
- */
- goto __pyx_L8;
- }
-
- /* "View.MemoryView":472
- * (<PyObject **> item)[0] = value
- * else:
- * self.assign_item_from_object(<char *> item, value) # <<<<<<<<<<<<<<
- *
- *
- */
- /*else*/ {
- __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, ((char *)__pyx_v_item), __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 472, __pyx_L6_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- }
- __pyx_L8:;
-
- /* "View.MemoryView":476
- *
- *
- * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<<
- * assert_direct_dimensions(self.view.suboffsets, self.view.ndim)
- * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize,
- */
- __pyx_t_2 = ((__pyx_v_self->view.suboffsets != NULL) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":477
- *
- * if self.view.suboffsets != NULL:
- * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) # <<<<<<<<<<<<<<
- * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize,
- * item, self.dtype_is_object)
- */
- __pyx_t_3 = assert_direct_dimensions(__pyx_v_self->view.suboffsets, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 477, __pyx_L6_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":476
- *
- *
- * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<<
- * assert_direct_dimensions(self.view.suboffsets, self.view.ndim)
- * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize,
- */
- }
-
- /* "View.MemoryView":478
- * if self.view.suboffsets != NULL:
- * assert_direct_dimensions(self.view.suboffsets, self.view.ndim)
- * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, # <<<<<<<<<<<<<<
- * item, self.dtype_is_object)
- * finally:
- */
- __pyx_memoryview_slice_assign_scalar(__pyx_v_dst_slice, __pyx_v_dst->view.ndim, __pyx_v_self->view.itemsize, __pyx_v_item, __pyx_v_self->dtype_is_object);
- }
-
- /* "View.MemoryView":481
- * item, self.dtype_is_object)
- * finally:
- * PyMem_Free(tmp) # <<<<<<<<<<<<<<
- *
- * cdef setitem_indexed(self, index, value):
- */
- /*finally:*/ {
- /*normal exit:*/{
- PyMem_Free(__pyx_v_tmp);
- goto __pyx_L7;
- }
- __pyx_L6_error:;
- /*exception exit:*/{
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0;
- __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12);
- if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0)) __Pyx_ErrFetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9);
- __Pyx_XGOTREF(__pyx_t_7);
- __Pyx_XGOTREF(__pyx_t_8);
- __Pyx_XGOTREF(__pyx_t_9);
- __Pyx_XGOTREF(__pyx_t_10);
- __Pyx_XGOTREF(__pyx_t_11);
- __Pyx_XGOTREF(__pyx_t_12);
- __pyx_t_4 = __pyx_lineno; __pyx_t_5 = __pyx_clineno; __pyx_t_6 = __pyx_filename;
- {
- PyMem_Free(__pyx_v_tmp);
- }
- if (PY_MAJOR_VERSION >= 3) {
- __Pyx_XGIVEREF(__pyx_t_10);
- __Pyx_XGIVEREF(__pyx_t_11);
- __Pyx_XGIVEREF(__pyx_t_12);
- __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12);
- }
- __Pyx_XGIVEREF(__pyx_t_7);
- __Pyx_XGIVEREF(__pyx_t_8);
- __Pyx_XGIVEREF(__pyx_t_9);
- __Pyx_ErrRestore(__pyx_t_7, __pyx_t_8, __pyx_t_9);
- __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0;
- __pyx_lineno = __pyx_t_4; __pyx_clineno = __pyx_t_5; __pyx_filename = __pyx_t_6;
- goto __pyx_L1_error;
- }
- __pyx_L7:;
- }
-
- /* "View.MemoryView":451
- * src.ndim, dst.ndim, self.dtype_is_object)
- *
- * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<<
- * cdef int array[128]
- * cdef void *tmp = NULL
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assign_scalar", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
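`setitem_slice_assign_scalar` packs the scalar once into a temporary item (a 128-int stack buffer, or heap memory from `PyMem_Malloc` when the itemsize is larger) and then lets `slice_assign_scalar` stamp that one item into every element of the destination slice. A rough byte-level analogy in Python, assuming a contiguous 1-D buffer and using `struct` in place of `assign_item_from_object` (the helper name `fill_slice` is made up for this sketch):

```python
import struct

def fill_slice(buf, start, stop, fmt, value):
    # Pack the scalar once, then copy that one packed item into each element.
    item = struct.pack(fmt, value)
    for off in range(start, stop, len(item)):
        buf[off:off + len(item)] = item

b = bytearray(6 * 4)                  # six C ints, all zero
fill_slice(b, 4, 20, "i", 7)          # fill elements 1..4 with the scalar 7
print(struct.unpack("6i", bytes(b)))  # (0, 7, 7, 7, 7, 0)
```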
-
-/* "View.MemoryView":483
- * PyMem_Free(tmp)
- *
- * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<<
- * cdef char *itemp = self.get_item_pointer(index)
- * self.assign_item_from_object(itemp, value)
- */
-
-static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) {
- char *__pyx_v_itemp;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- char *__pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("setitem_indexed", 0);
-
- /* "View.MemoryView":484
- *
- * cdef setitem_indexed(self, index, value):
- * cdef char *itemp = self.get_item_pointer(index) # <<<<<<<<<<<<<<
- * self.assign_item_from_object(itemp, value)
- *
- */
- __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_index); if (unlikely(__pyx_t_1 == ((char *)NULL))) __PYX_ERR(1, 484, __pyx_L1_error)
- __pyx_v_itemp = __pyx_t_1;
-
- /* "View.MemoryView":485
- * cdef setitem_indexed(self, index, value):
- * cdef char *itemp = self.get_item_pointer(index)
- * self.assign_item_from_object(itemp, value) # <<<<<<<<<<<<<<
- *
- * cdef convert_item_to_object(self, char *itemp):
- */
- __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 485, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "View.MemoryView":483
- * PyMem_Free(tmp)
- *
- * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<<
- * cdef char *itemp = self.get_item_pointer(index)
- * self.assign_item_from_object(itemp, value)
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_indexed", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":487
- * self.assign_item_from_object(itemp, value)
- *
- * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<<
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- */
-
-static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp) {
- PyObject *__pyx_v_struct = NULL;
- PyObject *__pyx_v_bytesitem = 0;
- PyObject *__pyx_v_result = NULL;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- PyObject *__pyx_t_6 = NULL;
- PyObject *__pyx_t_7 = NULL;
- int __pyx_t_8;
- PyObject *__pyx_t_9 = NULL;
- size_t __pyx_t_10;
- int __pyx_t_11;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("convert_item_to_object", 0);
-
- /* "View.MemoryView":490
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- * import struct # <<<<<<<<<<<<<<
- * cdef bytes bytesitem
- *
- */
- __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 490, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_v_struct = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":493
- * cdef bytes bytesitem
- *
- * bytesitem = itemp[:self.view.itemsize] # <<<<<<<<<<<<<<
- * try:
- * result = struct.unpack(self.view.format, bytesitem)
- */
- __pyx_t_1 = __Pyx_PyBytes_FromStringAndSize(__pyx_v_itemp + 0, __pyx_v_self->view.itemsize - 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_v_bytesitem = ((PyObject*)__pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":494
- *
- * bytesitem = itemp[:self.view.itemsize]
- * try: # <<<<<<<<<<<<<<
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error:
- */
- {
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- __Pyx_ExceptionSave(&__pyx_t_2, &__pyx_t_3, &__pyx_t_4);
- __Pyx_XGOTREF(__pyx_t_2);
- __Pyx_XGOTREF(__pyx_t_3);
- __Pyx_XGOTREF(__pyx_t_4);
- /*try:*/ {
-
- /* "View.MemoryView":495
- * bytesitem = itemp[:self.view.itemsize]
- * try:
- * result = struct.unpack(self.view.format, bytesitem) # <<<<<<<<<<<<<<
- * except struct.error:
- * raise ValueError("Unable to convert item to object")
- */
- __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_unpack); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 495, __pyx_L3_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_6 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 495, __pyx_L3_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_7 = NULL;
- __pyx_t_8 = 0;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) {
- __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_5);
- if (likely(__pyx_t_7)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);
- __Pyx_INCREF(__pyx_t_7);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_5, function);
- __pyx_t_8 = 1;
- }
- }
- #if CYTHON_FAST_PYCALL
- if (PyFunction_Check(__pyx_t_5)) {
- PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem};
- __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 495, __pyx_L3_error)
- __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- } else
- #endif
- #if CYTHON_FAST_PYCCALL
- if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) {
- PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem};
- __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 495, __pyx_L3_error)
- __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- } else
- #endif
- {
- __pyx_t_9 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 495, __pyx_L3_error)
- __Pyx_GOTREF(__pyx_t_9);
- if (__pyx_t_7) {
- __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_7); __pyx_t_7 = NULL;
- }
- __Pyx_GIVEREF(__pyx_t_6);
- PyTuple_SET_ITEM(__pyx_t_9, 0+__pyx_t_8, __pyx_t_6);
- __Pyx_INCREF(__pyx_v_bytesitem);
- __Pyx_GIVEREF(__pyx_v_bytesitem);
- PyTuple_SET_ITEM(__pyx_t_9, 1+__pyx_t_8, __pyx_v_bytesitem);
- __pyx_t_6 = 0;
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_9, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 495, __pyx_L3_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- }
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __pyx_v_result = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":494
- *
- * bytesitem = itemp[:self.view.itemsize]
- * try: # <<<<<<<<<<<<<<
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error:
- */
- }
-
- /* "View.MemoryView":499
- * raise ValueError("Unable to convert item to object")
- * else:
- * if len(self.view.format) == 1: # <<<<<<<<<<<<<<
- * return result[0]
- * return result
- */
- /*else:*/ {
- __pyx_t_10 = strlen(__pyx_v_self->view.format);
- __pyx_t_11 = ((__pyx_t_10 == 1) != 0);
- if (__pyx_t_11) {
-
- /* "View.MemoryView":500
- * else:
- * if len(self.view.format) == 1:
- * return result[0] # <<<<<<<<<<<<<<
- * return result
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_result, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 500, __pyx_L5_except_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L6_except_return;
-
- /* "View.MemoryView":499
- * raise ValueError("Unable to convert item to object")
- * else:
- * if len(self.view.format) == 1: # <<<<<<<<<<<<<<
- * return result[0]
- * return result
- */
- }
-
- /* "View.MemoryView":501
- * if len(self.view.format) == 1:
- * return result[0]
- * return result # <<<<<<<<<<<<<<
- *
- * cdef assign_item_from_object(self, char *itemp, object value):
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_result);
- __pyx_r = __pyx_v_result;
- goto __pyx_L6_except_return;
- }
- __pyx_L3_error:;
- __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
-
- /* "View.MemoryView":496
- * try:
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error: # <<<<<<<<<<<<<<
- * raise ValueError("Unable to convert item to object")
- * else:
- */
- __Pyx_ErrFetch(&__pyx_t_1, &__pyx_t_5, &__pyx_t_9);
- __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_error); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 496, __pyx_L5_except_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_8 = __Pyx_PyErr_GivenExceptionMatches(__pyx_t_1, __pyx_t_6);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_ErrRestore(__pyx_t_1, __pyx_t_5, __pyx_t_9);
- __pyx_t_1 = 0; __pyx_t_5 = 0; __pyx_t_9 = 0;
- if (__pyx_t_8) {
- __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- if (__Pyx_GetException(&__pyx_t_9, &__pyx_t_5, &__pyx_t_1) < 0) __PYX_ERR(1, 496, __pyx_L5_except_error)
- __Pyx_GOTREF(__pyx_t_9);
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "View.MemoryView":497
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error:
- * raise ValueError("Unable to convert item to object") # <<<<<<<<<<<<<<
- * else:
- * if len(self.view.format) == 1:
- */
- __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 497, __pyx_L5_except_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_Raise(__pyx_t_6, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- __PYX_ERR(1, 497, __pyx_L5_except_error)
- }
- goto __pyx_L5_except_error;
- __pyx_L5_except_error:;
-
- /* "View.MemoryView":494
- *
- * bytesitem = itemp[:self.view.itemsize]
- * try: # <<<<<<<<<<<<<<
- * result = struct.unpack(self.view.format, bytesitem)
- * except struct.error:
- */
- __Pyx_XGIVEREF(__pyx_t_2);
- __Pyx_XGIVEREF(__pyx_t_3);
- __Pyx_XGIVEREF(__pyx_t_4);
- __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4);
- goto __pyx_L1_error;
- __pyx_L6_except_return:;
- __Pyx_XGIVEREF(__pyx_t_2);
- __Pyx_XGIVEREF(__pyx_t_3);
- __Pyx_XGIVEREF(__pyx_t_4);
- __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4);
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":487
- * self.assign_item_from_object(itemp, value)
- *
- * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<<
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_XDECREF(__pyx_t_7);
- __Pyx_XDECREF(__pyx_t_9);
- __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_struct);
- __Pyx_XDECREF(__pyx_v_bytesitem);
- __Pyx_XDECREF(__pyx_v_result);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
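`convert_item_to_object` is the slow generic path: it copies one item's bytes out of the buffer and decodes them with `struct.unpack`, returning the bare value when the format string is a single code and the whole tuple otherwise. The same decoding step in plain Python:

```python
import struct

raw = struct.pack("i", 42)         # one C int item as it sits in the buffer
print(struct.unpack("i", raw))     # 'i' is a single code -> (42,)
print(struct.unpack("i", raw)[0])  # so the method returns result[0]: 42

pair = struct.pack("2h", 1, 2)     # multi-code formats keep the whole tuple
print(struct.unpack("2h", pair))   # (1, 2)
```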
-
-/* "View.MemoryView":503
- * return result
- *
- * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<<
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- */
-
-static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) {
- PyObject *__pyx_v_struct = NULL;
- char __pyx_v_c;
- PyObject *__pyx_v_bytesvalue = 0;
- Py_ssize_t __pyx_v_i;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_t_2;
- int __pyx_t_3;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- PyObject *__pyx_t_6 = NULL;
- int __pyx_t_7;
- PyObject *__pyx_t_8 = NULL;
- Py_ssize_t __pyx_t_9;
- PyObject *__pyx_t_10 = NULL;
- char *__pyx_t_11;
- char *__pyx_t_12;
- char *__pyx_t_13;
- char *__pyx_t_14;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("assign_item_from_object", 0);
-
- /* "View.MemoryView":506
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- * import struct # <<<<<<<<<<<<<<
- * cdef char c
- * cdef bytes bytesvalue
- */
- __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 506, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_v_struct = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":511
- * cdef Py_ssize_t i
- *
- * if isinstance(value, tuple): # <<<<<<<<<<<<<<
- * bytesvalue = struct.pack(self.view.format, *value)
- * else:
- */
- __pyx_t_2 = PyTuple_Check(__pyx_v_value);
- __pyx_t_3 = (__pyx_t_2 != 0);
- if (__pyx_t_3) {
-
- /* "View.MemoryView":512
- *
- * if isinstance(value, tuple):
- * bytesvalue = struct.pack(self.view.format, *value) # <<<<<<<<<<<<<<
- * else:
- * bytesvalue = struct.pack(self.view.format, value)
- */
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_4 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4);
- __pyx_t_4 = 0;
- __pyx_t_4 = __Pyx_PySequence_Tuple(__pyx_v_value); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_6 = PyNumber_Add(__pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_6, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||((void)PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 512, __pyx_L1_error)
- __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4);
- __pyx_t_4 = 0;
-
- /* "View.MemoryView":511
- * cdef Py_ssize_t i
- *
- * if isinstance(value, tuple): # <<<<<<<<<<<<<<
- * bytesvalue = struct.pack(self.view.format, *value)
- * else:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":514
- * bytesvalue = struct.pack(self.view.format, *value)
- * else:
- * bytesvalue = struct.pack(self.view.format, value) # <<<<<<<<<<<<<<
- *
- * for i, c in enumerate(bytesvalue):
- */
- /*else*/ {
- __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 514, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 514, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_5 = NULL;
- __pyx_t_7 = 0;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) {
- __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_6);
- if (likely(__pyx_t_5)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6);
- __Pyx_INCREF(__pyx_t_5);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_6, function);
- __pyx_t_7 = 1;
- }
- }
- #if CYTHON_FAST_PYCALL
- if (PyFunction_Check(__pyx_t_6)) {
- PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value};
- __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 514, __pyx_L1_error)
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- } else
- #endif
- #if CYTHON_FAST_PYCCALL
- if (__Pyx_PyFastCFunction_Check(__pyx_t_6)) {
- PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value};
- __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 514, __pyx_L1_error)
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- } else
- #endif
- {
- __pyx_t_8 = PyTuple_New(2+__pyx_t_7); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 514, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_8);
- if (__pyx_t_5) {
- __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_5); __pyx_t_5 = NULL;
- }
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_8, 0+__pyx_t_7, __pyx_t_1);
- __Pyx_INCREF(__pyx_v_value);
- __Pyx_GIVEREF(__pyx_v_value);
- PyTuple_SET_ITEM(__pyx_t_8, 1+__pyx_t_7, __pyx_v_value);
- __pyx_t_1 = 0;
- __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_8, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 514, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
- }
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||((void)PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 514, __pyx_L1_error)
- __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4);
- __pyx_t_4 = 0;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":516
- * bytesvalue = struct.pack(self.view.format, value)
- *
- * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<<
- * itemp[i] = c
- *
- */
- __pyx_t_9 = 0;
- if (unlikely(__pyx_v_bytesvalue == Py_None)) {
- PyErr_SetString(PyExc_TypeError, "'NoneType' is not iterable");
- __PYX_ERR(1, 516, __pyx_L1_error)
- }
- __Pyx_INCREF(__pyx_v_bytesvalue);
- __pyx_t_10 = __pyx_v_bytesvalue;
- __pyx_t_12 = PyBytes_AS_STRING(__pyx_t_10);
- __pyx_t_13 = (__pyx_t_12 + PyBytes_GET_SIZE(__pyx_t_10));
- for (__pyx_t_14 = __pyx_t_12; __pyx_t_14 < __pyx_t_13; __pyx_t_14++) {
- __pyx_t_11 = __pyx_t_14;
- __pyx_v_c = (__pyx_t_11[0]);
-
- /* "View.MemoryView":517
- *
- * for i, c in enumerate(bytesvalue):
- * itemp[i] = c # <<<<<<<<<<<<<<
- *
- * @cname('getbuffer')
- */
- __pyx_v_i = __pyx_t_9;
-
- /* "View.MemoryView":516
- * bytesvalue = struct.pack(self.view.format, value)
- *
- * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<<
- * itemp[i] = c
- *
- */
- __pyx_t_9 = (__pyx_t_9 + 1);
-
- /* "View.MemoryView":517
- *
- * for i, c in enumerate(bytesvalue):
- * itemp[i] = c # <<<<<<<<<<<<<<
- *
- * @cname('getbuffer')
- */
- (__pyx_v_itemp[__pyx_v_i]) = __pyx_v_c;
- }
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
-
- /* "View.MemoryView":503
- * return result
- *
- * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<<
- * """Only used if instantiated manually by the user, or if Cython doesn't
- * know how to convert the type"""
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_XDECREF(__pyx_t_8);
- __Pyx_XDECREF(__pyx_t_10);
- __Pyx_AddTraceback("View.MemoryView.memoryview.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_struct);
- __Pyx_XDECREF(__pyx_v_bytesvalue);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
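-/* Usage sketch (illustrative only, not part of the generated module): the function
- * above packs the Python value with struct.pack(self.view.format, value) and copies
- * the resulting bytes into the item slot one byte at a time. The format string "i"
- * below is a hypothetical example standing in for self.view.format:
- *
- *     import struct
- *     fmt = "i"                       # e.g. the view's format for a C int buffer
- *     packed = struct.pack(fmt, 7)    # the bytes that the loop writes into itemp
- *     assert len(packed) == struct.calcsize(fmt)
- */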
-/* "View.MemoryView":520
- *
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<<
- * if flags & PyBUF_WRITABLE and self.view.readonly:
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- */
-
-/* Python wrapper */
-static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
-static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- Py_ssize_t *__pyx_t_4;
- char *__pyx_t_5;
- void *__pyx_t_6;
- int __pyx_t_7;
- Py_ssize_t __pyx_t_8;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- if (__pyx_v_info == NULL) {
- PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete");
- return -1;
- }
- __Pyx_RefNannySetupContext("__getbuffer__", 0);
- __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None);
- __Pyx_GIVEREF(__pyx_v_info->obj);
-
- /* "View.MemoryView":521
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<<
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- *
- */
- __pyx_t_2 = ((__pyx_v_flags & PyBUF_WRITABLE) != 0);
- if (__pyx_t_2) {
- } else {
- __pyx_t_1 = __pyx_t_2;
- goto __pyx_L4_bool_binop_done;
- }
- __pyx_t_2 = (__pyx_v_self->view.readonly != 0);
- __pyx_t_1 = __pyx_t_2;
- __pyx_L4_bool_binop_done:;
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":522
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * if flags & PyBUF_WRITABLE and self.view.readonly:
- * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_ND:
- */
- __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__11, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 522, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 522, __pyx_L1_error)
-
- /* "View.MemoryView":521
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags):
- * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<<
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- *
- */
- }
-
- /* "View.MemoryView":524
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- *
- * if flags & PyBUF_ND: # <<<<<<<<<<<<<<
- * info.shape = self.view.shape
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_ND) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":525
- *
- * if flags & PyBUF_ND:
- * info.shape = self.view.shape # <<<<<<<<<<<<<<
- * else:
- * info.shape = NULL
- */
- __pyx_t_4 = __pyx_v_self->view.shape;
- __pyx_v_info->shape = __pyx_t_4;
-
- /* "View.MemoryView":524
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- *
- * if flags & PyBUF_ND: # <<<<<<<<<<<<<<
- * info.shape = self.view.shape
- * else:
- */
- goto __pyx_L6;
- }
-
- /* "View.MemoryView":527
- * info.shape = self.view.shape
- * else:
- * info.shape = NULL # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_STRIDES:
- */
- /*else*/ {
- __pyx_v_info->shape = NULL;
- }
- __pyx_L6:;
-
- /* "View.MemoryView":529
- * info.shape = NULL
- *
- * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<<
- * info.strides = self.view.strides
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_STRIDES) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":530
- *
- * if flags & PyBUF_STRIDES:
- * info.strides = self.view.strides # <<<<<<<<<<<<<<
- * else:
- * info.strides = NULL
- */
- __pyx_t_4 = __pyx_v_self->view.strides;
- __pyx_v_info->strides = __pyx_t_4;
-
- /* "View.MemoryView":529
- * info.shape = NULL
- *
- * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<<
- * info.strides = self.view.strides
- * else:
- */
- goto __pyx_L7;
- }
-
- /* "View.MemoryView":532
- * info.strides = self.view.strides
- * else:
- * info.strides = NULL # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_INDIRECT:
- */
- /*else*/ {
- __pyx_v_info->strides = NULL;
- }
- __pyx_L7:;
-
- /* "View.MemoryView":534
- * info.strides = NULL
- *
- * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<<
- * info.suboffsets = self.view.suboffsets
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_INDIRECT) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":535
- *
- * if flags & PyBUF_INDIRECT:
- * info.suboffsets = self.view.suboffsets # <<<<<<<<<<<<<<
- * else:
- * info.suboffsets = NULL
- */
- __pyx_t_4 = __pyx_v_self->view.suboffsets;
- __pyx_v_info->suboffsets = __pyx_t_4;
-
- /* "View.MemoryView":534
- * info.strides = NULL
- *
- * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<<
- * info.suboffsets = self.view.suboffsets
- * else:
- */
- goto __pyx_L8;
- }
-
- /* "View.MemoryView":537
- * info.suboffsets = self.view.suboffsets
- * else:
- * info.suboffsets = NULL # <<<<<<<<<<<<<<
- *
- * if flags & PyBUF_FORMAT:
- */
- /*else*/ {
- __pyx_v_info->suboffsets = NULL;
- }
- __pyx_L8:;
-
- /* "View.MemoryView":539
- * info.suboffsets = NULL
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * info.format = self.view.format
- * else:
- */
- __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":540
- *
- * if flags & PyBUF_FORMAT:
- * info.format = self.view.format # <<<<<<<<<<<<<<
- * else:
- * info.format = NULL
- */
- __pyx_t_5 = __pyx_v_self->view.format;
- __pyx_v_info->format = __pyx_t_5;
-
- /* "View.MemoryView":539
- * info.suboffsets = NULL
- *
- * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<<
- * info.format = self.view.format
- * else:
- */
- goto __pyx_L9;
- }
-
- /* "View.MemoryView":542
- * info.format = self.view.format
- * else:
- * info.format = NULL # <<<<<<<<<<<<<<
- *
- * info.buf = self.view.buf
- */
- /*else*/ {
- __pyx_v_info->format = NULL;
- }
- __pyx_L9:;
-
- /* "View.MemoryView":544
- * info.format = NULL
- *
- * info.buf = self.view.buf # <<<<<<<<<<<<<<
- * info.ndim = self.view.ndim
- * info.itemsize = self.view.itemsize
- */
- __pyx_t_6 = __pyx_v_self->view.buf;
- __pyx_v_info->buf = __pyx_t_6;
-
- /* "View.MemoryView":545
- *
- * info.buf = self.view.buf
- * info.ndim = self.view.ndim # <<<<<<<<<<<<<<
- * info.itemsize = self.view.itemsize
- * info.len = self.view.len
- */
- __pyx_t_7 = __pyx_v_self->view.ndim;
- __pyx_v_info->ndim = __pyx_t_7;
-
- /* "View.MemoryView":546
- * info.buf = self.view.buf
- * info.ndim = self.view.ndim
- * info.itemsize = self.view.itemsize # <<<<<<<<<<<<<<
- * info.len = self.view.len
- * info.readonly = self.view.readonly
- */
- __pyx_t_8 = __pyx_v_self->view.itemsize;
- __pyx_v_info->itemsize = __pyx_t_8;
-
- /* "View.MemoryView":547
- * info.ndim = self.view.ndim
- * info.itemsize = self.view.itemsize
- * info.len = self.view.len # <<<<<<<<<<<<<<
- * info.readonly = self.view.readonly
- * info.obj = self
- */
- __pyx_t_8 = __pyx_v_self->view.len;
- __pyx_v_info->len = __pyx_t_8;
-
- /* "View.MemoryView":548
- * info.itemsize = self.view.itemsize
- * info.len = self.view.len
- * info.readonly = self.view.readonly # <<<<<<<<<<<<<<
- * info.obj = self
- *
- */
- __pyx_t_1 = __pyx_v_self->view.readonly;
- __pyx_v_info->readonly = __pyx_t_1;
-
- /* "View.MemoryView":549
- * info.len = self.view.len
- * info.readonly = self.view.readonly
- * info.obj = self # <<<<<<<<<<<<<<
- *
- * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)")
- */
- __Pyx_INCREF(((PyObject *)__pyx_v_self));
- __Pyx_GIVEREF(((PyObject *)__pyx_v_self));
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj);
- __pyx_v_info->obj = ((PyObject *)__pyx_v_self);
-
- /* "View.MemoryView":520
- *
- * @cname('getbuffer')
- * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<<
- * if flags & PyBUF_WRITABLE and self.view.readonly:
- * raise ValueError("Cannot create writable memory view from read-only memoryview")
- */
-
- /* function exit code */
- __pyx_r = 0;
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = -1;
- if (__pyx_v_info->obj != NULL) {
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;
- }
- goto __pyx_L2;
- __pyx_L0:;
- if (__pyx_v_info->obj == Py_None) {
- __Pyx_GOTREF(__pyx_v_info->obj);
- __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;
- }
- __pyx_L2:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
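-/* Usage sketch (illustrative only): __getbuffer__ above fills the Py_buffer only with
- * the fields the flags request and refuses PyBUF_WRITABLE on a read-only view. The
- * builtin memoryview enforces the same read-only rule at the Python level:
- *
- *     ro = memoryview(b"abc")        # bytes is a read-only exporter
- *     try:
- *         ro[0] = 65                 # writing through a read-only view fails
- *     except TypeError:
- *         pass
- */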
-/* "View.MemoryView":555
- *
- * @property
- * def T(self): # <<<<<<<<<<<<<<
- * cdef _memoryviewslice result = memoryview_copy(self)
- * transpose_memslice(&result.from_slice)
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- struct __pyx_memoryviewslice_obj *__pyx_v_result = 0;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_t_2;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":556
- * @property
- * def T(self):
- * cdef _memoryviewslice result = memoryview_copy(self) # <<<<<<<<<<<<<<
- * transpose_memslice(&result.from_slice)
- * return result
- */
- __pyx_t_1 = __pyx_memoryview_copy_object(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 556, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_memoryviewslice_type))))) __PYX_ERR(1, 556, __pyx_L1_error)
- __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "View.MemoryView":557
- * def T(self):
- * cdef _memoryviewslice result = memoryview_copy(self)
- * transpose_memslice(&result.from_slice) # <<<<<<<<<<<<<<
- * return result
- *
- */
- __pyx_t_2 = __pyx_memslice_transpose((&__pyx_v_result->from_slice)); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 557, __pyx_L1_error)
-
- /* "View.MemoryView":558
- * cdef _memoryviewslice result = memoryview_copy(self)
- * transpose_memslice(&result.from_slice)
- * return result # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(((PyObject *)__pyx_v_result));
- __pyx_r = ((PyObject *)__pyx_v_result);
- goto __pyx_L0;
-
- /* "View.MemoryView":555
- *
- * @property
- * def T(self): # <<<<<<<<<<<<<<
- * cdef _memoryviewslice result = memoryview_copy(self)
- * transpose_memslice(&result.from_slice)
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.T.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_result);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":561
- *
- * @property
- * def base(self): # <<<<<<<<<<<<<<
- * return self.obj
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":562
- * @property
- * def base(self):
- * return self.obj # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_self->obj);
- __pyx_r = __pyx_v_self->obj;
- goto __pyx_L0;
-
- /* "View.MemoryView":561
- *
- * @property
- * def base(self): # <<<<<<<<<<<<<<
- * return self.obj
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":565
- *
- * @property
- * def shape(self): # <<<<<<<<<<<<<<
- * return tuple([length for length in self.view.shape[:self.view.ndim]])
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- Py_ssize_t __pyx_v_length;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- Py_ssize_t *__pyx_t_2;
- Py_ssize_t *__pyx_t_3;
- Py_ssize_t *__pyx_t_4;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":566
- * @property
- * def shape(self):
- * return tuple([length for length in self.view.shape[:self.view.ndim]]) # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 566, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_3 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim);
- for (__pyx_t_4 = __pyx_v_self->view.shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) {
- __pyx_t_2 = __pyx_t_4;
- __pyx_v_length = (__pyx_t_2[0]);
- __pyx_t_5 = PyInt_FromSsize_t(__pyx_v_length); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 566, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(1, 566, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- }
- __pyx_t_5 = PyList_AsTuple(((PyObject*)__pyx_t_1)); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 566, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_r = __pyx_t_5;
- __pyx_t_5 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":565
- *
- * @property
- * def shape(self): # <<<<<<<<<<<<<<
- * return tuple([length for length in self.view.shape[:self.view.ndim]])
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.memoryview.shape.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
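-/* Usage sketch (illustrative only): like the builtin memoryview, the shape property
- * above returns one length per dimension as a Python tuple:
- *
- *     mv = memoryview(bytes(48)).cast('i', (3, 4))   # 2-D view of 12 C ints
- *     print(mv.shape)                                # (3, 4)
- */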
-/* "View.MemoryView":569
- *
- * @property
- * def strides(self): # <<<<<<<<<<<<<<
- * if self.view.strides == NULL:
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- Py_ssize_t __pyx_v_stride;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- Py_ssize_t *__pyx_t_3;
- Py_ssize_t *__pyx_t_4;
- Py_ssize_t *__pyx_t_5;
- PyObject *__pyx_t_6 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":570
- * @property
- * def strides(self):
- * if self.view.strides == NULL: # <<<<<<<<<<<<<<
- *
- * raise ValueError("Buffer view does not expose strides")
- */
- __pyx_t_1 = ((__pyx_v_self->view.strides == NULL) != 0);
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":572
- * if self.view.strides == NULL:
- *
- * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<<
- *
- * return tuple([stride for stride in self.view.strides[:self.view.ndim]])
- */
- __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 572, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_Raise(__pyx_t_2, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __PYX_ERR(1, 572, __pyx_L1_error)
-
- /* "View.MemoryView":570
- * @property
- * def strides(self):
- * if self.view.strides == NULL: # <<<<<<<<<<<<<<
- *
- * raise ValueError("Buffer view does not expose strides")
- */
- }
-
- /* "View.MemoryView":574
- * raise ValueError("Buffer view does not expose strides")
- *
- * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 574, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_4 = (__pyx_v_self->view.strides + __pyx_v_self->view.ndim);
- for (__pyx_t_5 = __pyx_v_self->view.strides; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) {
- __pyx_t_3 = __pyx_t_5;
- __pyx_v_stride = (__pyx_t_3[0]);
- __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_stride); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 574, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(1, 574, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- }
- __pyx_t_6 = PyList_AsTuple(((PyObject*)__pyx_t_2)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 574, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_r = __pyx_t_6;
- __pyx_t_6 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":569
- *
- * @property
- * def strides(self): # <<<<<<<<<<<<<<
- * if self.view.strides == NULL:
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_AddTraceback("View.MemoryView.memoryview.strides.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
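-/* Usage sketch (illustrative only): strides are the byte steps between consecutive
- * elements along each dimension; a C-contiguous (3, 4) view of 4-byte ints steps
- * 16 bytes per row and 4 bytes per column:
- *
- *     mv = memoryview(bytes(48)).cast('i', (3, 4))
- *     print(mv.strides)                              # (16, 4)
- */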
-/* "View.MemoryView":577
- *
- * @property
- * def suboffsets(self): # <<<<<<<<<<<<<<
- * if self.view.suboffsets == NULL:
- * return (-1,) * self.view.ndim
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- Py_ssize_t __pyx_v_suboffset;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- Py_ssize_t *__pyx_t_4;
- Py_ssize_t *__pyx_t_5;
- Py_ssize_t *__pyx_t_6;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":578
- * @property
- * def suboffsets(self):
- * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<<
- * return (-1,) * self.view.ndim
- *
- */
- __pyx_t_1 = ((__pyx_v_self->view.suboffsets == NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":579
- * def suboffsets(self):
- * if self.view.suboffsets == NULL:
- * return (-1,) * self.view.ndim # <<<<<<<<<<<<<<
- *
- * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]])
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyNumber_Multiply(__pyx_tuple__13, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 579, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_r = __pyx_t_3;
- __pyx_t_3 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":578
- * @property
- * def suboffsets(self):
- * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<<
- * return (-1,) * self.view.ndim
- *
- */
- }
-
- /* "View.MemoryView":581
- * return (-1,) * self.view.ndim
- *
- * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 581, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_5 = (__pyx_v_self->view.suboffsets + __pyx_v_self->view.ndim);
- for (__pyx_t_6 = __pyx_v_self->view.suboffsets; __pyx_t_6 < __pyx_t_5; __pyx_t_6++) {
- __pyx_t_4 = __pyx_t_6;
- __pyx_v_suboffset = (__pyx_t_4[0]);
- __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_suboffset); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 581, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- if (unlikely(__Pyx_ListComp_Append(__pyx_t_3, (PyObject*)__pyx_t_2))) __PYX_ERR(1, 581, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- }
- __pyx_t_2 = PyList_AsTuple(((PyObject*)__pyx_t_3)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 581, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":577
- *
- * @property
- * def suboffsets(self): # <<<<<<<<<<<<<<
- * if self.view.suboffsets == NULL:
- * return (-1,) * self.view.ndim
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.suboffsets.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
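-/* Usage sketch (illustrative only): suboffsets only matter for indirect (PIL-style)
- * buffers. For ordinary direct buffers the property above returns -1 per dimension,
- * whereas the builtin memoryview reports an empty tuple:
- *
- *     mv = memoryview(bytes(48)).cast('i', (3, 4))
- *     print(mv.suboffsets)                           # ()
- */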
-/* "View.MemoryView":584
- *
- * @property
- * def ndim(self): # <<<<<<<<<<<<<<
- * return self.view.ndim
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":585
- * @property
- * def ndim(self):
- * return self.view.ndim # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 585, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":584
- *
- * @property
- * def ndim(self): # <<<<<<<<<<<<<<
- * return self.view.ndim
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.ndim.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":588
- *
- * @property
- * def itemsize(self): # <<<<<<<<<<<<<<
- * return self.view.itemsize
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":589
- * @property
- * def itemsize(self):
- * return self.view.itemsize # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 589, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":588
- *
- * @property
- * def itemsize(self): # <<<<<<<<<<<<<<
- * return self.view.itemsize
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.itemsize.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":592
- *
- * @property
- * def nbytes(self): # <<<<<<<<<<<<<<
- * return self.size * self.view.itemsize
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":593
- * @property
- * def nbytes(self):
- * return self.size * self.view.itemsize # <<<<<<<<<<<<<<
- *
- * @property
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 593, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 593, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 593, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_r = __pyx_t_3;
- __pyx_t_3 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":592
- *
- * @property
- * def nbytes(self): # <<<<<<<<<<<<<<
- * return self.size * self.view.itemsize
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.nbytes.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
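-/* Usage sketch (illustrative only): nbytes is size * itemsize, the total number of
- * bytes covered by the view; the builtin memoryview exposes the same quantities:
- *
- *     mv = memoryview(bytes(48)).cast('i', (3, 4))
- *     print(mv.itemsize, mv.nbytes)                  # 4 48
- */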
-/* "View.MemoryView":596
- *
- * @property
- * def size(self): # <<<<<<<<<<<<<<
- * if self._size is None:
- * result = 1
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_v_result = NULL;
- PyObject *__pyx_v_length = NULL;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- Py_ssize_t *__pyx_t_3;
- Py_ssize_t *__pyx_t_4;
- Py_ssize_t *__pyx_t_5;
- PyObject *__pyx_t_6 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":597
- * @property
- * def size(self):
- * if self._size is None: # <<<<<<<<<<<<<<
- * result = 1
- *
- */
- __pyx_t_1 = (__pyx_v_self->_size == Py_None);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":598
- * def size(self):
- * if self._size is None:
- * result = 1 # <<<<<<<<<<<<<<
- *
- * for length in self.view.shape[:self.view.ndim]:
- */
- __Pyx_INCREF(__pyx_int_1);
- __pyx_v_result = __pyx_int_1;
-
- /* "View.MemoryView":600
- * result = 1
- *
- * for length in self.view.shape[:self.view.ndim]: # <<<<<<<<<<<<<<
- * result *= length
- *
- */
- __pyx_t_4 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim);
- for (__pyx_t_5 = __pyx_v_self->view.shape; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) {
- __pyx_t_3 = __pyx_t_5;
- __pyx_t_6 = PyInt_FromSsize_t((__pyx_t_3[0])); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 600, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_6);
- __pyx_t_6 = 0;
-
- /* "View.MemoryView":601
- *
- * for length in self.view.shape[:self.view.ndim]:
- * result *= length # <<<<<<<<<<<<<<
- *
- * self._size = result
- */
- __pyx_t_6 = PyNumber_InPlaceMultiply(__pyx_v_result, __pyx_v_length); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 601, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_DECREF_SET(__pyx_v_result, __pyx_t_6);
- __pyx_t_6 = 0;
- }
-
- /* "View.MemoryView":603
- * result *= length
- *
- * self._size = result # <<<<<<<<<<<<<<
- *
- * return self._size
- */
- __Pyx_INCREF(__pyx_v_result);
- __Pyx_GIVEREF(__pyx_v_result);
- __Pyx_GOTREF(__pyx_v_self->_size);
- __Pyx_DECREF(__pyx_v_self->_size);
- __pyx_v_self->_size = __pyx_v_result;
-
- /* "View.MemoryView":597
- * @property
- * def size(self):
- * if self._size is None: # <<<<<<<<<<<<<<
- * result = 1
- *
- */
- }
-
- /* "View.MemoryView":605
- * self._size = result
- *
- * return self._size # <<<<<<<<<<<<<<
- *
- * def __len__(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_self->_size);
- __pyx_r = __pyx_v_self->_size;
- goto __pyx_L0;
-
- /* "View.MemoryView":596
- *
- * @property
- * def size(self): # <<<<<<<<<<<<<<
- * if self._size is None:
- * result = 1
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_AddTraceback("View.MemoryView.memoryview.size.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_result);
- __Pyx_XDECREF(__pyx_v_length);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
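-/* Usage sketch (illustrative only): size is the product of the shape entries (the
- * element count), computed once and cached in self._size. The builtin memoryview has
- * no size attribute, so the sketch computes it by hand:
- *
- *     import math
- *     mv = memoryview(bytes(48)).cast('i', (3, 4))
- *     assert math.prod(mv.shape) == 12
- */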
-/* "View.MemoryView":607
- * return self._size
- *
- * def __len__(self): # <<<<<<<<<<<<<<
- * if self.view.ndim >= 1:
- * return self.view.shape[0]
- */
-
-/* Python wrapper */
-static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self); /*proto*/
-static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self) {
- Py_ssize_t __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__len__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self) {
- Py_ssize_t __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- __Pyx_RefNannySetupContext("__len__", 0);
-
- /* "View.MemoryView":608
- *
- * def __len__(self):
- * if self.view.ndim >= 1: # <<<<<<<<<<<<<<
- * return self.view.shape[0]
- *
- */
- __pyx_t_1 = ((__pyx_v_self->view.ndim >= 1) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":609
- * def __len__(self):
- * if self.view.ndim >= 1:
- * return self.view.shape[0] # <<<<<<<<<<<<<<
- *
- * return 0
- */
- __pyx_r = (__pyx_v_self->view.shape[0]);
- goto __pyx_L0;
-
- /* "View.MemoryView":608
- *
- * def __len__(self):
- * if self.view.ndim >= 1: # <<<<<<<<<<<<<<
- * return self.view.shape[0]
- *
- */
- }
-
- /* "View.MemoryView":611
- * return self.view.shape[0]
- *
- * return 0 # <<<<<<<<<<<<<<
- *
- * def __repr__(self):
- */
- __pyx_r = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":607
- * return self._size
- *
- * def __len__(self): # <<<<<<<<<<<<<<
- * if self.view.ndim >= 1:
- * return self.view.shape[0]
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
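-/* Usage sketch (illustrative only): __len__ returns shape[0] when the view has at
- * least one dimension and 0 otherwise; the builtin memoryview agrees on the first
- * case:
- *
- *     mv = memoryview(bytes(48)).cast('i', (3, 4))
- *     print(len(mv))                                 # 3, i.e. shape[0]
- */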
-/* "View.MemoryView":613
- * return 0
- *
- * def __repr__(self): # <<<<<<<<<<<<<<
- * return "" % (self.base.__class__.__name__,
- * id(self))
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__repr__", 0);
-
- /* "View.MemoryView":614
- *
- * def __repr__(self):
- * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<<
- * id(self))
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 614, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 614, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 614, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "View.MemoryView":615
- * def __repr__(self):
- * return "" % (self.base.__class__.__name__,
- * id(self)) # <<<<<<<<<<<<<<
- *
- * def __str__(self):
- */
- __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_id, ((PyObject *)__pyx_v_self)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 615, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
-
- /* "View.MemoryView":614
- *
- * def __repr__(self):
- * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<<
- * id(self))
- *
- */
- __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 614, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2);
- __pyx_t_1 = 0;
- __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 614, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":613
- * return 0
- *
- * def __repr__(self): # <<<<<<<<<<<<<<
- * return "" % (self.base.__class__.__name__,
- * id(self))
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__repr__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":617
- * id(self))
- *
- * def __str__(self): # <<<<<<<<<<<<<<
- * return "" % (self.base.__class__.__name__,)
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__str__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__str__", 0);
-
- /* "View.MemoryView":618
- *
- * def __str__(self):
- * return "" % (self.base.__class__.__name__,) # <<<<<<<<<<<<<<
- *
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 618, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 618, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 618, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 618, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1);
- __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_object, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 618, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":617
- * id(self))
- *
- * def __str__(self): # <<<<<<<<<<<<<<
- * return "" % (self.base.__class__.__name__,)
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__str__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
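-/* Usage sketch (illustrative only): __repr__ above formats the class name of the
- * wrapped object and id(self) into "<MemoryView of %r at 0x%x>", while __str__ uses
- * "<MemoryView of %r object>". The builtin type prints a different form:
- *
- *     mv = memoryview(bytes(48))
- *     print(repr(mv))                                # <memory at 0x...>
- */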
-/* "View.MemoryView":621
- *
- *
- * def is_c_contig(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("is_c_contig (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self) {
- __Pyx_memviewslice *__pyx_v_mslice;
- __Pyx_memviewslice __pyx_v_tmp;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice *__pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("is_c_contig", 0);
-
- /* "View.MemoryView":624
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<<
- * return slice_is_contig(mslice[0], 'C', self.view.ndim)
- *
- */
- __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 624, __pyx_L1_error)
- __pyx_v_mslice = __pyx_t_1;
-
- /* "View.MemoryView":625
- * cdef __Pyx_memviewslice tmp
- * mslice = get_slice_from_memview(self, &tmp)
- * return slice_is_contig(mslice[0], 'C', self.view.ndim) # <<<<<<<<<<<<<<
- *
- * def is_f_contig(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'C', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 625, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":621
- *
- *
- * def is_c_contig(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.is_c_contig", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":627
- * return slice_is_contig(mslice[0], 'C', self.view.ndim)
- *
- * def is_f_contig(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("is_f_contig (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self) {
- __Pyx_memviewslice *__pyx_v_mslice;
- __Pyx_memviewslice __pyx_v_tmp;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice *__pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("is_f_contig", 0);
-
- /* "View.MemoryView":630
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<<
- * return slice_is_contig(mslice[0], 'F', self.view.ndim)
- *
- */
- __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 630, __pyx_L1_error)
- __pyx_v_mslice = __pyx_t_1;
-
- /* "View.MemoryView":631
- * cdef __Pyx_memviewslice tmp
- * mslice = get_slice_from_memview(self, &tmp)
- * return slice_is_contig(mslice[0], 'F', self.view.ndim) # <<<<<<<<<<<<<<
- *
- * def copy(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'F', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 631, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":627
- * return slice_is_contig(mslice[0], 'C', self.view.ndim)
- *
- * def is_f_contig(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.is_f_contig", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
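-/* Usage sketch (illustrative only): the builtin memoryview exposes the same
- * contiguity checks as attributes rather than methods:
- *
- *     mv = memoryview(bytes(48)).cast('i', (3, 4))   # freshly cast views are C-order
- *     print(mv.c_contiguous, mv.f_contiguous)        # True False
- */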
-/* "View.MemoryView":633
- * return slice_is_contig(mslice[0], 'F', self.view.ndim)
- *
- * def copy(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice mslice
- * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("copy (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self) {
- __Pyx_memviewslice __pyx_v_mslice;
- int __pyx_v_flags;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("copy", 0);
-
- /* "View.MemoryView":635
- * def copy(self):
- * cdef __Pyx_memviewslice mslice
- * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS # <<<<<<<<<<<<<<
- *
- * slice_copy(self, &mslice)
- */
- __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_F_CONTIGUOUS));
-
- /* "View.MemoryView":637
- * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS
- *
- * slice_copy(self, &mslice) # <<<<<<<<<<<<<<
- * mslice = slice_copy_contig(&mslice, "c", self.view.ndim,
- * self.view.itemsize,
- */
- __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_mslice));
-
- /* "View.MemoryView":638
- *
- * slice_copy(self, &mslice)
- * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, # <<<<<<<<<<<<<<
- * self.view.itemsize,
- * flags|PyBUF_C_CONTIGUOUS,
- */
- __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_mslice), ((char *)"c"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_C_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 638, __pyx_L1_error)
- __pyx_v_mslice = __pyx_t_1;
-
- /* "View.MemoryView":643
- * self.dtype_is_object)
- *
- * return memoryview_copy_from_slice(self, &mslice) # <<<<<<<<<<<<<<
- *
- * def copy_fortran(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_mslice)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 643, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":633
- * return slice_is_contig(mslice[0], 'F', self.view.ndim)
- *
- * def copy(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice mslice
- * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.copy", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":645
- * return memoryview_copy_from_slice(self, &mslice)
- *
- * def copy_fortran(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice src, dst
- * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("copy_fortran (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self) {
- __Pyx_memviewslice __pyx_v_src;
- __Pyx_memviewslice __pyx_v_dst;
- int __pyx_v_flags;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("copy_fortran", 0);
-
- /* "View.MemoryView":647
- * def copy_fortran(self):
- * cdef __Pyx_memviewslice src, dst
- * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS # <<<<<<<<<<<<<<
- *
- * slice_copy(self, &src)
- */
- __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_C_CONTIGUOUS));
-
- /* "View.MemoryView":649
- * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS
- *
- * slice_copy(self, &src) # <<<<<<<<<<<<<<
- * dst = slice_copy_contig(&src, "fortran", self.view.ndim,
- * self.view.itemsize,
- */
- __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_src));
-
- /* "View.MemoryView":650
- *
- * slice_copy(self, &src)
- * dst = slice_copy_contig(&src, "fortran", self.view.ndim, # <<<<<<<<<<<<<<
- * self.view.itemsize,
- * flags|PyBUF_F_CONTIGUOUS,
- */
- __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_src), ((char *)"fortran"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_F_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 650, __pyx_L1_error)
- __pyx_v_dst = __pyx_t_1;
-
- /* "View.MemoryView":655
- * self.dtype_is_object)
- *
- * return memoryview_copy_from_slice(self, &dst) # <<<<<<<<<<<<<<
- *
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_dst)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 655, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":645
- * return memoryview_copy_from_slice(self, &mslice)
- *
- * def copy_fortran(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice src, dst
- * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.copy_fortran", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
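-/* Usage sketch (illustrative only, assumes NumPy is installed; NumPy is not used by
- * the generated code): copy() returns a C-contiguous copy of the data and
- * copy_fortran() a Fortran-contiguous one, roughly analogous to:
- *
- *     import numpy as np
- *     a = np.arange(12).reshape(3, 4)[:, ::2]        # a non-contiguous view
- *     c = np.ascontiguousarray(a)                    # ~ memoryview.copy()
- *     f = np.asfortranarray(a)                       # ~ memoryview.copy_fortran()
- */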
-/* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_memoryview___reduce_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__reduce_cython__", 0);
-
- /* "(tree fragment)":2
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 2, __pyx_L1_error)
-
- /* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
-static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_memoryview_2__setstate_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setstate_cython__", 0);
-
- /* "(tree fragment)":4
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 4, __pyx_L1_error)
-
- /* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
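`__reduce_cython__` and `__setstate_cython__` above exist only to block default pickling, since the memoryview holds C-level state set up in `__cinit__` that pickle cannot reconstruct. A minimal pure-Python sketch of the same opt-out pattern (the class name is invented for the example):

```python
import pickle

class HoldsCState:
    """Stand-in for an extension type whose real state lives in C."""
    def __reduce__(self):
        # Same message the generated __reduce_cython__ raises.
        raise TypeError("no default __reduce__ due to non-trivial __cinit__")

try:
    pickle.dumps(HoldsCState())
except TypeError as exc:
    print(exc)   # no default __reduce__ due to non-trivial __cinit__
```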
-/* "View.MemoryView":659
- *
- * @cname('__pyx_memoryview_new')
- * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<<
- * cdef memoryview result = memoryview(o, flags, dtype_is_object)
- * result.typeinfo = typeinfo
- */
-
-static PyObject *__pyx_memoryview_new(PyObject *__pyx_v_o, int __pyx_v_flags, int __pyx_v_dtype_is_object, __Pyx_TypeInfo *__pyx_v_typeinfo) {
- struct __pyx_memoryview_obj *__pyx_v_result = 0;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("memoryview_cwrapper", 0);
-
- /* "View.MemoryView":660
- * @cname('__pyx_memoryview_new')
- * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo):
- * cdef memoryview result = memoryview(o, flags, dtype_is_object) # <<<<<<<<<<<<<<
- * result.typeinfo = typeinfo
- * return result
- */
- __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 660, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 660, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 660, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_INCREF(__pyx_v_o);
- __Pyx_GIVEREF(__pyx_v_o);
- PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_o);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2);
- __pyx_t_1 = 0;
- __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 660, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_result = ((struct __pyx_memoryview_obj *)__pyx_t_2);
- __pyx_t_2 = 0;
-
- /* "View.MemoryView":661
- * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo):
- * cdef memoryview result = memoryview(o, flags, dtype_is_object)
- * result.typeinfo = typeinfo # <<<<<<<<<<<<<<
- * return result
- *
- */
- __pyx_v_result->typeinfo = __pyx_v_typeinfo;
-
- /* "View.MemoryView":662
- * cdef memoryview result = memoryview(o, flags, dtype_is_object)
- * result.typeinfo = typeinfo
- * return result # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_check')
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(((PyObject *)__pyx_v_result));
- __pyx_r = ((PyObject *)__pyx_v_result);
- goto __pyx_L0;
-
- /* "View.MemoryView":659
- *
- * @cname('__pyx_memoryview_new')
- * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<<
- * cdef memoryview result = memoryview(o, flags, dtype_is_object)
- * result.typeinfo = typeinfo
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_result);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
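`memoryview_cwrapper` is a thin C-level factory: it calls the Python-visible `memoryview(o, flags, dtype_is_object)` constructor and then attaches the `typeinfo` pointer, which the constructor itself does not accept. A hedged pure-Python sketch of that pattern (the class and field names below are placeholders, not the real Cython objects):

```python
class _MemoryviewStub:
    # Placeholder for the Cython-generated memoryview extension type.
    def __init__(self, obj, flags, dtype_is_object):
        self.obj = obj
        self.flags = flags
        self.dtype_is_object = dtype_is_object
        self.typeinfo = None              # set later, outside __init__

def memoryview_cwrapper(o, flags, dtype_is_object, typeinfo):
    result = _MemoryviewStub(o, flags, dtype_is_object)
    result.typeinfo = typeinfo            # metadata the constructor cannot take
    return result

mv = memoryview_cwrapper(b"abc", 0, False, typeinfo="<type info pointer>")
print(mv.typeinfo)
```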
-/* "View.MemoryView":665
- *
- * @cname('__pyx_memoryview_check')
- * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<<
- * return isinstance(o, memoryview)
- *
- */
-
-static CYTHON_INLINE int __pyx_memoryview_check(PyObject *__pyx_v_o) {
- int __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- __Pyx_RefNannySetupContext("memoryview_check", 0);
-
- /* "View.MemoryView":666
- * @cname('__pyx_memoryview_check')
- * cdef inline bint memoryview_check(object o):
- * return isinstance(o, memoryview) # <<<<<<<<<<<<<<
- *
- * cdef tuple _unellipsify(object index, int ndim):
- */
- __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_o, __pyx_memoryview_type);
- __pyx_r = __pyx_t_1;
- goto __pyx_L0;
-
- /* "View.MemoryView":665
- *
- * @cname('__pyx_memoryview_check')
- * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<<
- * return isinstance(o, memoryview)
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":668
- * return isinstance(o, memoryview)
- *
- * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<<
- * """
- * Replace all ellipses with full slices and fill incomplete indices with
- */
-
-static PyObject *_unellipsify(PyObject *__pyx_v_index, int __pyx_v_ndim) {
- PyObject *__pyx_v_tup = NULL;
- PyObject *__pyx_v_result = NULL;
- int __pyx_v_have_slices;
- int __pyx_v_seen_ellipsis;
- CYTHON_UNUSED PyObject *__pyx_v_idx = NULL;
- PyObject *__pyx_v_item = NULL;
- Py_ssize_t __pyx_v_nslices;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- Py_ssize_t __pyx_t_5;
- PyObject *(*__pyx_t_6)(PyObject *);
- PyObject *__pyx_t_7 = NULL;
- Py_ssize_t __pyx_t_8;
- int __pyx_t_9;
- int __pyx_t_10;
- PyObject *__pyx_t_11 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("_unellipsify", 0);
-
- /* "View.MemoryView":673
- * full slices.
- * """
- * if not isinstance(index, tuple): # <<<<<<<<<<<<<<
- * tup = (index,)
- * else:
- */
- __pyx_t_1 = PyTuple_Check(__pyx_v_index);
- __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":674
- * """
- * if not isinstance(index, tuple):
- * tup = (index,) # <<<<<<<<<<<<<<
- * else:
- * tup = index
- */
- __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 674, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_INCREF(__pyx_v_index);
- __Pyx_GIVEREF(__pyx_v_index);
- PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_index);
- __pyx_v_tup = __pyx_t_3;
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":673
- * full slices.
- * """
- * if not isinstance(index, tuple): # <<<<<<<<<<<<<<
- * tup = (index,)
- * else:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":676
- * tup = (index,)
- * else:
- * tup = index # <<<<<<<<<<<<<<
- *
- * result = []
- */
- /*else*/ {
- __Pyx_INCREF(__pyx_v_index);
- __pyx_v_tup = __pyx_v_index;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":678
- * tup = index
- *
- * result = [] # <<<<<<<<<<<<<<
- * have_slices = False
- * seen_ellipsis = False
- */
- __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 678, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_v_result = ((PyObject*)__pyx_t_3);
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":679
- *
- * result = []
- * have_slices = False # <<<<<<<<<<<<<<
- * seen_ellipsis = False
- * for idx, item in enumerate(tup):
- */
- __pyx_v_have_slices = 0;
-
- /* "View.MemoryView":680
- * result = []
- * have_slices = False
- * seen_ellipsis = False # <<<<<<<<<<<<<<
- * for idx, item in enumerate(tup):
- * if item is Ellipsis:
- */
- __pyx_v_seen_ellipsis = 0;
-
- /* "View.MemoryView":681
- * have_slices = False
- * seen_ellipsis = False
- * for idx, item in enumerate(tup): # <<<<<<<<<<<<<<
- * if item is Ellipsis:
- * if not seen_ellipsis:
- */
- __Pyx_INCREF(__pyx_int_0);
- __pyx_t_3 = __pyx_int_0;
- if (likely(PyList_CheckExact(__pyx_v_tup)) || PyTuple_CheckExact(__pyx_v_tup)) {
- __pyx_t_4 = __pyx_v_tup; __Pyx_INCREF(__pyx_t_4); __pyx_t_5 = 0;
- __pyx_t_6 = NULL;
- } else {
- __pyx_t_5 = -1; __pyx_t_4 = PyObject_GetIter(__pyx_v_tup); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 681, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_6 = Py_TYPE(__pyx_t_4)->tp_iternext; if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 681, __pyx_L1_error)
- }
- for (;;) {
- if (likely(!__pyx_t_6)) {
- if (likely(PyList_CheckExact(__pyx_t_4))) {
- if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_4)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_7 = PyList_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 681, __pyx_L1_error)
- #else
- __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 681, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- #endif
- } else {
- if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_4)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_7 = PyTuple_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 681, __pyx_L1_error)
- #else
- __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 681, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- #endif
- }
- } else {
- __pyx_t_7 = __pyx_t_6(__pyx_t_4);
- if (unlikely(!__pyx_t_7)) {
- PyObject* exc_type = PyErr_Occurred();
- if (exc_type) {
- if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
- else __PYX_ERR(1, 681, __pyx_L1_error)
- }
- break;
- }
- __Pyx_GOTREF(__pyx_t_7);
- }
- __Pyx_XDECREF_SET(__pyx_v_item, __pyx_t_7);
- __pyx_t_7 = 0;
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_3);
- __pyx_t_7 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 681, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- __Pyx_DECREF(__pyx_t_3);
- __pyx_t_3 = __pyx_t_7;
- __pyx_t_7 = 0;
-
- /* "View.MemoryView":682
- * seen_ellipsis = False
- * for idx, item in enumerate(tup):
- * if item is Ellipsis: # <<<<<<<<<<<<<<
- * if not seen_ellipsis:
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- */
- __pyx_t_2 = (__pyx_v_item == __pyx_builtin_Ellipsis);
- __pyx_t_1 = (__pyx_t_2 != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":683
- * for idx, item in enumerate(tup):
- * if item is Ellipsis:
- * if not seen_ellipsis: # <<<<<<<<<<<<<<
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- * seen_ellipsis = True
- */
- __pyx_t_1 = ((!(__pyx_v_seen_ellipsis != 0)) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":684
- * if item is Ellipsis:
- * if not seen_ellipsis:
- * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<<
- * seen_ellipsis = True
- * else:
- */
- __pyx_t_8 = PyObject_Length(__pyx_v_tup); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(1, 684, __pyx_L1_error)
- __pyx_t_7 = PyList_New(1 * ((((__pyx_v_ndim - __pyx_t_8) + 1)<0) ? 0:((__pyx_v_ndim - __pyx_t_8) + 1))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 684, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- { Py_ssize_t __pyx_temp;
- for (__pyx_temp=0; __pyx_temp < ((__pyx_v_ndim - __pyx_t_8) + 1); __pyx_temp++) {
- __Pyx_INCREF(__pyx_slice__16);
- __Pyx_GIVEREF(__pyx_slice__16);
- PyList_SET_ITEM(__pyx_t_7, __pyx_temp, __pyx_slice__16);
- }
- }
- __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_7); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 684, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
-
- /* "View.MemoryView":685
- * if not seen_ellipsis:
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- * seen_ellipsis = True # <<<<<<<<<<<<<<
- * else:
- * result.append(slice(None))
- */
- __pyx_v_seen_ellipsis = 1;
-
- /* "View.MemoryView":683
- * for idx, item in enumerate(tup):
- * if item is Ellipsis:
- * if not seen_ellipsis: # <<<<<<<<<<<<<<
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- * seen_ellipsis = True
- */
- goto __pyx_L7;
- }
-
- /* "View.MemoryView":687
- * seen_ellipsis = True
- * else:
- * result.append(slice(None)) # <<<<<<<<<<<<<<
- * have_slices = True
- * else:
- */
- /*else*/ {
- __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_slice__16); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 687, __pyx_L1_error)
- }
- __pyx_L7:;
-
- /* "View.MemoryView":688
- * else:
- * result.append(slice(None))
- * have_slices = True # <<<<<<<<<<<<<<
- * else:
- * if not isinstance(item, slice) and not PyIndex_Check(item):
- */
- __pyx_v_have_slices = 1;
-
- /* "View.MemoryView":682
- * seen_ellipsis = False
- * for idx, item in enumerate(tup):
- * if item is Ellipsis: # <<<<<<<<<<<<<<
- * if not seen_ellipsis:
- * result.extend([slice(None)] * (ndim - len(tup) + 1))
- */
- goto __pyx_L6;
- }
-
- /* "View.MemoryView":690
- * have_slices = True
- * else:
- * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<<
- * raise TypeError("Cannot index with type '%s'" % type(item))
- *
- */
- /*else*/ {
- __pyx_t_2 = PySlice_Check(__pyx_v_item);
- __pyx_t_10 = ((!(__pyx_t_2 != 0)) != 0);
- if (__pyx_t_10) {
- } else {
- __pyx_t_1 = __pyx_t_10;
- goto __pyx_L9_bool_binop_done;
- }
- __pyx_t_10 = ((!(PyIndex_Check(__pyx_v_item) != 0)) != 0);
- __pyx_t_1 = __pyx_t_10;
- __pyx_L9_bool_binop_done:;
- if (unlikely(__pyx_t_1)) {
-
- /* "View.MemoryView":691
- * else:
- * if not isinstance(item, slice) and not PyIndex_Check(item):
- * raise TypeError("Cannot index with type '%s'" % type(item)) # <<<<<<<<<<<<<<
- *
- * have_slices = have_slices or isinstance(item, slice)
- */
- __pyx_t_7 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Cannot_index_with_type_s, ((PyObject *)Py_TYPE(__pyx_v_item))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 691, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_builtin_TypeError, __pyx_t_7); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 691, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_11);
- __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_Raise(__pyx_t_11, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
- __PYX_ERR(1, 691, __pyx_L1_error)
-
- /* "View.MemoryView":690
- * have_slices = True
- * else:
- * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<<
- * raise TypeError("Cannot index with type '%s'" % type(item))
- *
- */
- }
-
- /* "View.MemoryView":693
- * raise TypeError("Cannot index with type '%s'" % type(item))
- *
- * have_slices = have_slices or isinstance(item, slice) # <<<<<<<<<<<<<<
- * result.append(item)
- *
- */
- __pyx_t_10 = (__pyx_v_have_slices != 0);
- if (!__pyx_t_10) {
- } else {
- __pyx_t_1 = __pyx_t_10;
- goto __pyx_L11_bool_binop_done;
- }
- __pyx_t_10 = PySlice_Check(__pyx_v_item);
- __pyx_t_2 = (__pyx_t_10 != 0);
- __pyx_t_1 = __pyx_t_2;
- __pyx_L11_bool_binop_done:;
- __pyx_v_have_slices = __pyx_t_1;
-
- /* "View.MemoryView":694
- *
- * have_slices = have_slices or isinstance(item, slice)
- * result.append(item) # <<<<<<<<<<<<<<
- *
- * nslices = ndim - len(result)
- */
- __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_v_item); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 694, __pyx_L1_error)
- }
- __pyx_L6:;
-
- /* "View.MemoryView":681
- * have_slices = False
- * seen_ellipsis = False
- * for idx, item in enumerate(tup): # <<<<<<<<<<<<<<
- * if item is Ellipsis:
- * if not seen_ellipsis:
- */
- }
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":696
- * result.append(item)
- *
- * nslices = ndim - len(result) # <<<<<<<<<<<<<<
- * if nslices:
- * result.extend([slice(None)] * nslices)
- */
- __pyx_t_5 = PyList_GET_SIZE(__pyx_v_result); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(1, 696, __pyx_L1_error)
- __pyx_v_nslices = (__pyx_v_ndim - __pyx_t_5);
-
- /* "View.MemoryView":697
- *
- * nslices = ndim - len(result)
- * if nslices: # <<<<<<<<<<<<<<
- * result.extend([slice(None)] * nslices)
- *
- */
- __pyx_t_1 = (__pyx_v_nslices != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":698
- * nslices = ndim - len(result)
- * if nslices:
- * result.extend([slice(None)] * nslices) # <<<<<<<<<<<<<<
- *
- * return have_slices or nslices, tuple(result)
- */
- __pyx_t_3 = PyList_New(1 * ((__pyx_v_nslices<0) ? 0:__pyx_v_nslices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 698, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- { Py_ssize_t __pyx_temp;
- for (__pyx_temp=0; __pyx_temp < __pyx_v_nslices; __pyx_temp++) {
- __Pyx_INCREF(__pyx_slice__16);
- __Pyx_GIVEREF(__pyx_slice__16);
- PyList_SET_ITEM(__pyx_t_3, __pyx_temp, __pyx_slice__16);
- }
- }
- __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_3); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 698, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":697
- *
- * nslices = ndim - len(result)
- * if nslices: # <<<<<<<<<<<<<<
- * result.extend([slice(None)] * nslices)
- *
- */
- }
-
- /* "View.MemoryView":700
- * result.extend([slice(None)] * nslices)
- *
- * return have_slices or nslices, tuple(result) # <<<<<<<<<<<<<<
- *
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim):
- */
- __Pyx_XDECREF(__pyx_r);
- if (!__pyx_v_have_slices) {
- } else {
- __pyx_t_4 = __Pyx_PyBool_FromLong(__pyx_v_have_slices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 700, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_3 = __pyx_t_4;
- __pyx_t_4 = 0;
- goto __pyx_L14_bool_binop_done;
- }
- __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_nslices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 700, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_3 = __pyx_t_4;
- __pyx_t_4 = 0;
- __pyx_L14_bool_binop_done:;
- __pyx_t_4 = PyList_AsTuple(__pyx_v_result); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 700, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 700, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_11);
- __Pyx_GIVEREF(__pyx_t_3);
- PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_3);
- __Pyx_GIVEREF(__pyx_t_4);
- PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_t_4);
- __pyx_t_3 = 0;
- __pyx_t_4 = 0;
- __pyx_r = ((PyObject*)__pyx_t_11);
- __pyx_t_11 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":668
- * return isinstance(o, memoryview)
- *
- * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<<
- * """
- * Replace all ellipses with full slices and fill incomplete indices with
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_7);
- __Pyx_XDECREF(__pyx_t_11);
- __Pyx_AddTraceback("View.MemoryView._unellipsify", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_tup);
- __Pyx_XDECREF(__pyx_v_result);
- __Pyx_XDECREF(__pyx_v_idx);
- __Pyx_XDECREF(__pyx_v_item);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
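`_unellipsify` normalises an indexing expression before slicing: a bare index is wrapped in a tuple, the first `Ellipsis` expands into enough full slices to reach `ndim` entries, later ellipses become single full slices, non-slice/non-integer items raise `TypeError`, and missing trailing dimensions are padded. A pure-Python transcription of the Cython source shown in the comments above (`hasattr(item, "__index__")` stands in for the C-level `PyIndex_Check`):

```python
def unellipsify(index, ndim):
    tup = index if isinstance(index, tuple) else (index,)
    result, have_slices, seen_ellipsis = [], False, False
    for item in tup:
        if item is Ellipsis:
            if not seen_ellipsis:
                # First Ellipsis: expand to as many full slices as needed.
                result.extend([slice(None)] * (ndim - len(tup) + 1))
                seen_ellipsis = True
            else:
                result.append(slice(None))
            have_slices = True
        else:
            if not isinstance(item, slice) and not hasattr(item, "__index__"):
                raise TypeError("Cannot index with type '%s'" % type(item))
            have_slices = have_slices or isinstance(item, slice)
            result.append(item)
    nslices = ndim - len(result)          # pad any dimensions not mentioned
    if nslices:
        result.extend([slice(None)] * nslices)
    return have_slices or nslices, tuple(result)

print(unellipsify((Ellipsis, 0), 3))
# (True, (slice(None, None, None), slice(None, None, None), 0))
```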
-/* "View.MemoryView":702
- * return have_slices or nslices, tuple(result)
- *
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<<
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0:
- */
-
-static PyObject *assert_direct_dimensions(Py_ssize_t *__pyx_v_suboffsets, int __pyx_v_ndim) {
- Py_ssize_t __pyx_v_suboffset;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- Py_ssize_t *__pyx_t_1;
- Py_ssize_t *__pyx_t_2;
- Py_ssize_t *__pyx_t_3;
- int __pyx_t_4;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("assert_direct_dimensions", 0);
-
- /* "View.MemoryView":703
- *
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim):
- * for suboffset in suboffsets[:ndim]: # <<<<<<<<<<<<<<
- * if suboffset >= 0:
- * raise ValueError("Indirect dimensions not supported")
- */
- __pyx_t_2 = (__pyx_v_suboffsets + __pyx_v_ndim);
- for (__pyx_t_3 = __pyx_v_suboffsets; __pyx_t_3 < __pyx_t_2; __pyx_t_3++) {
- __pyx_t_1 = __pyx_t_3;
- __pyx_v_suboffset = (__pyx_t_1[0]);
-
- /* "View.MemoryView":704
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim):
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * raise ValueError("Indirect dimensions not supported")
- *
- */
- __pyx_t_4 = ((__pyx_v_suboffset >= 0) != 0);
- if (unlikely(__pyx_t_4)) {
-
- /* "View.MemoryView":705
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0:
- * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__17, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 705, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_Raise(__pyx_t_5, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __PYX_ERR(1, 705, __pyx_L1_error)
-
- /* "View.MemoryView":704
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim):
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * raise ValueError("Indirect dimensions not supported")
- *
- */
- }
- }
-
- /* "View.MemoryView":702
- * return have_slices or nslices, tuple(result)
- *
- * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<<
- * for suboffset in suboffsets[:ndim]:
- * if suboffset >= 0:
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.assert_direct_dimensions", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
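`assert_direct_dimensions` simply scans the first `ndim` suboffsets and rejects any non-negative entry, because a non-negative suboffset marks an indirect (pointer-chasing) dimension in the PEP 3118 buffer sense. The pure-Python equivalent of the source shown above:

```python
def assert_direct_dimensions(suboffsets, ndim):
    for suboffset in suboffsets[:ndim]:
        if suboffset >= 0:                # >= 0 means "follow a pointer here"
            raise ValueError("Indirect dimensions not supported")

assert_direct_dimensions([-1, -1], 2)     # passes: all dimensions are direct
```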
-/* "View.MemoryView":712
- *
- * @cname('__pyx_memview_slice')
- * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<<
- * cdef int new_ndim = 0, suboffset_dim = -1, dim
- * cdef bint negative_step
- */
-
-static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *__pyx_v_memview, PyObject *__pyx_v_indices) {
- int __pyx_v_new_ndim;
- int __pyx_v_suboffset_dim;
- int __pyx_v_dim;
- __Pyx_memviewslice __pyx_v_src;
- __Pyx_memviewslice __pyx_v_dst;
- __Pyx_memviewslice *__pyx_v_p_src;
- struct __pyx_memoryviewslice_obj *__pyx_v_memviewsliceobj = 0;
- __Pyx_memviewslice *__pyx_v_p_dst;
- int *__pyx_v_p_suboffset_dim;
- Py_ssize_t __pyx_v_start;
- Py_ssize_t __pyx_v_stop;
- Py_ssize_t __pyx_v_step;
- int __pyx_v_have_start;
- int __pyx_v_have_stop;
- int __pyx_v_have_step;
- PyObject *__pyx_v_index = NULL;
- struct __pyx_memoryview_obj *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- struct __pyx_memoryview_obj *__pyx_t_4;
- char *__pyx_t_5;
- int __pyx_t_6;
- Py_ssize_t __pyx_t_7;
- PyObject *(*__pyx_t_8)(PyObject *);
- PyObject *__pyx_t_9 = NULL;
- Py_ssize_t __pyx_t_10;
- int __pyx_t_11;
- Py_ssize_t __pyx_t_12;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("memview_slice", 0);
-
- /* "View.MemoryView":713
- * @cname('__pyx_memview_slice')
- * cdef memoryview memview_slice(memoryview memview, object indices):
- * cdef int new_ndim = 0, suboffset_dim = -1, dim # <<<<<<<<<<<<<<
- * cdef bint negative_step
- * cdef __Pyx_memviewslice src, dst
- */
- __pyx_v_new_ndim = 0;
- __pyx_v_suboffset_dim = -1;
-
- /* "View.MemoryView":720
- *
- *
- * memset(&dst, 0, sizeof(dst)) # <<<<<<<<<<<<<<
- *
- * cdef _memoryviewslice memviewsliceobj
- */
- (void)(memset((&__pyx_v_dst), 0, (sizeof(__pyx_v_dst))));
-
- /* "View.MemoryView":724
- * cdef _memoryviewslice memviewsliceobj
- *
- * assert memview.view.ndim > 0 # <<<<<<<<<<<<<<
- *
- * if isinstance(memview, _memoryviewslice):
- */
- #ifndef CYTHON_WITHOUT_ASSERTIONS
- if (unlikely(!Py_OptimizeFlag)) {
- if (unlikely(!((__pyx_v_memview->view.ndim > 0) != 0))) {
- PyErr_SetNone(PyExc_AssertionError);
- __PYX_ERR(1, 724, __pyx_L1_error)
- }
- }
- #endif
-
- /* "View.MemoryView":726
- * assert memview.view.ndim > 0
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * memviewsliceobj = memview
- * p_src = &memviewsliceobj.from_slice
- */
- __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":727
- *
- * if isinstance(memview, _memoryviewslice):
- * memviewsliceobj = memview # <<<<<<<<<<<<<<
- * p_src = &memviewsliceobj.from_slice
- * else:
- */
- if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 727, __pyx_L1_error)
- __pyx_t_3 = ((PyObject *)__pyx_v_memview);
- __Pyx_INCREF(__pyx_t_3);
- __pyx_v_memviewsliceobj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3);
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":728
- * if isinstance(memview, _memoryviewslice):
- * memviewsliceobj = memview
- * p_src = &memviewsliceobj.from_slice # <<<<<<<<<<<<<<
- * else:
- * slice_copy(memview, &src)
- */
- __pyx_v_p_src = (&__pyx_v_memviewsliceobj->from_slice);
-
- /* "View.MemoryView":726
- * assert memview.view.ndim > 0
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * memviewsliceobj = memview
- * p_src = &memviewsliceobj.from_slice
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":730
- * p_src = &memviewsliceobj.from_slice
- * else:
- * slice_copy(memview, &src) # <<<<<<<<<<<<<<
- * p_src = &src
- *
- */
- /*else*/ {
- __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_src));
-
- /* "View.MemoryView":731
- * else:
- * slice_copy(memview, &src)
- * p_src = &src # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_p_src = (&__pyx_v_src);
- }
- __pyx_L3:;
-
- /* "View.MemoryView":737
- *
- *
- * dst.memview = p_src.memview # <<<<<<<<<<<<<<
- * dst.data = p_src.data
- *
- */
- __pyx_t_4 = __pyx_v_p_src->memview;
- __pyx_v_dst.memview = __pyx_t_4;
-
- /* "View.MemoryView":738
- *
- * dst.memview = p_src.memview
- * dst.data = p_src.data # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_5 = __pyx_v_p_src->data;
- __pyx_v_dst.data = __pyx_t_5;
-
- /* "View.MemoryView":743
- *
- *
- * cdef __Pyx_memviewslice *p_dst = &dst # <<<<<<<<<<<<<<
- * cdef int *p_suboffset_dim = &suboffset_dim
- * cdef Py_ssize_t start, stop, step
- */
- __pyx_v_p_dst = (&__pyx_v_dst);
-
- /* "View.MemoryView":744
- *
- * cdef __Pyx_memviewslice *p_dst = &dst
- * cdef int *p_suboffset_dim = &suboffset_dim # <<<<<<<<<<<<<<
- * cdef Py_ssize_t start, stop, step
- * cdef bint have_start, have_stop, have_step
- */
- __pyx_v_p_suboffset_dim = (&__pyx_v_suboffset_dim);
-
- /* "View.MemoryView":748
- * cdef bint have_start, have_stop, have_step
- *
- * for dim, index in enumerate(indices): # <<<<<<<<<<<<<<
- * if PyIndex_Check(index):
- * slice_memviewslice(
- */
- __pyx_t_6 = 0;
- if (likely(PyList_CheckExact(__pyx_v_indices)) || PyTuple_CheckExact(__pyx_v_indices)) {
- __pyx_t_3 = __pyx_v_indices; __Pyx_INCREF(__pyx_t_3); __pyx_t_7 = 0;
- __pyx_t_8 = NULL;
- } else {
- __pyx_t_7 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_v_indices); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 748, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_8 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 748, __pyx_L1_error)
- }
- for (;;) {
- if (likely(!__pyx_t_8)) {
- if (likely(PyList_CheckExact(__pyx_t_3))) {
- if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_3)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_9 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 748, __pyx_L1_error)
- #else
- __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 748, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- #endif
- } else {
- if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_3)) break;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- __pyx_t_9 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 748, __pyx_L1_error)
- #else
- __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 748, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- #endif
- }
- } else {
- __pyx_t_9 = __pyx_t_8(__pyx_t_3);
- if (unlikely(!__pyx_t_9)) {
- PyObject* exc_type = PyErr_Occurred();
- if (exc_type) {
- if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
- else __PYX_ERR(1, 748, __pyx_L1_error)
- }
- break;
- }
- __Pyx_GOTREF(__pyx_t_9);
- }
- __Pyx_XDECREF_SET(__pyx_v_index, __pyx_t_9);
- __pyx_t_9 = 0;
- __pyx_v_dim = __pyx_t_6;
- __pyx_t_6 = (__pyx_t_6 + 1);
-
- /* "View.MemoryView":749
- *
- * for dim, index in enumerate(indices):
- * if PyIndex_Check(index): # <<<<<<<<<<<<<<
- * slice_memviewslice(
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- */
- __pyx_t_2 = (PyIndex_Check(__pyx_v_index) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":753
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- * dim, new_ndim, p_suboffset_dim,
- * index, 0, 0, # start, stop, step # <<<<<<<<<<<<<<
- * 0, 0, 0, # have_{start,stop,step}
- * False)
- */
- __pyx_t_10 = __Pyx_PyIndex_AsSsize_t(__pyx_v_index); if (unlikely((__pyx_t_10 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 753, __pyx_L1_error)
-
- /* "View.MemoryView":750
- * for dim, index in enumerate(indices):
- * if PyIndex_Check(index):
- * slice_memviewslice( # <<<<<<<<<<<<<<
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- * dim, new_ndim, p_suboffset_dim,
- */
- __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_t_10, 0, 0, 0, 0, 0, 0); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 750, __pyx_L1_error)
-
- /* "View.MemoryView":749
- *
- * for dim, index in enumerate(indices):
- * if PyIndex_Check(index): # <<<<<<<<<<<<<<
- * slice_memviewslice(
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- */
- goto __pyx_L6;
- }
-
- /* "View.MemoryView":756
- * 0, 0, 0, # have_{start,stop,step}
- * False)
- * elif index is None: # <<<<<<<<<<<<<<
- * p_dst.shape[new_ndim] = 1
- * p_dst.strides[new_ndim] = 0
- */
- __pyx_t_2 = (__pyx_v_index == Py_None);
- __pyx_t_1 = (__pyx_t_2 != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":757
- * False)
- * elif index is None:
- * p_dst.shape[new_ndim] = 1 # <<<<<<<<<<<<<<
- * p_dst.strides[new_ndim] = 0
- * p_dst.suboffsets[new_ndim] = -1
- */
- (__pyx_v_p_dst->shape[__pyx_v_new_ndim]) = 1;
-
- /* "View.MemoryView":758
- * elif index is None:
- * p_dst.shape[new_ndim] = 1
- * p_dst.strides[new_ndim] = 0 # <<<<<<<<<<<<<<
- * p_dst.suboffsets[new_ndim] = -1
- * new_ndim += 1
- */
- (__pyx_v_p_dst->strides[__pyx_v_new_ndim]) = 0;
-
- /* "View.MemoryView":759
- * p_dst.shape[new_ndim] = 1
- * p_dst.strides[new_ndim] = 0
- * p_dst.suboffsets[new_ndim] = -1 # <<<<<<<<<<<<<<
- * new_ndim += 1
- * else:
- */
- (__pyx_v_p_dst->suboffsets[__pyx_v_new_ndim]) = -1L;
-
- /* "View.MemoryView":760
- * p_dst.strides[new_ndim] = 0
- * p_dst.suboffsets[new_ndim] = -1
- * new_ndim += 1 # <<<<<<<<<<<<<<
- * else:
- * start = index.start or 0
- */
- __pyx_v_new_ndim = (__pyx_v_new_ndim + 1);
-
- /* "View.MemoryView":756
- * 0, 0, 0, # have_{start,stop,step}
- * False)
- * elif index is None: # <<<<<<<<<<<<<<
- * p_dst.shape[new_ndim] = 1
- * p_dst.strides[new_ndim] = 0
- */
- goto __pyx_L6;
- }
-
- /* "View.MemoryView":762
- * new_ndim += 1
- * else:
- * start = index.start or 0 # <<<<<<<<<<<<<<
- * stop = index.stop or 0
- * step = index.step or 0
- */
- /*else*/ {
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 762, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 762, __pyx_L1_error)
- if (!__pyx_t_1) {
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- } else {
- __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 762, __pyx_L1_error)
- __pyx_t_10 = __pyx_t_12;
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- goto __pyx_L7_bool_binop_done;
- }
- __pyx_t_10 = 0;
- __pyx_L7_bool_binop_done:;
- __pyx_v_start = __pyx_t_10;
-
- /* "View.MemoryView":763
- * else:
- * start = index.start or 0
- * stop = index.stop or 0 # <<<<<<<<<<<<<<
- * step = index.step or 0
- *
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 763, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 763, __pyx_L1_error)
- if (!__pyx_t_1) {
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- } else {
- __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 763, __pyx_L1_error)
- __pyx_t_10 = __pyx_t_12;
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- goto __pyx_L9_bool_binop_done;
- }
- __pyx_t_10 = 0;
- __pyx_L9_bool_binop_done:;
- __pyx_v_stop = __pyx_t_10;
-
- /* "View.MemoryView":764
- * start = index.start or 0
- * stop = index.stop or 0
- * step = index.step or 0 # <<<<<<<<<<<<<<
- *
- * have_start = index.start is not None
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 764, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 764, __pyx_L1_error)
- if (!__pyx_t_1) {
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- } else {
- __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 764, __pyx_L1_error)
- __pyx_t_10 = __pyx_t_12;
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- goto __pyx_L11_bool_binop_done;
- }
- __pyx_t_10 = 0;
- __pyx_L11_bool_binop_done:;
- __pyx_v_step = __pyx_t_10;
-
- /* "View.MemoryView":766
- * step = index.step or 0
- *
- * have_start = index.start is not None # <<<<<<<<<<<<<<
- * have_stop = index.stop is not None
- * have_step = index.step is not None
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 766, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = (__pyx_t_9 != Py_None);
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- __pyx_v_have_start = __pyx_t_1;
-
- /* "View.MemoryView":767
- *
- * have_start = index.start is not None
- * have_stop = index.stop is not None # <<<<<<<<<<<<<<
- * have_step = index.step is not None
- *
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 767, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = (__pyx_t_9 != Py_None);
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- __pyx_v_have_stop = __pyx_t_1;
-
- /* "View.MemoryView":768
- * have_start = index.start is not None
- * have_stop = index.stop is not None
- * have_step = index.step is not None # <<<<<<<<<<<<<<
- *
- * slice_memviewslice(
- */
- __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 768, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __pyx_t_1 = (__pyx_t_9 != Py_None);
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- __pyx_v_have_step = __pyx_t_1;
-
- /* "View.MemoryView":770
- * have_step = index.step is not None
- *
- * slice_memviewslice( # <<<<<<<<<<<<<<
- * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim],
- * dim, new_ndim, p_suboffset_dim,
- */
- __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_v_start, __pyx_v_stop, __pyx_v_step, __pyx_v_have_start, __pyx_v_have_stop, __pyx_v_have_step, 1); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 770, __pyx_L1_error)
-
- /* "View.MemoryView":776
- * have_start, have_stop, have_step,
- * True)
- * new_ndim += 1 # <<<<<<<<<<<<<<
- *
- * if isinstance(memview, _memoryviewslice):
- */
- __pyx_v_new_ndim = (__pyx_v_new_ndim + 1);
- }
- __pyx_L6:;
-
- /* "View.MemoryView":748
- * cdef bint have_start, have_stop, have_step
- *
- * for dim, index in enumerate(indices): # <<<<<<<<<<<<<<
- * if PyIndex_Check(index):
- * slice_memviewslice(
- */
- }
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "View.MemoryView":778
- * new_ndim += 1
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * return memoryview_fromslice(dst, new_ndim,
- * memviewsliceobj.to_object_func,
- */
- __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":779
- *
- * if isinstance(memview, _memoryviewslice):
- * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<<
- * memviewsliceobj.to_object_func,
- * memviewsliceobj.to_dtype_func,
- */
- __Pyx_XDECREF(((PyObject *)__pyx_r));
-
- /* "View.MemoryView":780
- * if isinstance(memview, _memoryviewslice):
- * return memoryview_fromslice(dst, new_ndim,
- * memviewsliceobj.to_object_func, # <<<<<<<<<<<<<<
- * memviewsliceobj.to_dtype_func,
- * memview.dtype_is_object)
- */
- if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 780, __pyx_L1_error) }
-
- /* "View.MemoryView":781
- * return memoryview_fromslice(dst, new_ndim,
- * memviewsliceobj.to_object_func,
- * memviewsliceobj.to_dtype_func, # <<<<<<<<<<<<<<
- * memview.dtype_is_object)
- * else:
- */
- if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 781, __pyx_L1_error) }
-
- /* "View.MemoryView":779
- *
- * if isinstance(memview, _memoryviewslice):
- * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<<
- * memviewsliceobj.to_object_func,
- * memviewsliceobj.to_dtype_func,
- */
- __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, __pyx_v_memviewsliceobj->to_object_func, __pyx_v_memviewsliceobj->to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 779, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 779, __pyx_L1_error)
- __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3);
- __pyx_t_3 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":778
- * new_ndim += 1
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * return memoryview_fromslice(dst, new_ndim,
- * memviewsliceobj.to_object_func,
- */
- }
-
- /* "View.MemoryView":784
- * memview.dtype_is_object)
- * else:
- * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<<
- * memview.dtype_is_object)
- *
- */
- /*else*/ {
- __Pyx_XDECREF(((PyObject *)__pyx_r));
-
- /* "View.MemoryView":785
- * else:
- * return memoryview_fromslice(dst, new_ndim, NULL, NULL,
- * memview.dtype_is_object) # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, NULL, NULL, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 784, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
-
- /* "View.MemoryView":784
- * memview.dtype_is_object)
- * else:
- * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<<
- * memview.dtype_is_object)
- *
- */
- if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 784, __pyx_L1_error)
- __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3);
- __pyx_t_3 = 0;
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":712
- *
- * @cname('__pyx_memview_slice')
- * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<<
- * cdef int new_ndim = 0, suboffset_dim = -1, dim
- * cdef bint negative_step
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_9);
- __Pyx_AddTraceback("View.MemoryView.memview_slice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_memviewsliceobj);
- __Pyx_XDECREF(__pyx_v_index);
- __Pyx_XGIVEREF((PyObject *)__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
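Inside the loop of `memview_slice` above, each slice object is unpacked into `start`/`stop`/`step` (defaulting to 0 when absent) plus three `have_*` flags, precisely so that `slice_memviewslice` can tell a real 0 from a missing bound. A small sketch of that unpacking step (the function name is ours, not Cython's):

```python
def read_slice(index):
    start = index.start or 0
    stop = index.stop or 0
    step = index.step or 0
    return (start, stop, step,
            index.start is not None,   # have_start
            index.stop is not None,    # have_stop
            index.step is not None)    # have_step

print(read_slice(slice(2, None, -1)))  # (2, 0, -1, True, False, True)
```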
-/* "View.MemoryView":809
- *
- * @cname('__pyx_memoryview_slice_memviewslice')
- * cdef int slice_memviewslice( # <<<<<<<<<<<<<<
- * __Pyx_memviewslice *dst,
- * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset,
- */
-
-static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *__pyx_v_dst, Py_ssize_t __pyx_v_shape, Py_ssize_t __pyx_v_stride, Py_ssize_t __pyx_v_suboffset, int __pyx_v_dim, int __pyx_v_new_ndim, int *__pyx_v_suboffset_dim, Py_ssize_t __pyx_v_start, Py_ssize_t __pyx_v_stop, Py_ssize_t __pyx_v_step, int __pyx_v_have_start, int __pyx_v_have_stop, int __pyx_v_have_step, int __pyx_v_is_slice) {
- Py_ssize_t __pyx_v_new_shape;
- int __pyx_v_negative_step;
- int __pyx_r;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
-
- /* "View.MemoryView":829
- * cdef bint negative_step
- *
- * if not is_slice: # <<<<<<<<<<<<<<
- *
- * if start < 0:
- */
- __pyx_t_1 = ((!(__pyx_v_is_slice != 0)) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":831
- * if not is_slice:
- *
- * if start < 0: # <<<<<<<<<<<<<<
- * start += shape
- * if not 0 <= start < shape:
- */
- __pyx_t_1 = ((__pyx_v_start < 0) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":832
- *
- * if start < 0:
- * start += shape # <<<<<<<<<<<<<<
- * if not 0 <= start < shape:
- * _err_dim(IndexError, "Index out of bounds (axis %d)", dim)
- */
- __pyx_v_start = (__pyx_v_start + __pyx_v_shape);
-
- /* "View.MemoryView":831
- * if not is_slice:
- *
- * if start < 0: # <<<<<<<<<<<<<<
- * start += shape
- * if not 0 <= start < shape:
- */
- }
-
- /* "View.MemoryView":833
- * if start < 0:
- * start += shape
- * if not 0 <= start < shape: # <<<<<<<<<<<<<<
- * _err_dim(IndexError, "Index out of bounds (axis %d)", dim)
- * else:
- */
- __pyx_t_1 = (0 <= __pyx_v_start);
- if (__pyx_t_1) {
- __pyx_t_1 = (__pyx_v_start < __pyx_v_shape);
- }
- __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":834
- * start += shape
- * if not 0 <= start < shape:
- * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) # <<<<<<<<<<<<<<
- * else:
- *
- */
- __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"Index out of bounds (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 834, __pyx_L1_error)
-
- /* "View.MemoryView":833
- * if start < 0:
- * start += shape
- * if not 0 <= start < shape: # <<<<<<<<<<<<<<
- * _err_dim(IndexError, "Index out of bounds (axis %d)", dim)
- * else:
- */
- }
-
- /* "View.MemoryView":829
- * cdef bint negative_step
- *
- * if not is_slice: # <<<<<<<<<<<<<<
- *
- * if start < 0:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":837
- * else:
- *
- * negative_step = have_step != 0 and step < 0 # <<<<<<<<<<<<<<
- *
- * if have_step and step == 0:
- */
- /*else*/ {
- __pyx_t_1 = ((__pyx_v_have_step != 0) != 0);
- if (__pyx_t_1) {
- } else {
- __pyx_t_2 = __pyx_t_1;
- goto __pyx_L6_bool_binop_done;
- }
- __pyx_t_1 = ((__pyx_v_step < 0) != 0);
- __pyx_t_2 = __pyx_t_1;
- __pyx_L6_bool_binop_done:;
- __pyx_v_negative_step = __pyx_t_2;
-
- /* "View.MemoryView":839
- * negative_step = have_step != 0 and step < 0
- *
- * if have_step and step == 0: # <<<<<<<<<<<<<<
- * _err_dim(ValueError, "Step may not be zero (axis %d)", dim)
- *
- */
- __pyx_t_1 = (__pyx_v_have_step != 0);
- if (__pyx_t_1) {
- } else {
- __pyx_t_2 = __pyx_t_1;
- goto __pyx_L9_bool_binop_done;
- }
- __pyx_t_1 = ((__pyx_v_step == 0) != 0);
- __pyx_t_2 = __pyx_t_1;
- __pyx_L9_bool_binop_done:;
- if (__pyx_t_2) {
-
- /* "View.MemoryView":840
- *
- * if have_step and step == 0:
- * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Step may not be zero (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 840, __pyx_L1_error)
-
- /* "View.MemoryView":839
- * negative_step = have_step != 0 and step < 0
- *
- * if have_step and step == 0: # <<<<<<<<<<<<<<
- * _err_dim(ValueError, "Step may not be zero (axis %d)", dim)
- *
- */
- }
-
- /* "View.MemoryView":843
- *
- *
- * if have_start: # <<<<<<<<<<<<<<
- * if start < 0:
- * start += shape
- */
- __pyx_t_2 = (__pyx_v_have_start != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":844
- *
- * if have_start:
- * if start < 0: # <<<<<<<<<<<<<<
- * start += shape
- * if start < 0:
- */
- __pyx_t_2 = ((__pyx_v_start < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":845
- * if have_start:
- * if start < 0:
- * start += shape # <<<<<<<<<<<<<<
- * if start < 0:
- * start = 0
- */
- __pyx_v_start = (__pyx_v_start + __pyx_v_shape);
-
- /* "View.MemoryView":846
- * if start < 0:
- * start += shape
- * if start < 0: # <<<<<<<<<<<<<<
- * start = 0
- * elif start >= shape:
- */
- __pyx_t_2 = ((__pyx_v_start < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":847
- * start += shape
- * if start < 0:
- * start = 0 # <<<<<<<<<<<<<<
- * elif start >= shape:
- * if negative_step:
- */
- __pyx_v_start = 0;
-
- /* "View.MemoryView":846
- * if start < 0:
- * start += shape
- * if start < 0: # <<<<<<<<<<<<<<
- * start = 0
- * elif start >= shape:
- */
- }
-
- /* "View.MemoryView":844
- *
- * if have_start:
- * if start < 0: # <<<<<<<<<<<<<<
- * start += shape
- * if start < 0:
- */
- goto __pyx_L12;
- }
-
- /* "View.MemoryView":848
- * if start < 0:
- * start = 0
- * elif start >= shape: # <<<<<<<<<<<<<<
- * if negative_step:
- * start = shape - 1
- */
- __pyx_t_2 = ((__pyx_v_start >= __pyx_v_shape) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":849
- * start = 0
- * elif start >= shape:
- * if negative_step: # <<<<<<<<<<<<<<
- * start = shape - 1
- * else:
- */
- __pyx_t_2 = (__pyx_v_negative_step != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":850
- * elif start >= shape:
- * if negative_step:
- * start = shape - 1 # <<<<<<<<<<<<<<
- * else:
- * start = shape
- */
- __pyx_v_start = (__pyx_v_shape - 1);
-
- /* "View.MemoryView":849
- * start = 0
- * elif start >= shape:
- * if negative_step: # <<<<<<<<<<<<<<
- * start = shape - 1
- * else:
- */
- goto __pyx_L14;
- }
-
- /* "View.MemoryView":852
- * start = shape - 1
- * else:
- * start = shape # <<<<<<<<<<<<<<
- * else:
- * if negative_step:
- */
- /*else*/ {
- __pyx_v_start = __pyx_v_shape;
- }
- __pyx_L14:;
-
- /* "View.MemoryView":848
- * if start < 0:
- * start = 0
- * elif start >= shape: # <<<<<<<<<<<<<<
- * if negative_step:
- * start = shape - 1
- */
- }
- __pyx_L12:;
-
- /* "View.MemoryView":843
- *
- *
- * if have_start: # <<<<<<<<<<<<<<
- * if start < 0:
- * start += shape
- */
- goto __pyx_L11;
- }
-
- /* "View.MemoryView":854
- * start = shape
- * else:
- * if negative_step: # <<<<<<<<<<<<<<
- * start = shape - 1
- * else:
- */
- /*else*/ {
- __pyx_t_2 = (__pyx_v_negative_step != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":855
- * else:
- * if negative_step:
- * start = shape - 1 # <<<<<<<<<<<<<<
- * else:
- * start = 0
- */
- __pyx_v_start = (__pyx_v_shape - 1);
-
- /* "View.MemoryView":854
- * start = shape
- * else:
- * if negative_step: # <<<<<<<<<<<<<<
- * start = shape - 1
- * else:
- */
- goto __pyx_L15;
- }
-
- /* "View.MemoryView":857
- * start = shape - 1
- * else:
- * start = 0 # <<<<<<<<<<<<<<
- *
- * if have_stop:
- */
- /*else*/ {
- __pyx_v_start = 0;
- }
- __pyx_L15:;
- }
- __pyx_L11:;
-
- /* "View.MemoryView":859
- * start = 0
- *
- * if have_stop: # <<<<<<<<<<<<<<
- * if stop < 0:
- * stop += shape
- */
- __pyx_t_2 = (__pyx_v_have_stop != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":860
- *
- * if have_stop:
- * if stop < 0: # <<<<<<<<<<<<<<
- * stop += shape
- * if stop < 0:
- */
- __pyx_t_2 = ((__pyx_v_stop < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":861
- * if have_stop:
- * if stop < 0:
- * stop += shape # <<<<<<<<<<<<<<
- * if stop < 0:
- * stop = 0
- */
- __pyx_v_stop = (__pyx_v_stop + __pyx_v_shape);
-
- /* "View.MemoryView":862
- * if stop < 0:
- * stop += shape
- * if stop < 0: # <<<<<<<<<<<<<<
- * stop = 0
- * elif stop > shape:
- */
- __pyx_t_2 = ((__pyx_v_stop < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":863
- * stop += shape
- * if stop < 0:
- * stop = 0 # <<<<<<<<<<<<<<
- * elif stop > shape:
- * stop = shape
- */
- __pyx_v_stop = 0;
-
- /* "View.MemoryView":862
- * if stop < 0:
- * stop += shape
- * if stop < 0: # <<<<<<<<<<<<<<
- * stop = 0
- * elif stop > shape:
- */
- }
-
- /* "View.MemoryView":860
- *
- * if have_stop:
- * if stop < 0: # <<<<<<<<<<<<<<
- * stop += shape
- * if stop < 0:
- */
- goto __pyx_L17;
- }
-
- /* "View.MemoryView":864
- * if stop < 0:
- * stop = 0
- * elif stop > shape: # <<<<<<<<<<<<<<
- * stop = shape
- * else:
- */
- __pyx_t_2 = ((__pyx_v_stop > __pyx_v_shape) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":865
- * stop = 0
- * elif stop > shape:
- * stop = shape # <<<<<<<<<<<<<<
- * else:
- * if negative_step:
- */
- __pyx_v_stop = __pyx_v_shape;
-
- /* "View.MemoryView":864
- * if stop < 0:
- * stop = 0
- * elif stop > shape: # <<<<<<<<<<<<<<
- * stop = shape
- * else:
- */
- }
- __pyx_L17:;
-
- /* "View.MemoryView":859
- * start = 0
- *
- * if have_stop: # <<<<<<<<<<<<<<
- * if stop < 0:
- * stop += shape
- */
- goto __pyx_L16;
- }
-
- /* "View.MemoryView":867
- * stop = shape
- * else:
- * if negative_step: # <<<<<<<<<<<<<<
- * stop = -1
- * else:
- */
- /*else*/ {
- __pyx_t_2 = (__pyx_v_negative_step != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":868
- * else:
- * if negative_step:
- * stop = -1 # <<<<<<<<<<<<<<
- * else:
- * stop = shape
- */
- __pyx_v_stop = -1L;
-
- /* "View.MemoryView":867
- * stop = shape
- * else:
- * if negative_step: # <<<<<<<<<<<<<<
- * stop = -1
- * else:
- */
- goto __pyx_L19;
- }
-
- /* "View.MemoryView":870
- * stop = -1
- * else:
- * stop = shape # <<<<<<<<<<<<<<
- *
- * if not have_step:
- */
- /*else*/ {
- __pyx_v_stop = __pyx_v_shape;
- }
- __pyx_L19:;
- }
- __pyx_L16:;
-
- /* "View.MemoryView":872
- * stop = shape
- *
- * if not have_step: # <<<<<<<<<<<<<<
- * step = 1
- *
- */
- __pyx_t_2 = ((!(__pyx_v_have_step != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":873
- *
- * if not have_step:
- * step = 1 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_step = 1;
-
- /* "View.MemoryView":872
- * stop = shape
- *
- * if not have_step: # <<<<<<<<<<<<<<
- * step = 1
- *
- */
- }
-
- /* "View.MemoryView":877
- *
- * with cython.cdivision(True):
- * new_shape = (stop - start) // step # <<<<<<<<<<<<<<
- *
- * if (stop - start) - step * new_shape:
- */
- __pyx_v_new_shape = ((__pyx_v_stop - __pyx_v_start) / __pyx_v_step);
-
- /* "View.MemoryView":879
- * new_shape = (stop - start) // step
- *
- * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<<
- * new_shape += 1
- *
- */
- __pyx_t_2 = (((__pyx_v_stop - __pyx_v_start) - (__pyx_v_step * __pyx_v_new_shape)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":880
- *
- * if (stop - start) - step * new_shape:
- * new_shape += 1 # <<<<<<<<<<<<<<
- *
- * if new_shape < 0:
- */
- __pyx_v_new_shape = (__pyx_v_new_shape + 1);
-
- /* "View.MemoryView":879
- * new_shape = (stop - start) // step
- *
- * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<<
- * new_shape += 1
- *
- */
- }
-
- /* "View.MemoryView":882
- * new_shape += 1
- *
- * if new_shape < 0: # <<<<<<<<<<<<<<
- * new_shape = 0
- *
- */
- __pyx_t_2 = ((__pyx_v_new_shape < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":883
- *
- * if new_shape < 0:
- * new_shape = 0 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_new_shape = 0;
-
- /* "View.MemoryView":882
- * new_shape += 1
- *
- * if new_shape < 0: # <<<<<<<<<<<<<<
- * new_shape = 0
- *
- */
- }
-
- /* "View.MemoryView":886
- *
- *
- * dst.strides[new_ndim] = stride * step # <<<<<<<<<<<<<<
- * dst.shape[new_ndim] = new_shape
- * dst.suboffsets[new_ndim] = suboffset
- */
- (__pyx_v_dst->strides[__pyx_v_new_ndim]) = (__pyx_v_stride * __pyx_v_step);
-
- /* "View.MemoryView":887
- *
- * dst.strides[new_ndim] = stride * step
- * dst.shape[new_ndim] = new_shape # <<<<<<<<<<<<<<
- * dst.suboffsets[new_ndim] = suboffset
- *
- */
- (__pyx_v_dst->shape[__pyx_v_new_ndim]) = __pyx_v_new_shape;
-
- /* "View.MemoryView":888
- * dst.strides[new_ndim] = stride * step
- * dst.shape[new_ndim] = new_shape
- * dst.suboffsets[new_ndim] = suboffset # <<<<<<<<<<<<<<
- *
- *
- */
- (__pyx_v_dst->suboffsets[__pyx_v_new_ndim]) = __pyx_v_suboffset;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":891
- *
- *
- * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<<
- * dst.data += start * stride
- * else:
- */
- __pyx_t_2 = (((__pyx_v_suboffset_dim[0]) < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":892
- *
- * if suboffset_dim[0] < 0:
- * dst.data += start * stride # <<<<<<<<<<<<<<
- * else:
- * dst.suboffsets[suboffset_dim[0]] += start * stride
- */
- __pyx_v_dst->data = (__pyx_v_dst->data + (__pyx_v_start * __pyx_v_stride));
-
- /* "View.MemoryView":891
- *
- *
- * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<<
- * dst.data += start * stride
- * else:
- */
- goto __pyx_L23;
- }
-
- /* "View.MemoryView":894
- * dst.data += start * stride
- * else:
- * dst.suboffsets[suboffset_dim[0]] += start * stride # <<<<<<<<<<<<<<
- *
- * if suboffset >= 0:
- */
- /*else*/ {
- __pyx_t_3 = (__pyx_v_suboffset_dim[0]);
- (__pyx_v_dst->suboffsets[__pyx_t_3]) = ((__pyx_v_dst->suboffsets[__pyx_t_3]) + (__pyx_v_start * __pyx_v_stride));
- }
- __pyx_L23:;
-
- /* "View.MemoryView":896
- * dst.suboffsets[suboffset_dim[0]] += start * stride
- *
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * if not is_slice:
- * if new_ndim == 0:
- */
- __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":897
- *
- * if suboffset >= 0:
- * if not is_slice: # <<<<<<<<<<<<<<
- * if new_ndim == 0:
- * dst.data = ( dst.data)[0] + suboffset
- */
- __pyx_t_2 = ((!(__pyx_v_is_slice != 0)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":898
- * if suboffset >= 0:
- * if not is_slice:
- * if new_ndim == 0: # <<<<<<<<<<<<<<
- * dst.data = ( dst.data)[0] + suboffset
- * else:
- */
- __pyx_t_2 = ((__pyx_v_new_ndim == 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":899
- * if not is_slice:
- * if new_ndim == 0:
- * dst.data = ( dst.data)[0] + suboffset # <<<<<<<<<<<<<<
- * else:
- * _err_dim(IndexError, "All dimensions preceding dimension %d "
- */
- __pyx_v_dst->data = ((((char **)__pyx_v_dst->data)[0]) + __pyx_v_suboffset);
-
- /* "View.MemoryView":898
- * if suboffset >= 0:
- * if not is_slice:
- * if new_ndim == 0: # <<<<<<<<<<<<<<
- * dst.data = ( dst.data)[0] + suboffset
- * else:
- */
- goto __pyx_L26;
- }
-
- /* "View.MemoryView":901
- * dst.data = ( dst.data)[0] + suboffset
- * else:
- * _err_dim(IndexError, "All dimensions preceding dimension %d " # <<<<<<<<<<<<<<
- * "must be indexed and not sliced", dim)
- * else:
- */
- /*else*/ {
-
- /* "View.MemoryView":902
- * else:
- * _err_dim(IndexError, "All dimensions preceding dimension %d "
- * "must be indexed and not sliced", dim) # <<<<<<<<<<<<<<
- * else:
- * suboffset_dim[0] = new_ndim
- */
- __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"All dimensions preceding dimension %d must be indexed and not sliced"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 901, __pyx_L1_error)
- }
- __pyx_L26:;
-
- /* "View.MemoryView":897
- *
- * if suboffset >= 0:
- * if not is_slice: # <<<<<<<<<<<<<<
- * if new_ndim == 0:
- * dst.data = ( dst.data)[0] + suboffset
- */
- goto __pyx_L25;
- }
-
- /* "View.MemoryView":904
- * "must be indexed and not sliced", dim)
- * else:
- * suboffset_dim[0] = new_ndim # <<<<<<<<<<<<<<
- *
- * return 0
- */
- /*else*/ {
- (__pyx_v_suboffset_dim[0]) = __pyx_v_new_ndim;
- }
- __pyx_L25:;
-
- /* "View.MemoryView":896
- * dst.suboffsets[suboffset_dim[0]] += start * stride
- *
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * if not is_slice:
- * if new_ndim == 0:
- */
- }
-
- /* "View.MemoryView":906
- * suboffset_dim[0] = new_ndim
- *
- * return 0 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":809
- *
- * @cname('__pyx_memoryview_slice_memviewslice')
- * cdef int slice_memviewslice( # <<<<<<<<<<<<<<
- * __Pyx_memviewslice *dst,
- * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset,
- */
-
- /* function exit code */
- __pyx_L1_error:;
- {
- #ifdef WITH_THREAD
- PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure();
- #endif
- __Pyx_AddTraceback("View.MemoryView.slice_memviewslice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- #ifdef WITH_THREAD
- __Pyx_PyGILState_Release(__pyx_gilstate_save);
- #endif
- }
- __pyx_r = -1;
- __pyx_L0:;
- return __pyx_r;
-}
-
-/* "View.MemoryView":912
- *
- * @cname('__pyx_pybuffer_index')
- * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<<
- * Py_ssize_t dim) except NULL:
- * cdef Py_ssize_t shape, stride, suboffset = -1
- */
-
-static char *__pyx_pybuffer_index(Py_buffer *__pyx_v_view, char *__pyx_v_bufp, Py_ssize_t __pyx_v_index, Py_ssize_t __pyx_v_dim) {
- Py_ssize_t __pyx_v_shape;
- Py_ssize_t __pyx_v_stride;
- Py_ssize_t __pyx_v_suboffset;
- Py_ssize_t __pyx_v_itemsize;
- char *__pyx_v_resultp;
- char *__pyx_r;
- __Pyx_RefNannyDeclarations
- Py_ssize_t __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("pybuffer_index", 0);
-
- /* "View.MemoryView":914
- * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index,
- * Py_ssize_t dim) except NULL:
- * cdef Py_ssize_t shape, stride, suboffset = -1 # <<<<<<<<<<<<<<
- * cdef Py_ssize_t itemsize = view.itemsize
- * cdef char *resultp
- */
- __pyx_v_suboffset = -1L;
-
- /* "View.MemoryView":915
- * Py_ssize_t dim) except NULL:
- * cdef Py_ssize_t shape, stride, suboffset = -1
- * cdef Py_ssize_t itemsize = view.itemsize # <<<<<<<<<<<<<<
- * cdef char *resultp
- *
- */
- __pyx_t_1 = __pyx_v_view->itemsize;
- __pyx_v_itemsize = __pyx_t_1;
-
- /* "View.MemoryView":918
- * cdef char *resultp
- *
- * if view.ndim == 0: # <<<<<<<<<<<<<<
- * shape = view.len / itemsize
- * stride = itemsize
- */
- __pyx_t_2 = ((__pyx_v_view->ndim == 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":919
- *
- * if view.ndim == 0:
- * shape = view.len / itemsize # <<<<<<<<<<<<<<
- * stride = itemsize
- * else:
- */
- if (unlikely(__pyx_v_itemsize == 0)) {
- PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero");
- __PYX_ERR(1, 919, __pyx_L1_error)
- }
- else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_view->len))) {
- PyErr_SetString(PyExc_OverflowError, "value too large to perform division");
- __PYX_ERR(1, 919, __pyx_L1_error)
- }
- __pyx_v_shape = __Pyx_div_Py_ssize_t(__pyx_v_view->len, __pyx_v_itemsize);
-
- /* "View.MemoryView":920
- * if view.ndim == 0:
- * shape = view.len / itemsize
- * stride = itemsize # <<<<<<<<<<<<<<
- * else:
- * shape = view.shape[dim]
- */
- __pyx_v_stride = __pyx_v_itemsize;
-
- /* "View.MemoryView":918
- * cdef char *resultp
- *
- * if view.ndim == 0: # <<<<<<<<<<<<<<
- * shape = view.len / itemsize
- * stride = itemsize
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":922
- * stride = itemsize
- * else:
- * shape = view.shape[dim] # <<<<<<<<<<<<<<
- * stride = view.strides[dim]
- * if view.suboffsets != NULL:
- */
- /*else*/ {
- __pyx_v_shape = (__pyx_v_view->shape[__pyx_v_dim]);
-
- /* "View.MemoryView":923
- * else:
- * shape = view.shape[dim]
- * stride = view.strides[dim] # <<<<<<<<<<<<<<
- * if view.suboffsets != NULL:
- * suboffset = view.suboffsets[dim]
- */
- __pyx_v_stride = (__pyx_v_view->strides[__pyx_v_dim]);
-
- /* "View.MemoryView":924
- * shape = view.shape[dim]
- * stride = view.strides[dim]
- * if view.suboffsets != NULL: # <<<<<<<<<<<<<<
- * suboffset = view.suboffsets[dim]
- *
- */
- __pyx_t_2 = ((__pyx_v_view->suboffsets != NULL) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":925
- * stride = view.strides[dim]
- * if view.suboffsets != NULL:
- * suboffset = view.suboffsets[dim] # <<<<<<<<<<<<<<
- *
- * if index < 0:
- */
- __pyx_v_suboffset = (__pyx_v_view->suboffsets[__pyx_v_dim]);
-
- /* "View.MemoryView":924
- * shape = view.shape[dim]
- * stride = view.strides[dim]
- * if view.suboffsets != NULL: # <<<<<<<<<<<<<<
- * suboffset = view.suboffsets[dim]
- *
- */
- }
- }
- __pyx_L3:;
-
- /* "View.MemoryView":927
- * suboffset = view.suboffsets[dim]
- *
- * if index < 0: # <<<<<<<<<<<<<<
- * index += view.shape[dim]
- * if index < 0:
- */
- __pyx_t_2 = ((__pyx_v_index < 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":928
- *
- * if index < 0:
- * index += view.shape[dim] # <<<<<<<<<<<<<<
- * if index < 0:
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- */
- __pyx_v_index = (__pyx_v_index + (__pyx_v_view->shape[__pyx_v_dim]));
-
- /* "View.MemoryView":929
- * if index < 0:
- * index += view.shape[dim]
- * if index < 0: # <<<<<<<<<<<<<<
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- */
- __pyx_t_2 = ((__pyx_v_index < 0) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":930
- * index += view.shape[dim]
- * if index < 0:
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<<
- *
- * if index >= shape:
- */
- __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 930, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 930, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 930, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 930, __pyx_L1_error)
-
- /* "View.MemoryView":929
- * if index < 0:
- * index += view.shape[dim]
- * if index < 0: # <<<<<<<<<<<<<<
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- */
- }
-
- /* "View.MemoryView":927
- * suboffset = view.suboffsets[dim]
- *
- * if index < 0: # <<<<<<<<<<<<<<
- * index += view.shape[dim]
- * if index < 0:
- */
- }
-
- /* "View.MemoryView":932
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- * if index >= shape: # <<<<<<<<<<<<<<
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- */
- __pyx_t_2 = ((__pyx_v_index >= __pyx_v_shape) != 0);
- if (unlikely(__pyx_t_2)) {
-
- /* "View.MemoryView":933
- *
- * if index >= shape:
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<<
- *
- * resultp = bufp + index * stride
- */
- __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 933, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 933, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 933, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_Raise(__pyx_t_3, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __PYX_ERR(1, 933, __pyx_L1_error)
-
- /* "View.MemoryView":932
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- * if index >= shape: # <<<<<<<<<<<<<<
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- */
- }
-
- /* "View.MemoryView":935
- * raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
- *
- * resultp = bufp + index * stride # <<<<<<<<<<<<<<
- * if suboffset >= 0:
- * resultp = ( resultp)[0] + suboffset
- */
- __pyx_v_resultp = (__pyx_v_bufp + (__pyx_v_index * __pyx_v_stride));
-
- /* "View.MemoryView":936
- *
- * resultp = bufp + index * stride
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * resultp = ( resultp)[0] + suboffset
- *
- */
- __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":937
- * resultp = bufp + index * stride
- * if suboffset >= 0:
- * resultp = ( resultp)[0] + suboffset # <<<<<<<<<<<<<<
- *
- * return resultp
- */
- __pyx_v_resultp = ((((char **)__pyx_v_resultp)[0]) + __pyx_v_suboffset);
-
- /* "View.MemoryView":936
- *
- * resultp = bufp + index * stride
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * resultp = ( resultp)[0] + suboffset
- *
- */
- }
-
- /* "View.MemoryView":939
- * resultp = ( resultp)[0] + suboffset
- *
- * return resultp # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = __pyx_v_resultp;
- goto __pyx_L0;
-
- /* "View.MemoryView":912
- *
- * @cname('__pyx_pybuffer_index')
- * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<<
- * Py_ssize_t dim) except NULL:
- * cdef Py_ssize_t shape, stride, suboffset = -1
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_AddTraceback("View.MemoryView.pybuffer_index", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":945
- *
- * @cname('__pyx_memslice_transpose')
- * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<<
- * cdef int ndim = memslice.memview.view.ndim
- *
- */
-
-static int __pyx_memslice_transpose(__Pyx_memviewslice *__pyx_v_memslice) {
- int __pyx_v_ndim;
- Py_ssize_t *__pyx_v_shape;
- Py_ssize_t *__pyx_v_strides;
- int __pyx_v_i;
- int __pyx_v_j;
- int __pyx_r;
- int __pyx_t_1;
- Py_ssize_t *__pyx_t_2;
- long __pyx_t_3;
- long __pyx_t_4;
- Py_ssize_t __pyx_t_5;
- Py_ssize_t __pyx_t_6;
- int __pyx_t_7;
- int __pyx_t_8;
- int __pyx_t_9;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
-
- /* "View.MemoryView":946
- * @cname('__pyx_memslice_transpose')
- * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0:
- * cdef int ndim = memslice.memview.view.ndim # <<<<<<<<<<<<<<
- *
- * cdef Py_ssize_t *shape = memslice.shape
- */
- __pyx_t_1 = __pyx_v_memslice->memview->view.ndim;
- __pyx_v_ndim = __pyx_t_1;
-
- /* "View.MemoryView":948
- * cdef int ndim = memslice.memview.view.ndim
- *
- * cdef Py_ssize_t *shape = memslice.shape # <<<<<<<<<<<<<<
- * cdef Py_ssize_t *strides = memslice.strides
- *
- */
- __pyx_t_2 = __pyx_v_memslice->shape;
- __pyx_v_shape = __pyx_t_2;
-
- /* "View.MemoryView":949
- *
- * cdef Py_ssize_t *shape = memslice.shape
- * cdef Py_ssize_t *strides = memslice.strides # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_2 = __pyx_v_memslice->strides;
- __pyx_v_strides = __pyx_t_2;
-
- /* "View.MemoryView":953
- *
- * cdef int i, j
- * for i in range(ndim / 2): # <<<<<<<<<<<<<<
- * j = ndim - 1 - i
- * strides[i], strides[j] = strides[j], strides[i]
- */
- __pyx_t_3 = __Pyx_div_long(__pyx_v_ndim, 2);
- __pyx_t_4 = __pyx_t_3;
- for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_4; __pyx_t_1+=1) {
- __pyx_v_i = __pyx_t_1;
-
- /* "View.MemoryView":954
- * cdef int i, j
- * for i in range(ndim / 2):
- * j = ndim - 1 - i # <<<<<<<<<<<<<<
- * strides[i], strides[j] = strides[j], strides[i]
- * shape[i], shape[j] = shape[j], shape[i]
- */
- __pyx_v_j = ((__pyx_v_ndim - 1) - __pyx_v_i);
-
- /* "View.MemoryView":955
- * for i in range(ndim / 2):
- * j = ndim - 1 - i
- * strides[i], strides[j] = strides[j], strides[i] # <<<<<<<<<<<<<<
- * shape[i], shape[j] = shape[j], shape[i]
- *
- */
- __pyx_t_5 = (__pyx_v_strides[__pyx_v_j]);
- __pyx_t_6 = (__pyx_v_strides[__pyx_v_i]);
- (__pyx_v_strides[__pyx_v_i]) = __pyx_t_5;
- (__pyx_v_strides[__pyx_v_j]) = __pyx_t_6;
-
- /* "View.MemoryView":956
- * j = ndim - 1 - i
- * strides[i], strides[j] = strides[j], strides[i]
- * shape[i], shape[j] = shape[j], shape[i] # <<<<<<<<<<<<<<
- *
- * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0:
- */
- __pyx_t_6 = (__pyx_v_shape[__pyx_v_j]);
- __pyx_t_5 = (__pyx_v_shape[__pyx_v_i]);
- (__pyx_v_shape[__pyx_v_i]) = __pyx_t_6;
- (__pyx_v_shape[__pyx_v_j]) = __pyx_t_5;
-
- /* "View.MemoryView":958
- * shape[i], shape[j] = shape[j], shape[i]
- *
- * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<<
- * _err(ValueError, "Cannot transpose memoryview with indirect dimensions")
- *
- */
- __pyx_t_8 = (((__pyx_v_memslice->suboffsets[__pyx_v_i]) >= 0) != 0);
- if (!__pyx_t_8) {
- } else {
- __pyx_t_7 = __pyx_t_8;
- goto __pyx_L6_bool_binop_done;
- }
- __pyx_t_8 = (((__pyx_v_memslice->suboffsets[__pyx_v_j]) >= 0) != 0);
- __pyx_t_7 = __pyx_t_8;
- __pyx_L6_bool_binop_done:;
- if (__pyx_t_7) {
-
- /* "View.MemoryView":959
- *
- * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0:
- * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") # <<<<<<<<<<<<<<
- *
- * return 1
- */
- __pyx_t_9 = __pyx_memoryview_err(__pyx_builtin_ValueError, ((char *)"Cannot transpose memoryview with indirect dimensions")); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 959, __pyx_L1_error)
-
- /* "View.MemoryView":958
- * shape[i], shape[j] = shape[j], shape[i]
- *
- * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<<
- * _err(ValueError, "Cannot transpose memoryview with indirect dimensions")
- *
- */
- }
- }
-
- /* "View.MemoryView":961
- * _err(ValueError, "Cannot transpose memoryview with indirect dimensions")
- *
- * return 1 # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_r = 1;
- goto __pyx_L0;
-
- /* "View.MemoryView":945
- *
- * @cname('__pyx_memslice_transpose')
- * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<<
- * cdef int ndim = memslice.memview.view.ndim
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- {
- #ifdef WITH_THREAD
- PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure();
- #endif
- __Pyx_AddTraceback("View.MemoryView.transpose_memslice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- #ifdef WITH_THREAD
- __Pyx_PyGILState_Release(__pyx_gilstate_save);
- #endif
- }
- __pyx_r = 0;
- __pyx_L0:;
- return __pyx_r;
-}
-
-/* "View.MemoryView":978
- * cdef int (*to_dtype_func)(char *, object) except 0
- *
- * def __dealloc__(self): # <<<<<<<<<<<<<<
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1)
- *
- */
-
-/* Python wrapper */
-static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self); /*proto*/
-static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0);
- __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__dealloc__", 0);
-
- /* "View.MemoryView":979
- *
- * def __dealloc__(self):
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) # <<<<<<<<<<<<<<
- *
- * cdef convert_item_to_object(self, char *itemp):
- */
- __PYX_XDEC_MEMVIEW((&__pyx_v_self->from_slice), 1);
-
- /* "View.MemoryView":978
- * cdef int (*to_dtype_func)(char *, object) except 0
- *
- * def __dealloc__(self): # <<<<<<<<<<<<<<
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1)
- *
- */
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-/* "View.MemoryView":981
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1)
- *
- * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<<
- * if self.to_object_func != NULL:
- * return self.to_object_func(itemp)
- */
-
-static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("convert_item_to_object", 0);
-
- /* "View.MemoryView":982
- *
- * cdef convert_item_to_object(self, char *itemp):
- * if self.to_object_func != NULL: # <<<<<<<<<<<<<<
- * return self.to_object_func(itemp)
- * else:
- */
- __pyx_t_1 = ((__pyx_v_self->to_object_func != NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":983
- * cdef convert_item_to_object(self, char *itemp):
- * if self.to_object_func != NULL:
- * return self.to_object_func(itemp) # <<<<<<<<<<<<<<
- * else:
- * return memoryview.convert_item_to_object(self, itemp)
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __pyx_v_self->to_object_func(__pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 983, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":982
- *
- * cdef convert_item_to_object(self, char *itemp):
- * if self.to_object_func != NULL: # <<<<<<<<<<<<<<
- * return self.to_object_func(itemp)
- * else:
- */
- }
-
- /* "View.MemoryView":985
- * return self.to_object_func(itemp)
- * else:
- * return memoryview.convert_item_to_object(self, itemp) # <<<<<<<<<<<<<<
- *
- * cdef assign_item_from_object(self, char *itemp, object value):
- */
- /*else*/ {
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __pyx_memoryview_convert_item_to_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 985, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":981
- * __PYX_XDEC_MEMVIEW(&self.from_slice, 1)
- *
- * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<<
- * if self.to_object_func != NULL:
- * return self.to_object_func(itemp)
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView._memoryviewslice.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":987
- * return memoryview.convert_item_to_object(self, itemp)
- *
- * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<<
- * if self.to_dtype_func != NULL:
- * self.to_dtype_func(itemp, value)
- */
-
-static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("assign_item_from_object", 0);
-
- /* "View.MemoryView":988
- *
- * cdef assign_item_from_object(self, char *itemp, object value):
- * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<<
- * self.to_dtype_func(itemp, value)
- * else:
- */
- __pyx_t_1 = ((__pyx_v_self->to_dtype_func != NULL) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":989
- * cdef assign_item_from_object(self, char *itemp, object value):
- * if self.to_dtype_func != NULL:
- * self.to_dtype_func(itemp, value) # <<<<<<<<<<<<<<
- * else:
- * memoryview.assign_item_from_object(self, itemp, value)
- */
- __pyx_t_2 = __pyx_v_self->to_dtype_func(__pyx_v_itemp, __pyx_v_value); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 989, __pyx_L1_error)
-
- /* "View.MemoryView":988
- *
- * cdef assign_item_from_object(self, char *itemp, object value):
- * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<<
- * self.to_dtype_func(itemp, value)
- * else:
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":991
- * self.to_dtype_func(itemp, value)
- * else:
- * memoryview.assign_item_from_object(self, itemp, value) # <<<<<<<<<<<<<<
- *
- * @property
- */
- /*else*/ {
- __pyx_t_3 = __pyx_memoryview_assign_item_from_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 991, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":987
- * return memoryview.convert_item_to_object(self, itemp)
- *
- * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<<
- * if self.to_dtype_func != NULL:
- * self.to_dtype_func(itemp, value)
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView._memoryviewslice.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":994
- *
- * @property
- * def base(self): # <<<<<<<<<<<<<<
- * return self.from_object
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
- __pyx_r = __pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__get__", 0);
-
- /* "View.MemoryView":995
- * @property
- * def base(self):
- * return self.from_object # <<<<<<<<<<<<<<
- *
- * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)")
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_self->from_object);
- __pyx_r = __pyx_v_self->from_object;
- goto __pyx_L0;
-
- /* "View.MemoryView":994
- *
- * @property
- * def base(self): # <<<<<<<<<<<<<<
- * return self.from_object
- *
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_memoryviewslice___reduce_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__reduce_cython__", 0);
-
- /* "(tree fragment)":2
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 2, __pyx_L1_error)
-
- /* "(tree fragment)":1
- * def __reduce_cython__(self): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
-static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
- __pyx_r = __pyx_pf___pyx_memoryviewslice_2__setstate_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__setstate_cython__", 0);
-
- /* "(tree fragment)":4
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
- */
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__19, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_Raise(__pyx_t_1, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __PYX_ERR(1, 4, __pyx_L1_error)
-
- /* "(tree fragment)":3
- * def __reduce_cython__(self):
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
- * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":1001
- *
- * @cname('__pyx_memoryview_fromslice')
- * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<<
- * int ndim,
- * object (*to_object_func)(char *),
- */
-
-static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice __pyx_v_memviewslice, int __pyx_v_ndim, PyObject *(*__pyx_v_to_object_func)(char *), int (*__pyx_v_to_dtype_func)(char *, PyObject *), int __pyx_v_dtype_is_object) {
- struct __pyx_memoryviewslice_obj *__pyx_v_result = 0;
- Py_ssize_t __pyx_v_suboffset;
- PyObject *__pyx_v_length = NULL;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- __Pyx_TypeInfo *__pyx_t_4;
- Py_buffer __pyx_t_5;
- Py_ssize_t *__pyx_t_6;
- Py_ssize_t *__pyx_t_7;
- Py_ssize_t *__pyx_t_8;
- Py_ssize_t __pyx_t_9;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("memoryview_fromslice", 0);
-
- /* "View.MemoryView":1009
- * cdef _memoryviewslice result
- *
- * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<<
- * return None
- *
- */
- __pyx_t_1 = ((((PyObject *)__pyx_v_memviewslice.memview) == Py_None) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1010
- *
- * if memviewslice.memview == Py_None:
- * return None # <<<<<<<<<<<<<<
- *
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
-
- /* "View.MemoryView":1009
- * cdef _memoryviewslice result
- *
- * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<<
- * return None
- *
- */
- }
-
- /* "View.MemoryView":1015
- *
- *
- * result = _memoryviewslice(None, 0, dtype_is_object) # <<<<<<<<<<<<<<
- *
- * result.from_slice = memviewslice
- */
- __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1015, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1015, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_INCREF(Py_None);
- __Pyx_GIVEREF(Py_None);
- PyTuple_SET_ITEM(__pyx_t_3, 0, Py_None);
- __Pyx_INCREF(__pyx_int_0);
- __Pyx_GIVEREF(__pyx_int_0);
- PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0);
- __Pyx_GIVEREF(__pyx_t_2);
- PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2);
- __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryviewslice_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1015, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2);
- __pyx_t_2 = 0;
-
- /* "View.MemoryView":1017
- * result = _memoryviewslice(None, 0, dtype_is_object)
- *
- * result.from_slice = memviewslice # <<<<<<<<<<<<<<
- * __PYX_INC_MEMVIEW(&memviewslice, 1)
- *
- */
- __pyx_v_result->from_slice = __pyx_v_memviewslice;
-
- /* "View.MemoryView":1018
- *
- * result.from_slice = memviewslice
- * __PYX_INC_MEMVIEW(&memviewslice, 1) # <<<<<<<<<<<<<<
- *
- * result.from_object = ( memviewslice.memview).base
- */
- __PYX_INC_MEMVIEW((&__pyx_v_memviewslice), 1);
-
- /* "View.MemoryView":1020
- * __PYX_INC_MEMVIEW(&memviewslice, 1)
- *
- * result.from_object = ( memviewslice.memview).base # <<<<<<<<<<<<<<
- * result.typeinfo = memviewslice.memview.typeinfo
- *
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_memviewslice.memview), __pyx_n_s_base); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1020, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_GIVEREF(__pyx_t_2);
- __Pyx_GOTREF(__pyx_v_result->from_object);
- __Pyx_DECREF(__pyx_v_result->from_object);
- __pyx_v_result->from_object = __pyx_t_2;
- __pyx_t_2 = 0;
-
- /* "View.MemoryView":1021
- *
- * result.from_object = ( memviewslice.memview).base
- * result.typeinfo = memviewslice.memview.typeinfo # <<<<<<<<<<<<<<
- *
- * result.view = memviewslice.memview.view
- */
- __pyx_t_4 = __pyx_v_memviewslice.memview->typeinfo;
- __pyx_v_result->__pyx_base.typeinfo = __pyx_t_4;
-
- /* "View.MemoryView":1023
- * result.typeinfo = memviewslice.memview.typeinfo
- *
- * result.view = memviewslice.memview.view # <<<<<<<<<<<<<<
- * result.view.buf = memviewslice.data
- * result.view.ndim = ndim
- */
- __pyx_t_5 = __pyx_v_memviewslice.memview->view;
- __pyx_v_result->__pyx_base.view = __pyx_t_5;
-
- /* "View.MemoryView":1024
- *
- * result.view = memviewslice.memview.view
- * result.view.buf = memviewslice.data # <<<<<<<<<<<<<<
- * result.view.ndim = ndim
- * (<__pyx_buffer *> &result.view).obj = Py_None
- */
- __pyx_v_result->__pyx_base.view.buf = ((void *)__pyx_v_memviewslice.data);
-
- /* "View.MemoryView":1025
- * result.view = memviewslice.memview.view
- * result.view.buf = memviewslice.data
- * result.view.ndim = ndim # <<<<<<<<<<<<<<
- * (<__pyx_buffer *> &result.view).obj = Py_None
- * Py_INCREF(Py_None)
- */
- __pyx_v_result->__pyx_base.view.ndim = __pyx_v_ndim;
-
- /* "View.MemoryView":1026
- * result.view.buf = memviewslice.data
- * result.view.ndim = ndim
- * (<__pyx_buffer *> &result.view).obj = Py_None # <<<<<<<<<<<<<<
- * Py_INCREF(Py_None)
- *
- */
- ((Py_buffer *)(&__pyx_v_result->__pyx_base.view))->obj = Py_None;
-
- /* "View.MemoryView":1027
- * result.view.ndim = ndim
- * (<__pyx_buffer *> &result.view).obj = Py_None
- * Py_INCREF(Py_None) # <<<<<<<<<<<<<<
- *
- * if (memviewslice.memview).flags & PyBUF_WRITABLE:
- */
- Py_INCREF(Py_None);
-
- /* "View.MemoryView":1029
- * Py_INCREF(Py_None)
- *
- * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<<
- * result.flags = PyBUF_RECORDS
- * else:
- */
- __pyx_t_1 = ((((struct __pyx_memoryview_obj *)__pyx_v_memviewslice.memview)->flags & PyBUF_WRITABLE) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1030
- *
- * if (memviewslice.memview).flags & PyBUF_WRITABLE:
- * result.flags = PyBUF_RECORDS # <<<<<<<<<<<<<<
- * else:
- * result.flags = PyBUF_RECORDS_RO
- */
- __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS;
-
- /* "View.MemoryView":1029
- * Py_INCREF(Py_None)
- *
- * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<<
- * result.flags = PyBUF_RECORDS
- * else:
- */
- goto __pyx_L4;
- }
-
- /* "View.MemoryView":1032
- * result.flags = PyBUF_RECORDS
- * else:
- * result.flags = PyBUF_RECORDS_RO # <<<<<<<<<<<<<<
- *
- * result.view.shape = result.from_slice.shape
- */
- /*else*/ {
- __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS_RO;
- }
- __pyx_L4:;
-
- /* "View.MemoryView":1034
- * result.flags = PyBUF_RECORDS_RO
- *
- * result.view.shape = result.from_slice.shape # <<<<<<<<<<<<<<
- * result.view.strides = result.from_slice.strides
- *
- */
- __pyx_v_result->__pyx_base.view.shape = ((Py_ssize_t *)__pyx_v_result->from_slice.shape);
-
- /* "View.MemoryView":1035
- *
- * result.view.shape = result.from_slice.shape
- * result.view.strides = result.from_slice.strides # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_v_result->__pyx_base.view.strides = ((Py_ssize_t *)__pyx_v_result->from_slice.strides);
-
- /* "View.MemoryView":1038
- *
- *
- * result.view.suboffsets = NULL # <<<<<<<<<<<<<<
- * for suboffset in result.from_slice.suboffsets[:ndim]:
- * if suboffset >= 0:
- */
- __pyx_v_result->__pyx_base.view.suboffsets = NULL;
-
- /* "View.MemoryView":1039
- *
- * result.view.suboffsets = NULL
- * for suboffset in result.from_slice.suboffsets[:ndim]: # <<<<<<<<<<<<<<
- * if suboffset >= 0:
- * result.view.suboffsets = result.from_slice.suboffsets
- */
- __pyx_t_7 = (__pyx_v_result->from_slice.suboffsets + __pyx_v_ndim);
- for (__pyx_t_8 = __pyx_v_result->from_slice.suboffsets; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) {
- __pyx_t_6 = __pyx_t_8;
- __pyx_v_suboffset = (__pyx_t_6[0]);
-
- /* "View.MemoryView":1040
- * result.view.suboffsets = NULL
- * for suboffset in result.from_slice.suboffsets[:ndim]:
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * result.view.suboffsets = result.from_slice.suboffsets
- * break
- */
- __pyx_t_1 = ((__pyx_v_suboffset >= 0) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1041
- * for suboffset in result.from_slice.suboffsets[:ndim]:
- * if suboffset >= 0:
- * result.view.suboffsets = result.from_slice.suboffsets # <<<<<<<<<<<<<<
- * break
- *
- */
- __pyx_v_result->__pyx_base.view.suboffsets = ((Py_ssize_t *)__pyx_v_result->from_slice.suboffsets);
-
- /* "View.MemoryView":1042
- * if suboffset >= 0:
- * result.view.suboffsets = result.from_slice.suboffsets
- * break # <<<<<<<<<<<<<<
- *
- * result.view.len = result.view.itemsize
- */
- goto __pyx_L6_break;
-
- /* "View.MemoryView":1040
- * result.view.suboffsets = NULL
- * for suboffset in result.from_slice.suboffsets[:ndim]:
- * if suboffset >= 0: # <<<<<<<<<<<<<<
- * result.view.suboffsets = result.from_slice.suboffsets
- * break
- */
- }
- }
- __pyx_L6_break:;
-
- /* "View.MemoryView":1044
- * break
- *
- * result.view.len = result.view.itemsize # <<<<<<<<<<<<<<
- * for length in result.view.shape[:ndim]:
- * result.view.len *= length
- */
- __pyx_t_9 = __pyx_v_result->__pyx_base.view.itemsize;
- __pyx_v_result->__pyx_base.view.len = __pyx_t_9;
-
- /* "View.MemoryView":1045
- *
- * result.view.len = result.view.itemsize
- * for length in result.view.shape[:ndim]: # <<<<<<<<<<<<<<
- * result.view.len *= length
- *
- */
- __pyx_t_7 = (__pyx_v_result->__pyx_base.view.shape + __pyx_v_ndim);
- for (__pyx_t_8 = __pyx_v_result->__pyx_base.view.shape; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) {
- __pyx_t_6 = __pyx_t_8;
- __pyx_t_2 = PyInt_FromSsize_t((__pyx_t_6[0])); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1045, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_2);
- __pyx_t_2 = 0;
-
- /* "View.MemoryView":1046
- * result.view.len = result.view.itemsize
- * for length in result.view.shape[:ndim]:
- * result.view.len *= length # <<<<<<<<<<<<<<
- *
- * result.to_object_func = to_object_func
- */
- __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_result->__pyx_base.view.len); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1046, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyNumber_InPlaceMultiply(__pyx_t_2, __pyx_v_length); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1046, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_3); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 1046, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_result->__pyx_base.view.len = __pyx_t_9;
- }
-
- /* "View.MemoryView":1048
- * result.view.len *= length
- *
- * result.to_object_func = to_object_func # <<<<<<<<<<<<<<
- * result.to_dtype_func = to_dtype_func
- *
- */
- __pyx_v_result->to_object_func = __pyx_v_to_object_func;
-
- /* "View.MemoryView":1049
- *
- * result.to_object_func = to_object_func
- * result.to_dtype_func = to_dtype_func # <<<<<<<<<<<<<<
- *
- * return result
- */
- __pyx_v_result->to_dtype_func = __pyx_v_to_dtype_func;
-
- /* "View.MemoryView":1051
- * result.to_dtype_func = to_dtype_func
- *
- * return result # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_get_slice_from_memoryview')
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(((PyObject *)__pyx_v_result));
- __pyx_r = ((PyObject *)__pyx_v_result);
- goto __pyx_L0;
-
- /* "View.MemoryView":1001
- *
- * @cname('__pyx_memoryview_fromslice')
- * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<<
- * int ndim,
- * object (*to_object_func)(char *),
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.memoryview_fromslice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_result);
- __Pyx_XDECREF(__pyx_v_length);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":1054
- *
- * @cname('__pyx_memoryview_get_slice_from_memoryview')
- * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<<
- * __Pyx_memviewslice *mslice) except NULL:
- * cdef _memoryviewslice obj
- */
-
-static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_mslice) {
- struct __pyx_memoryviewslice_obj *__pyx_v_obj = 0;
- __Pyx_memviewslice *__pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("get_slice_from_memview", 0);
-
- /* "View.MemoryView":1057
- * __Pyx_memviewslice *mslice) except NULL:
- * cdef _memoryviewslice obj
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * obj = memview
- * return &obj.from_slice
- */
- __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1058
- * cdef _memoryviewslice obj
- * if isinstance(memview, _memoryviewslice):
- * obj = memview # <<<<<<<<<<<<<<
- * return &obj.from_slice
- * else:
- */
- if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 1058, __pyx_L1_error)
- __pyx_t_3 = ((PyObject *)__pyx_v_memview);
- __Pyx_INCREF(__pyx_t_3);
- __pyx_v_obj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3);
- __pyx_t_3 = 0;
-
- /* "View.MemoryView":1059
- * if isinstance(memview, _memoryviewslice):
- * obj = memview
- * return &obj.from_slice # <<<<<<<<<<<<<<
- * else:
- * slice_copy(memview, mslice)
- */
- __pyx_r = (&__pyx_v_obj->from_slice);
- goto __pyx_L0;
-
- /* "View.MemoryView":1057
- * __Pyx_memviewslice *mslice) except NULL:
- * cdef _memoryviewslice obj
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * obj = memview
- * return &obj.from_slice
- */
- }
-
- /* "View.MemoryView":1061
- * return &obj.from_slice
- * else:
- * slice_copy(memview, mslice) # <<<<<<<<<<<<<<
- * return mslice
- *
- */
- /*else*/ {
- __pyx_memoryview_slice_copy(__pyx_v_memview, __pyx_v_mslice);
-
- /* "View.MemoryView":1062
- * else:
- * slice_copy(memview, mslice)
- * return mslice # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_slice_copy')
- */
- __pyx_r = __pyx_v_mslice;
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":1054
- *
- * @cname('__pyx_memoryview_get_slice_from_memoryview')
- * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<<
- * __Pyx_memviewslice *mslice) except NULL:
- * cdef _memoryviewslice obj
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("View.MemoryView.get_slice_from_memview", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF((PyObject *)__pyx_v_obj);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":1065
- *
- * @cname('__pyx_memoryview_slice_copy')
- * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<<
- * cdef int dim
- * cdef (Py_ssize_t*) shape, strides, suboffsets
- */
-
-static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_dst) {
- int __pyx_v_dim;
- Py_ssize_t *__pyx_v_shape;
- Py_ssize_t *__pyx_v_strides;
- Py_ssize_t *__pyx_v_suboffsets;
- __Pyx_RefNannyDeclarations
- Py_ssize_t *__pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_t_4;
- Py_ssize_t __pyx_t_5;
- __Pyx_RefNannySetupContext("slice_copy", 0);
-
- /* "View.MemoryView":1069
- * cdef (Py_ssize_t*) shape, strides, suboffsets
- *
- * shape = memview.view.shape # <<<<<<<<<<<<<<
- * strides = memview.view.strides
- * suboffsets = memview.view.suboffsets
- */
- __pyx_t_1 = __pyx_v_memview->view.shape;
- __pyx_v_shape = __pyx_t_1;
-
- /* "View.MemoryView":1070
- *
- * shape = memview.view.shape
- * strides = memview.view.strides # <<<<<<<<<<<<<<
- * suboffsets = memview.view.suboffsets
- *
- */
- __pyx_t_1 = __pyx_v_memview->view.strides;
- __pyx_v_strides = __pyx_t_1;
-
- /* "View.MemoryView":1071
- * shape = memview.view.shape
- * strides = memview.view.strides
- * suboffsets = memview.view.suboffsets # <<<<<<<<<<<<<<
- *
- * dst.memview = <__pyx_memoryview *> memview
- */
- __pyx_t_1 = __pyx_v_memview->view.suboffsets;
- __pyx_v_suboffsets = __pyx_t_1;
-
- /* "View.MemoryView":1073
- * suboffsets = memview.view.suboffsets
- *
- * dst.memview = <__pyx_memoryview *> memview # <<<<<<<<<<<<<<
- * dst.data = memview.view.buf
- *
- */
- __pyx_v_dst->memview = ((struct __pyx_memoryview_obj *)__pyx_v_memview);
-
- /* "View.MemoryView":1074
- *
- * dst.memview = <__pyx_memoryview *> memview
- * dst.data = memview.view.buf # <<<<<<<<<<<<<<
- *
- * for dim in range(memview.view.ndim):
- */
- __pyx_v_dst->data = ((char *)__pyx_v_memview->view.buf);
-
- /* "View.MemoryView":1076
- * dst.data = memview.view.buf
- *
- * for dim in range(memview.view.ndim): # <<<<<<<<<<<<<<
- * dst.shape[dim] = shape[dim]
- * dst.strides[dim] = strides[dim]
- */
- __pyx_t_2 = __pyx_v_memview->view.ndim;
- __pyx_t_3 = __pyx_t_2;
- for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {
- __pyx_v_dim = __pyx_t_4;
-
- /* "View.MemoryView":1077
- *
- * for dim in range(memview.view.ndim):
- * dst.shape[dim] = shape[dim] # <<<<<<<<<<<<<<
- * dst.strides[dim] = strides[dim]
- * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1
- */
- (__pyx_v_dst->shape[__pyx_v_dim]) = (__pyx_v_shape[__pyx_v_dim]);
-
- /* "View.MemoryView":1078
- * for dim in range(memview.view.ndim):
- * dst.shape[dim] = shape[dim]
- * dst.strides[dim] = strides[dim] # <<<<<<<<<<<<<<
- * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1
- *
- */
- (__pyx_v_dst->strides[__pyx_v_dim]) = (__pyx_v_strides[__pyx_v_dim]);
-
- /* "View.MemoryView":1079
- * dst.shape[dim] = shape[dim]
- * dst.strides[dim] = strides[dim]
- * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_copy_object')
- */
- if ((__pyx_v_suboffsets != 0)) {
- __pyx_t_5 = (__pyx_v_suboffsets[__pyx_v_dim]);
- } else {
- __pyx_t_5 = -1L;
- }
- (__pyx_v_dst->suboffsets[__pyx_v_dim]) = __pyx_t_5;
- }
-
- /* "View.MemoryView":1065
- *
- * @cname('__pyx_memoryview_slice_copy')
- * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<<
- * cdef int dim
- * cdef (Py_ssize_t*) shape, strides, suboffsets
- */
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
-}
-
-/* "View.MemoryView":1082
- *
- * @cname('__pyx_memoryview_copy_object')
- * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<<
- * "Create a new memoryview object"
- * cdef __Pyx_memviewslice memviewslice
- */
-
-static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *__pyx_v_memview) {
- __Pyx_memviewslice __pyx_v_memviewslice;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("memoryview_copy", 0);
-
- /* "View.MemoryView":1085
- * "Create a new memoryview object"
- * cdef __Pyx_memviewslice memviewslice
- * slice_copy(memview, &memviewslice) # <<<<<<<<<<<<<<
- * return memoryview_copy_from_slice(memview, &memviewslice)
- *
- */
- __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_memviewslice));
-
- /* "View.MemoryView":1086
- * cdef __Pyx_memviewslice memviewslice
- * slice_copy(memview, &memviewslice)
- * return memoryview_copy_from_slice(memview, &memviewslice) # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_memoryview_copy_object_from_slice')
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __pyx_memoryview_copy_object_from_slice(__pyx_v_memview, (&__pyx_v_memviewslice)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1086, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":1082
- *
- * @cname('__pyx_memoryview_copy_object')
- * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<<
- * "Create a new memoryview object"
- * cdef __Pyx_memviewslice memviewslice
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_AddTraceback("View.MemoryView.memoryview_copy", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":1089
- *
- * @cname('__pyx_memoryview_copy_object_from_slice')
- * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<<
- * """
- * Create a new memoryview object from a given memoryview object and slice.
- */
-
-static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_memviewslice) {
- PyObject *(*__pyx_v_to_object_func)(char *);
- int (*__pyx_v_to_dtype_func)(char *, PyObject *);
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- int __pyx_t_2;
- PyObject *(*__pyx_t_3)(char *);
- int (*__pyx_t_4)(char *, PyObject *);
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("memoryview_copy_from_slice", 0);
-
- /* "View.MemoryView":1096
- * cdef int (*to_dtype_func)(char *, object) except 0
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * to_object_func = (<_memoryviewslice> memview).to_object_func
- * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func
- */
- __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type);
- __pyx_t_2 = (__pyx_t_1 != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1097
- *
- * if isinstance(memview, _memoryviewslice):
- * to_object_func = (<_memoryviewslice> memview).to_object_func # <<<<<<<<<<<<<<
- * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func
- * else:
- */
- __pyx_t_3 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_object_func;
- __pyx_v_to_object_func = __pyx_t_3;
-
- /* "View.MemoryView":1098
- * if isinstance(memview, _memoryviewslice):
- * to_object_func = (<_memoryviewslice> memview).to_object_func
- * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func # <<<<<<<<<<<<<<
- * else:
- * to_object_func = NULL
- */
- __pyx_t_4 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_dtype_func;
- __pyx_v_to_dtype_func = __pyx_t_4;
-
- /* "View.MemoryView":1096
- * cdef int (*to_dtype_func)(char *, object) except 0
- *
- * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<<
- * to_object_func = (<_memoryviewslice> memview).to_object_func
- * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func
- */
- goto __pyx_L3;
- }
-
- /* "View.MemoryView":1100
- * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func
- * else:
- * to_object_func = NULL # <<<<<<<<<<<<<<
- * to_dtype_func = NULL
- *
- */
- /*else*/ {
- __pyx_v_to_object_func = NULL;
-
- /* "View.MemoryView":1101
- * else:
- * to_object_func = NULL
- * to_dtype_func = NULL # <<<<<<<<<<<<<<
- *
- * return memoryview_fromslice(memviewslice[0], memview.view.ndim,
- */
- __pyx_v_to_dtype_func = NULL;
- }
- __pyx_L3:;
-
- /* "View.MemoryView":1103
- * to_dtype_func = NULL
- *
- * return memoryview_fromslice(memviewslice[0], memview.view.ndim, # <<<<<<<<<<<<<<
- * to_object_func, to_dtype_func,
- * memview.dtype_is_object)
- */
- __Pyx_XDECREF(__pyx_r);
-
- /* "View.MemoryView":1105
- * return memoryview_fromslice(memviewslice[0], memview.view.ndim,
- * to_object_func, to_dtype_func,
- * memview.dtype_is_object) # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_5 = __pyx_memoryview_fromslice((__pyx_v_memviewslice[0]), __pyx_v_memview->view.ndim, __pyx_v_to_object_func, __pyx_v_to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 1103, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_r = __pyx_t_5;
- __pyx_t_5 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":1089
- *
- * @cname('__pyx_memoryview_copy_object_from_slice')
- * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<<
- * """
- * Create a new memoryview object from a given memoryview object and slice.
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.memoryview_copy_from_slice", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":1111
- *
- *
- * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<<
- * if arg < 0:
- * return -arg
- */
-
-static Py_ssize_t abs_py_ssize_t(Py_ssize_t __pyx_v_arg) {
- Py_ssize_t __pyx_r;
- int __pyx_t_1;
-
- /* "View.MemoryView":1112
- *
- * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil:
- * if arg < 0: # <<<<<<<<<<<<<<
- * return -arg
- * else:
- */
- __pyx_t_1 = ((__pyx_v_arg < 0) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1113
- * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil:
- * if arg < 0:
- * return -arg # <<<<<<<<<<<<<<
- * else:
- * return arg
- */
- __pyx_r = (-__pyx_v_arg);
- goto __pyx_L0;
-
- /* "View.MemoryView":1112
- *
- * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil:
- * if arg < 0: # <<<<<<<<<<<<<<
- * return -arg
- * else:
- */
- }
-
- /* "View.MemoryView":1115
- * return -arg
- * else:
- * return arg # <<<<<<<<<<<<<<
- *
- * @cname('__pyx_get_best_slice_order')
- */
- /*else*/ {
- __pyx_r = __pyx_v_arg;
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":1111
- *
- *
- * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<<
- * if arg < 0:
- * return -arg
- */
-
- /* function exit code */
- __pyx_L0:;
- return __pyx_r;
-}
-
-/* "View.MemoryView":1118
- *
- * @cname('__pyx_get_best_slice_order')
- * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<<
- * """
- * Figure out the best memory access order for a given slice.
- */
-
-static char __pyx_get_best_slice_order(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim) {
- int __pyx_v_i;
- Py_ssize_t __pyx_v_c_stride;
- Py_ssize_t __pyx_v_f_stride;
- char __pyx_r;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- int __pyx_t_4;
-
- /* "View.MemoryView":1123
- * """
- * cdef int i
- * cdef Py_ssize_t c_stride = 0 # <<<<<<<<<<<<<<
- * cdef Py_ssize_t f_stride = 0
- *
- */
- __pyx_v_c_stride = 0;
-
- /* "View.MemoryView":1124
- * cdef int i
- * cdef Py_ssize_t c_stride = 0
- * cdef Py_ssize_t f_stride = 0 # <<<<<<<<<<<<<<
- *
- * for i in range(ndim - 1, -1, -1):
- */
- __pyx_v_f_stride = 0;
-
- /* "View.MemoryView":1126
- * cdef Py_ssize_t f_stride = 0
- *
- * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<<
- * if mslice.shape[i] > 1:
- * c_stride = mslice.strides[i]
- */
- for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) {
- __pyx_v_i = __pyx_t_1;
-
- /* "View.MemoryView":1127
- *
- * for i in range(ndim - 1, -1, -1):
- * if mslice.shape[i] > 1: # <<<<<<<<<<<<<<
- * c_stride = mslice.strides[i]
- * break
- */
- __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1128
- * for i in range(ndim - 1, -1, -1):
- * if mslice.shape[i] > 1:
- * c_stride = mslice.strides[i] # <<<<<<<<<<<<<<
- * break
- *
- */
- __pyx_v_c_stride = (__pyx_v_mslice->strides[__pyx_v_i]);
-
- /* "View.MemoryView":1129
- * if mslice.shape[i] > 1:
- * c_stride = mslice.strides[i]
- * break # <<<<<<<<<<<<<<
- *
- * for i in range(ndim):
- */
- goto __pyx_L4_break;
-
- /* "View.MemoryView":1127
- *
- * for i in range(ndim - 1, -1, -1):
- * if mslice.shape[i] > 1: # <<<<<<<<<<<<<<
- * c_stride = mslice.strides[i]
- * break
- */
- }
- }
- __pyx_L4_break:;
-
- /* "View.MemoryView":1131
- * break
- *
- * for i in range(ndim): # <<<<<<<<<<<<<<
- * if mslice.shape[i] > 1:
- * f_stride = mslice.strides[i]
- */
- __pyx_t_1 = __pyx_v_ndim;
- __pyx_t_3 = __pyx_t_1;
- for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {
- __pyx_v_i = __pyx_t_4;
-
- /* "View.MemoryView":1132
- *
- * for i in range(ndim):
- * if mslice.shape[i] > 1: # <<<<<<<<<<<<<<
- * f_stride = mslice.strides[i]
- * break
- */
- __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1133
- * for i in range(ndim):
- * if mslice.shape[i] > 1:
- * f_stride = mslice.strides[i] # <<<<<<<<<<<<<<
- * break
- *
- */
- __pyx_v_f_stride = (__pyx_v_mslice->strides[__pyx_v_i]);
-
- /* "View.MemoryView":1134
- * if mslice.shape[i] > 1:
- * f_stride = mslice.strides[i]
- * break # <<<<<<<<<<<<<<
- *
- * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride):
- */
- goto __pyx_L7_break;
-
- /* "View.MemoryView":1132
- *
- * for i in range(ndim):
- * if mslice.shape[i] > 1: # <<<<<<<<<<<<<<
- * f_stride = mslice.strides[i]
- * break
- */
- }
- }
- __pyx_L7_break:;
-
- /* "View.MemoryView":1136
- * break
- *
- * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<<
- * return 'C'
- * else:
- */
- __pyx_t_2 = ((abs_py_ssize_t(__pyx_v_c_stride) <= abs_py_ssize_t(__pyx_v_f_stride)) != 0);
- if (__pyx_t_2) {
-
- /* "View.MemoryView":1137
- *
- * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride):
- * return 'C' # <<<<<<<<<<<<<<
- * else:
- * return 'F'
- */
- __pyx_r = 'C';
- goto __pyx_L0;
-
- /* "View.MemoryView":1136
- * break
- *
- * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<<
- * return 'C'
- * else:
- */
- }
-
- /* "View.MemoryView":1139
- * return 'C'
- * else:
- * return 'F' # <<<<<<<<<<<<<<
- *
- * @cython.cdivision(True)
- */
- /*else*/ {
- __pyx_r = 'F';
- goto __pyx_L0;
- }
-
- /* "View.MemoryView":1118
- *
- * @cname('__pyx_get_best_slice_order')
- * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<<
- * """
- * Figure out the best memory access order for a given slice.
- */
-
- /* function exit code */
- __pyx_L0:;
- return __pyx_r;
-}
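
In plain terms, `get_best_order` looks at the innermost axis of extent greater than one (whose stride a C-contiguous layout keeps smallest) and the outermost such axis (whose stride a Fortran-contiguous layout keeps smallest), skips extent-1 axes because their strides carry no information, and returns 'C' if the inner stride magnitude is no larger than the outer one, otherwise 'F'. The sketch below restates that decision over bare shape/stride arrays; the flat-array signature is an assumption made to keep the example self-contained rather than the `__Pyx_memviewslice` struct used above.

```c
#include <stddef.h>

/* Pick the friendlier copy order for a strided block: 'C' when the last
 * non-trivial axis has the smaller |stride|, 'F' when the first one does.
 * Axes of extent <= 1 are skipped since their strides are arbitrary. */
static char best_order_sketch(const ptrdiff_t *shape,
                              const ptrdiff_t *strides, int ndim)
{
    ptrdiff_t c_stride = 0, f_stride = 0;
    int i;

    for (i = ndim - 1; i >= 0; i--)          /* innermost non-trivial axis */
        if (shape[i] > 1) { c_stride = strides[i]; break; }

    for (i = 0; i < ndim; i++)               /* outermost non-trivial axis */
        if (shape[i] > 1) { f_stride = strides[i]; break; }

    if (c_stride < 0) c_stride = -c_stride;
    if (f_stride < 0) f_stride = -f_stride;
    return (c_stride <= f_stride) ? 'C' : 'F';
}
```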
-
-/* "View.MemoryView":1142
- *
- * @cython.cdivision(True)
- * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<<
- * char *dst_data, Py_ssize_t *dst_strides,
- * Py_ssize_t *src_shape, Py_ssize_t *dst_shape,
- */
-
-static void _copy_strided_to_strided(char *__pyx_v_src_data, Py_ssize_t *__pyx_v_src_strides, char *__pyx_v_dst_data, Py_ssize_t *__pyx_v_dst_strides, Py_ssize_t *__pyx_v_src_shape, Py_ssize_t *__pyx_v_dst_shape, int __pyx_v_ndim, size_t __pyx_v_itemsize) {
- CYTHON_UNUSED Py_ssize_t __pyx_v_i;
- CYTHON_UNUSED Py_ssize_t __pyx_v_src_extent;
- Py_ssize_t __pyx_v_dst_extent;
- Py_ssize_t __pyx_v_src_stride;
- Py_ssize_t __pyx_v_dst_stride;
- int __pyx_t_1;
- int __pyx_t_2;
- int __pyx_t_3;
- Py_ssize_t __pyx_t_4;
- Py_ssize_t __pyx_t_5;
- Py_ssize_t __pyx_t_6;
-
- /* "View.MemoryView":1149
- *
- * cdef Py_ssize_t i
- * cdef Py_ssize_t src_extent = src_shape[0] # <<<<<<<<<<<<<<
- * cdef Py_ssize_t dst_extent = dst_shape[0]
- * cdef Py_ssize_t src_stride = src_strides[0]
- */
- __pyx_v_src_extent = (__pyx_v_src_shape[0]);
-
- /* "View.MemoryView":1150
- * cdef Py_ssize_t i
- * cdef Py_ssize_t src_extent = src_shape[0]
- * cdef Py_ssize_t dst_extent = dst_shape[0] # <<<<<<<<<<<<<<
- * cdef Py_ssize_t src_stride = src_strides[0]
- * cdef Py_ssize_t dst_stride = dst_strides[0]
- */
- __pyx_v_dst_extent = (__pyx_v_dst_shape[0]);
-
- /* "View.MemoryView":1151
- * cdef Py_ssize_t src_extent = src_shape[0]
- * cdef Py_ssize_t dst_extent = dst_shape[0]
- * cdef Py_ssize_t src_stride = src_strides[0] # <<<<<<<<<<<<<<
- * cdef Py_ssize_t dst_stride = dst_strides[0]
- *
- */
- __pyx_v_src_stride = (__pyx_v_src_strides[0]);
-
- /* "View.MemoryView":1152
- * cdef Py_ssize_t dst_extent = dst_shape[0]
- * cdef Py_ssize_t src_stride = src_strides[0]
- * cdef Py_ssize_t dst_stride = dst_strides[0] # <<<<<<<<<<<<<<
- *
- * if ndim == 1:
- */
- __pyx_v_dst_stride = (__pyx_v_dst_strides[0]);
-
- /* "View.MemoryView":1154
- * cdef Py_ssize_t dst_stride = dst_strides[0]
- *
- * if ndim == 1: # <<<<<<<<<<<<<<
- * if (src_stride > 0 and dst_stride > 0 and
- * src_stride == itemsize == dst_stride):
- */
- __pyx_t_1 = ((__pyx_v_ndim == 1) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1155
- *
- * if ndim == 1:
- * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<<
- * src_stride == itemsize == dst_stride):
- * memcpy(dst_data, src_data, itemsize * dst_extent)
- */
- __pyx_t_2 = ((__pyx_v_src_stride > 0) != 0);
- if (__pyx_t_2) {
- } else {
- __pyx_t_1 = __pyx_t_2;
- goto __pyx_L5_bool_binop_done;
- }
- __pyx_t_2 = ((__pyx_v_dst_stride > 0) != 0);
- if (__pyx_t_2) {
- } else {
- __pyx_t_1 = __pyx_t_2;
- goto __pyx_L5_bool_binop_done;
- }
-
- /* "View.MemoryView":1156
- * if ndim == 1:
- * if (src_stride > 0 and dst_stride > 0 and
- * src_stride == itemsize == dst_stride): # <<<<<<<<<<<<<<
- * memcpy(dst_data, src_data, itemsize * dst_extent)
- * else:
- */
- __pyx_t_2 = (((size_t)__pyx_v_src_stride) == __pyx_v_itemsize);
- if (__pyx_t_2) {
- __pyx_t_2 = (__pyx_v_itemsize == ((size_t)__pyx_v_dst_stride));
- }
- __pyx_t_3 = (__pyx_t_2 != 0);
- __pyx_t_1 = __pyx_t_3;
- __pyx_L5_bool_binop_done:;
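  /* Inside the ndim == 1 branch, __pyx_t_1 now holds the combined test from
   * source lines 1155-1156: both strides positive and equal to itemsize.
   * When it is true, source and destination are contiguous byte runs, so the
   * branch below moves itemsize * dst_extent bytes with a single memcpy
   * instead of copying dst_extent elements one at a time. */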
-
- /* "View.MemoryView":1155
- *
- * if ndim == 1:
- * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<<
- * src_stride == itemsize == dst_stride):
- * memcpy(dst_data, src_data, itemsize * dst_extent)
- */
- if (__pyx_t_1) {
-
- /* "View.MemoryView":1157
- * if (src_stride > 0 and dst_stride > 0 and
- * src_stride == itemsize == dst_stride):
- * memcpy(dst_data, src_data, itemsize * dst_extent) # <<<<<<<<<<<<<<
- * else:
- * for i in range(dst_extent):
- */
- (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, (__pyx_v_itemsize * __pyx_v_dst_extent)));
-
- /* "View.MemoryView":1155
- *
- * if ndim == 1:
- * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<<
- *