diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Danea easyfatt 2013 crack the risks and consequences of using illegal software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Danea easyfatt 2013 crack the risks and consequences of using illegal software.md
deleted file mode 100644
index 88ab444405c3a32caec3925baec77e45ec78d0af..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Danea easyfatt 2013 crack the risks and consequences of using illegal software.md
+++ /dev/null
@@ -1,173 +0,0 @@
-

Danea easyfatt 2013 crack: What is it and how to use it?

-

If you are looking for software that can help you manage your invoices, inventory, orders, quotes, and accounting, you might have heard of Danea easyfatt. This is a popular program designed for small and medium businesses in Italy. However, if you want to use this software without paying for a license, you might also be interested in Danea easyfatt 2013 crack. This is a modified version of the program that allows you to bypass the activation process and use it for free. But what exactly is a crack and how can you use it safely? In this article, we will explain everything you need to know about Danea easyfatt 2013 crack, including how to download, install, and use it.

-

Danea easyfatt 2013 crack


Download File: https://byltly.com/2uKuVE



-

Introduction

-

What is Danea easyfatt?

-

Danea easyfatt is a program developed by Danea Soft (Italia), a company that specializes in creating solutions for small and medium enterprises. Danea easyfatt is one of their flagship products, which offers a comprehensive and user-friendly interface for managing various aspects of your business. With Danea easyfatt, you can:

- Create and send invoices and quotes
- Track your inventory and stock levels
- Manage orders and customers
- Handle your accounting and reports

Danea easyfatt is compatible with Windows operating systems and supports multiple languages. It also comes in different editions depending on your needs: Basic, Professional, Enterprise, etc. However, each edition has a different price tag and requires a license key to activate.

-

What is a crack?

-

A crack is a term used to describe a file or a program that modifies or alters the original software in order to remove or bypass its protection mechanisms. For example, some software require an activation code or a serial number to verify that you have purchased a legitimate copy. A crack can either generate a fake code or replace the original file that checks for the code with a modified one that allows you to use the software without any restrictions.

-

A crack can also be used to unlock or enable features that are otherwise unavailable or limited in the original software. For example, some software have trial versions that expire after a certain period of time or have reduced functionality. A crack can either extend the trial period indefinitely or enable all the features as if you have bought the full version.

-

A crack can be either an executable file (.exe) that you run before or after installing the original software, or a patched file (often a .dll) that you copy and paste into the installation folder of the original software. Sometimes, a crack can also come with instructions or a keygen (a program that generates license keys) that you need to follow carefully.

-

Why would you need a crack for Danea easyfatt 2013?

-

There are many reasons why someone would want to use a crack for Danea easyfatt 2013. Some of them are:

-

Danea easyfatt 2013 full version download
-How to crack Danea easyfatt 2013 software
-Danea easyfatt 2013 serial key generator
-Danea easyfatt 2013 activation code free
-Danea easyfatt 2013 patch download
-Danea easyfatt 2013 license key crack
-Danea easyfatt 2013 torrent download
-Danea easyfatt 2013 keygen online
-Danea easyfatt 2013 cracked version for windows
-Danea easyfatt 2013 registration code crack
-Danea easyfatt 2013 product key crack
-Danea easyfatt 2013 crack mac os x
-Danea easyfatt 2013 crack no survey
-Danea easyfatt 2013 crack without password
-Danea easyfatt 2013 crack direct download link
-Danea easyfatt 2013 crack rar file
-Danea easyfatt 2013 crack zip file
-Danea easyfatt 2013 crack iso file
-Danea easyfatt 2013 crack exe file
-Danea easyfatt 2013 crack setup file
-Danea easyfatt 2013 crack installer file
-Danea easyfatt 2013 crack portable file
-Danea easyfatt 2013 crack working file
-Danea easyfatt 2013 crack latest version
-Danea easyfatt 2013 crack updated version
-Danea easyfatt 2013 crack with tutorial
-Danea easyfatt 2013 crack with instructions
-Danea easyfatt 2013 crack with guide
-Danea easyfatt 2013 crack with manual
-Danea easyfatt 2013 crack with video
-Danea easyfatt 2013 crack with proof
-Danea easyfatt 2013 crack with reviews
-Danea easyfatt 2013 crack with testimonials
-Danea easyfatt 2013 crack with feedbacks
-Danea easyfatt 2013 crack with ratings
-Danea easyfatt 2013 crack with comments
-Danea easyfatt 2013 crack with support
-Danea easyfatt 2013 crack with helpdesk
-Danea easyfatt 2013 crack with customer service
-Danea easyfatt 2013 crack with warranty
-Danea easyfatt 2013 crack with guarantee
-Danea easyfatt 2013 crack with refund policy
-Danea easyfatt 2013 crack with discount offer
-Danea easyfatt 2013 crack with coupon code
-Danea easyfatt 2013 crack with promo code
-Danea easyfatt 2013 crack with free trial
-Danea easyfatt 2013 crack with free download link

- To save the money you would otherwise spend on a license
- To use the full feature set of the paid editions without paying for them
- To keep using the program after the trial period expires

However, using a crack also comes with some risks and disadvantages. Some of them are:

- It is illegal and unethical, and may expose you to legal consequences
- The crack file may contain malware or viruses that can harm your device or data
- You will not receive official updates or support from Danea Soft
- The software may not work properly or may be incompatible with your system

Therefore, before using a crack for Danea easyfatt 2013, you should weigh the pros and cons carefully and decide whether it is worth it or not.

-

How to download and install Danea easyfatt 2013 crack

-

Where to find the crack file

-

If you have decided to use a crack for Danea easyfatt 2013, you need to find a reliable source where you can download it. There are many websites that offer cracks for various software but not all of them are trustworthy or safe. Some of them may contain fake links or malicious files that can harm your device or data. Therefore, you should be careful when choosing where to download from.

-

One way to find a reputable website is to look for reviews or feedback from other users who have downloaded from there before. You can also check if the website has any security certificates or badges that indicate its legitimacy. Another way is to use an antivirus program or an online scanner tool that can scan the website or the file for any potential threats before downloading.
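A related habit that costs nothing: when a site publishes a checksum for a download, compare it against the file you actually received. Below is a minimal, illustrative Python sketch; the file name and expected hash are placeholders, not values from this article:

```python
import hashlib

file_path = "downloaded_file.rar"  # placeholder: the file you downloaded
expected = "put_the_published_sha256_here"  # placeholder: hash listed by the site

# Hash the file in chunks so large downloads don't need to fit in memory
sha256 = hashlib.sha256()
with open(file_path, "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        sha256.update(chunk)

print("Match" if sha256.hexdigest() == expected else "Mismatch - do not use the file")
```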

-

For example, one website that claims to offer Danea easyfatt 2013 crack is . According to this website,

-

"Salve a tutti, come da richiesta abbiamo messo a disposizione Danea Easyfatt Enterprise per i sistemi Windows. Consiglio di utilizzare il software jdownloader.org per poter scaricare le varie parti comodamente e WinRaR per estrarre l’archivio."

-

This means "Hello everyone, as requested we have made available Danea Easyfatt Enterprise for Windows systems. I recommend using jdownloader.org software to download various parts comfortably and WinRaR to extract the archive."

-

The website also provides three mirror links where you can download the archive file named Danea_EasyFatt_Enterprise_2020_v46c_Build_6011.rar. The password to open the archive is apritisesamo.

-

How to disable antivirus and extract the file

-

Before installing the program, you need to disable your antivirus and extract the file from the archive. This is because your antivirus may detect the crack as a threat and block or delete it. To disable your antivirus, you can follow these steps:

-
1. Open your antivirus program and go to its settings or options menu.
2. Look for an option that allows you to turn off or pause the protection temporarily. It may be called something like "Disable", "Deactivate", "Suspend", etc.
3. Select the option and choose how long you want to disable it. It may be in minutes, hours, or until restart. You can also choose which components of protection you want to disable, such as real-time scanning, firewall, etc.
4. Confirm your choice and close your antivirus program. You should see an icon on your taskbar indicating that your antivirus is off.
To extract the file from the archive, you need software that can handle RAR files. One of the most popular and free options is 7-Zip, which you can download from . After installing 7-Zip, you can follow these steps:

-
1. Right-click on the archive file and select "7-Zip" from the menu.
2. Select one of the "Extract" options, depending on where you want to extract the files. You can choose to extract them to a new folder with the same name as the archive, to the current folder, or to a custom location.
3. Enter the password apritisesamo when prompted and click "OK".
4. Wait for the extraction process to finish. You should see a new folder or files in the destination you chose.
-

How to install the program and replace the exe file

-

After extracting the file from the archive, you need to install the program and replace the original exe file with the cracked one. To do that, you can follow these steps:

-
1. Open the folder where you extracted the files and double-click on the Setup.exe file.
2. Follow the instructions on the screen to install Danea easyfatt 2013 on your device. You can choose your preferred language, destination folder, and shortcuts.
3. When the installation is complete, close the program completely. You can also exit it from the system tray if it is running in the background.
4. Open the folder named "Crack" and copy the DaneaEasyFatt.exe file.
5. Paste it into the installation folder of Danea easyfatt 2013, which is usually located at C:\Program Files (x86)\Danea Easyfatt 2013.
6. If prompted to replace or overwrite the existing file, click "Yes" or "Replace".
-

How to use Danea easyfatt 2013 crack

-

How to activate the program with the crack

-

Now that you have installed the program and replaced the exe file, you can activate the program with the crack. To do that, you can follow these steps:

-
1. Launch Danea easyfatt 2013 from your desktop or start menu shortcut.
2. You should see a window asking you to enter your license key or activate online. Click on "Activate online".
3. You should see another window asking you to enter your email address and password. Enter any email address and password you want and click "OK".
4. You should see a message saying that your activation was successful and that you have a valid license for Danea easyfatt Enterprise 2020.
5. Click "OK" and enjoy using Danea easyfatt 2013 crack.
-

How to access the features and functions of Danea easyfatt

-

Danea easyfatt 2013 crack allows you to access all the features and functions of Danea easyfatt Enterprise 2020, which is the most advanced edition of the software. You can explore the various menus, tabs, and buttons on the main interface to find what you need. Some of the main features and functions are:

- Creating and managing invoices, quotes, and orders
- Tracking your inventory
- Handling your accounting and reports
- Managing your customers

How to avoid errors or problems with the crack

-

Danea easyfatt 2013 crack may not work perfectly for everyone. You may encounter some errors or problems with the software functionality or compatibility. To avoid or fix them, you can try some of these tips:

- -

Conclusion

-

Summary of the main points

-

In this article, we have explained what Danea easyfatt 2013 crack is and how to use it. We have covered:

- What Danea easyfatt is and what a crack is
- Why someone would want to use a crack for Danea easyfatt 2013
- Where to find the crack file and how to download it
- How to disable your antivirus, extract the archive, and install the program
- How to activate the program with the crack and access its features

Benefits and risks of using a crack

-

We have also discussed some of the benefits and risks of using a crack for Danea easyfatt 2013. Some of them are:

- Benefits: using the software for free and accessing all the features of the Enterprise edition
- Risks: legal consequences, malware infections, lack of updates and support, and possible errors or compatibility problems

Call to action and disclaimer

-

We hope this article has been helpful for you in understanding and using Danea easyfatt 2013 crack. However, we do not endorse or recommend using cracks for any software as they are illegal and unethical. We are not responsible for any damages or losses that may result from using cracks. We advise you to use cracks at your own risk and discretion. If you like Danea easyfatt and find it useful for your business needs, we encourage you to buy a legitimate license from Danea Soft (Italia) and support their work. Thank you for reading this article!

- **FAQs**

Q: What is Danea easyfatt?
A: Danea easyfatt is a software that helps you manage your invoices, inventory, orders, quotes, and accounting.

Q: What is a crack?
A: A crack is a file or a program that modifies or alters the original software in order to remove or bypass its protection mechanisms.

Q: How do I download Danea easyfatt 2013 crack?
A: You need

0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download BEST Microsoft Office Professional Plus 2013 Rtm Activation.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download BEST Microsoft Office Professional Plus 2013 Rtm Activation.md
deleted file mode 100644
index 10e80be88f86cbc9b8bf737c52b6f941509c9c40..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download BEST Microsoft Office Professional Plus 2013 Rtm Activation.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-

Microsoft Office is a series of office applications offered by Microsoft for home and business use. Office has advanced features like PDF editing, advanced multimedia functions, good touch navigation, and helpful new assistants, but also some disadvantages, since the user has almost no choice but to accept cloud use and tablet-oriented work. Both the 32-bit and the 64-bit client applications are supported by Office 2013. You can even use the trial version of Office 2013 for 30 days to test it without having to buy it; you'll get a different Office 2013 product key to keep it operating for one month. You will be able to access Word 2013, PowerPoint 2013, Excel 2013, and Outlook 2013 with this package.

-

Yes. AWS Support has been successfully supporting our customers who run Microsoft Windows-based EC2 instances in the AWS cloud since 2008, when we first launched Windows Server on EC2. Our support engineers have deep experience with Microsoft technologies on AWS, including Amazon EC2, Amazon ECS, Amazon RDS, Amazon WorkSpaces, and others. Now AWS has further enhanced our support capabilities with a new additional direct engagement between AWS Support and Microsoft Support, to help ensure high-quality support and issue resolution for our customers. To find more information on end of support (EOS) for Microsoft products, go here.

-

Download Microsoft Office Professional Plus 2013 RTM Activation


Download 🔗 https://imgfil.com/2uxXBc



-

Per Microsoft's Visual Studio licensing guide, Visual Studio subscriptions purchased through certain channels provide perpetual use rights even after the subscription has expired. The use of perpetual licenses acquired before 10/1/2019 for products released prior to 10/1/2019 is permitted on AWS dedicated infrastructure regardless of the renewal or expiration of the subscription under which the perpetual licenses were acquired. AWS also offers fully compliant, Amazon-provided licenses for Microsoft Visual Studio Enterprise 2022 and Microsoft Visual Studio Professional 2022 Amazon Machine Images (AMIs) on Amazon Elastic Compute Cloud (Amazon EC2). These AMIs are available on the Amazon EC2 console and on AWS Marketplace, to launch instances on demand without any long-term licensing commitments. To learn more, visit the AWS License Manager User Guide.

899543212b
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Clash Royale is waiting for you on yapup.site download and play today.md b/spaces/1phancelerku/anime-remove-background/Clash Royale is waiting for you on yapup.site download and play today.md
deleted file mode 100644
index ce3dc01ee133084a63ba75716cda060d8e5304b2..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Clash Royale is waiting for you on yapup.site download and play today.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-

How to Download Clash Royale from Yapup.site

-

If you are looking for a fun and addictive game to play on your Android device, you might want to try Clash Royale. It is a real-time multiplayer battle game that features your favorite characters from Clash of Clans and more. In this article, we will show you how to download Clash Royale from Yapup.site, a website that offers free APK downloads for Android games and apps. We will also give you some tips and tricks to help you win at Clash Royale.

-

What is Clash Royale?

-

A real-time multiplayer battle game

-

Clash Royale is a game developed and published by Supercell, the same company behind the popular Clash of Clans. It was released in 2016 and has since become one of the most played mobile games in the world. In Clash Royale, you have to collect and upgrade cards that feature troops, spells, and defenses from the Clash universe. You then use these cards to create your own battle deck and fight against other players online in fast-paced matches. The goal is to destroy your opponent's three towers, including the king tower, while protecting your own. You can also join or form clans with other players and participate in clan wars, tournaments, and seasonal events.

-

yapup.site download clash royale


DOWNLOAD ››››› https://jinyurl.com/2uNJOn



-

Features of Clash Royale

-

Some of the features that make Clash Royale an exciting and challenging game are:

- Collecting and upgrading over 100 cards featuring troops, spells, and defenses from the Clash universe
- Fast-paced real-time 1v1 battles against players from around the world
- Clans where you can chat, share cards, and take part in clan wars
- Tournaments and seasonal events

What is Yapup.site?

-

A website that offers free APK downloads

-

Yapup.site is a website that provides free APK downloads for Android games and apps. APK stands for Android Package Kit, which is a file format that contains all the elements needed to install an app on an Android device. By downloading APK files from Yapup.site, you can access games and apps that are not available on the Google Play Store or that are restricted in your region. You can also get the latest updates and versions of your favorite games and apps before they are officially released.
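Because an APK is internally just a ZIP archive, you can inspect one before installing it. Here is a minimal sketch using Python's standard library; the file name is a placeholder:

```python
import zipfile

apk_path = "example.apk"  # placeholder: path to a downloaded APK

# An APK is a ZIP package, so zipfile can open it directly
with zipfile.ZipFile(apk_path) as apk:
    if apk.testzip() is None:  # returns the first corrupt member, or None
        print("Archive integrity OK")
    # Typical entries: AndroidManifest.xml, classes.dex, resources.arsc
    for name in apk.namelist()[:10]:
        print(name)
```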

-

Benefits of using Yapup.site

-

Some of the benefits of using Yapup.site to download APK files are:

- Access games and apps that are not available on the Google Play Store
- Download apps that are restricted in your region
- Get the latest updates and versions before they are officially released
- Download APK files for free

How to Download Clash Royale from Yapup.site

-

Step 1: Visit the website

-

The first step to download Clash Royale from Yapup.site is to visit the website using your web browser. You can use any browser you prefer, such as Chrome, Firefox, Safari, or Opera. The website has a simple and user-friendly interface that allows you to easily navigate and find the games and apps you want.

-

Step 2: Search for Clash Royale

-

The next step is to search for Clash Royale on the website. You can use the search bar at the top of the homepage to type in the name of the game. Alternatively, you can browse through the categories and genres of games and apps on the website. You can also check out the featured, popular, and new games and apps on the homepage. Once you find Clash Royale, click on it to open its page.

-

Step 3: Click on the download button

-

The third step is to click on the download button on the Clash Royale page. You will see a green button that says "Download APK" at the bottom of the page. You will also see some information about the game, such as its size, version, developer, rating, and description. You can read this information to learn more about the game and its features. You can also see some screenshots and videos of the game to get a glimpse of its gameplay. After you click on the download button, you will be redirected to another page where you have to wait for a few seconds before the download starts.

-

yapup.site clash royale apk free download
-yapup.site clash royale mod download for android
-yapup.site download clash royale latest version
-yapup.site download clash royale on pc
-yapup.site download clash royale hack
-yapup.site download clash royale update
-yapup.site download clash royale private server
-yapup.site download clash royale for ios
-yapup.site download clash royale online
-yapup.site download clash royale game
-yapup.site download clash royale cheats
-yapup.site download clash royale gems generator
-yapup.site download clash royale cards
-yapup.site download clash royale decks
-yapup.site download clash royale strategy guide
-yapup.site download clash royale tips and tricks
-yapup.site download clash royale wallpaper
-yapup.site download clash royale videos
-yapup.site download clash royale replays
-yapup.site download clash royale tournaments
-yapup.site download clash royale clan wars
-yapup.site download clash royale season pass
-yapup.site download clash royale emotes
-yapup.site download clash royale skins
-yapup.site download clash royale magic items
-yapup.site download clash royale challenges
-yapup.site download clash royale events
-yapup.site download clash royale news
-yapup.site download clash royale reddit
-yapup.site download clash royale wiki
-yapup.site download clash royale fan art
-yapup.site download clash royale memes
-yapup.site download clash royale merchandise
-yapup.site download clash royale forum
-yapup.site download clash royale support
-yapup.site download clash royale reviews
-yapup.site download clash royale ratings
-yapup.site download clash royale statistics
-yapup.site download clash royale history
-yapup.site download clash royale developer blog

-

Step 4: Install the APK file

-

The final step is to install the APK file on your Android device. After the download is complete, you will see a notification on your device that says "Download complete". You can tap on this notification to open the APK file. Alternatively, you can go to your device's file manager and locate the APK file in your downloads folder. Before you install the APK file, you have to enable the installation of unknown sources on your device. To do this, go to your device's settings and then security. Find the option that says "Unknown sources" and toggle it on. This will allow you to install apps from sources other than the Google Play Store. After you enable this option, you can tap on the APK file and follow the instructions on your screen to install Clash Royale on your device.
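If you install from a computer instead of tapping the file on the phone, the standard Android adb tool can sideload an APK over USB. A hedged sketch, assuming adb is installed and USB debugging is enabled on the device; the file name is a placeholder:

```python
import subprocess

apk_path = "example.apk"  # placeholder: the APK you downloaded

# "adb install -r" installs the package, reinstalling over an existing copy
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)  # adb prints "Success" on a good install
```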

-

Tips and Tricks for Playing Clash Royale

-

Join a clan and share cards

-

One of the best ways to improve your skills and progress in Clash Royale is to join a clan and share cards with other players. A clan is a group of players who can chat, donate, request, and trade cards with each other. By joining a clan, you can get more cards to upgrade your deck and also learn from other players' strategies and tips. You can also participate in clan wars and earn rewards for your clan.

-

Build a balanced deck and use your elixir wisely

-

Another important tip for playing Clash Royale is to build a balanced deck and use your elixir wisely. A balanced deck is one that has a good mix of cards that can counter different types of threats and also deal damage to your opponent's towers. You should have cards that can attack from a distance, such as archers or fireball; cards that can tank damage, such as giant or knight; cards that can swarm or distract, such as goblins or skeletons; and cards that can support or enhance, such as witch or rage. You should also have cards that cost different amounts of elixir, so that you can always have something to play depending on your elixir level.

Elixir is the resource that you use to play cards in Clash Royale. It regenerates over time during a match, but it is limited by a maximum of 10 units. Therefore, you have to be careful not to waste elixir by playing cards that are not needed or effective. You should also try to gain an elixir advantage over your opponent by playing cards that cost less than their counters or by making positive trades. For example, if you use a fireball that costs 4 elixir to destroy a minion horde that costs 5 elixir, you gain an elixir advantage of 1 unit.
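The elixir-trade idea above is plain bookkeeping, which a toy snippet makes explicit; the card costs are just the figures from the example in this section:

```python
def elixir_trade(your_cost: int, opponent_cost: int) -> int:
    """Positive return value means you came out ahead on the exchange."""
    return opponent_cost - your_cost

# Fireball (4 elixir) answering a Minion Horde (5 elixir) -> +1 advantage
print(elixir_trade(your_cost=4, opponent_cost=5))  # prints 1
```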

-

Defend your towers and attack the enemy's weak spots

-

The last tip for playing Clash Royale is to defend your towers and attack the enemy's weak spots. Your towers are your main defense against your opponent's attacks. They have high health and damage output, but they are vulnerable to certain types of cards or combinations. Therefore, you have to protect them by placing your troops strategically and using spells or buildings when necessary. On the other hand, you also have to find opportunities to attack your opponent's towers and deal damage to them. You should look for their weak spots, such as their low-health towers or their lack of counters for your cards. You should also try to exploit their mistakes, such as their overcommitment or their poor placement of cards. You should also try to create combos or synergies with your cards, such as using a hog rider with a freeze spell or using a balloon with a rage spell.

-

Conclusion

-

Clash Royale is a fun and addictive game that you can download and play on your Android device. You can download it from Yapup.site, a website that offers free APK downloads for Android games and apps. You can also follow the tips and tricks we shared in this article to improve your skills and win more matches. We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please let us know in the comments section below. Happy clashing!

-

FAQs

-

Here are some frequently asked questions about Clash Royale and Yapup.site:

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Question | Answer |
| --- | --- |
| Is Clash Royale free to play? | Yes, Clash Royale is free to download and play. However, it also offers in-app purchases that can enhance your gaming experience. |
| Is Yapup.site safe to use? | Yes, Yapup.site is safe to use. It does not contain any malware or viruses that can harm your device. However, you should always be careful when downloading APK files from unknown sources and scan them with an antivirus before installing them. |
| How can I update Clash Royale from Yapup.site? | You can update Clash Royale from Yapup.site by visiting the website again and downloading the latest version of the game. You can also enable the auto-update feature on your device's settings to get the updates automatically. |
| How can I contact the support team of Clash Royale? | You can contact the support team of Clash Royale by tapping on the settings icon on the top right corner of the game screen and then tapping on the help and support button. You can also visit the official website or social media pages of Clash Royale for more information and assistance. |
| How can I contact the support team of Yapup.site? | You can contact the support team of Yapup.site by visiting the website and clicking on the contact us button at the bottom of the page. You can also email them at yapup.site@gmail.com or follow them on Facebook or Twitter for more updates and news. |

401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Dislyte A Stylish Urban RPG with Divine Power and Funky Music.md b/spaces/1phancelerku/anime-remove-background/Dislyte A Stylish Urban RPG with Divine Power and Funky Music.md
deleted file mode 100644
index c9c9c3893c877c75ac5847e4066c318398588c9f..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Dislyte A Stylish Urban RPG with Divine Power and Funky Music.md
+++ /dev/null
@@ -1,81 +0,0 @@
-

Dislyte Global Download: How to Play the Stylish Urban Mythological RPG on PC and Mobile

-

Introduction

-

If you are a fan of pop-fantasy RPGs with a striking audio-visual experience, you might want to check out Dislyte, a new game that features heroes and monsters from mythology. Dislyte is set in a futuristic urban playground where mysterious powers and mythology collide. You can build your own squad of Espers, who are ordinary people with divine powers from gods of worldwide mythologies, and fight against the greatest threat to humanity.

-

dislyte global download


Download Zip ✑ ✑ ✑ https://jinyurl.com/2uNL2D



-

In this article, we will show you how to download and play Dislyte on PC and mobile devices, so that you can enjoy the game's high-quality soundtracks and graphics, as well as grind easier without draining your battery. We will also share some tips and tricks to improve your gaming experience.

-

What is Dislyte?

-

Dislyte is a pop-fantasy RPG developed by FARLIGHT and published by Lilith Games. It was released globally in May 2023, after a successful soft launch in selected regions. The game has received positive reviews from players and critics, who praised its unique art style, engaging gameplay, and diverse characters.

-

Dislyte is inspired by various mythologies, such as Chinese, Egyptian, Greek, and Northern European. You can collect and customize over 100 Espers, each with their own skills, personalities, and appearances. You can also form teams with other players and participate in various modes, such as story mode, arena mode, raid mode, and more.

-

Why play Dislyte on PC and mobile?

-

Dislyte is a game that can be enjoyed on both PC and mobile devices. Playing Dislyte on PC has some advantages, such as:

- A bigger screen to enjoy the game's high-quality soundtracks and graphics
- Easier grinding with keyboard and mouse controls
- No drain on your phone's battery

Playing Dislyte on mobile devices also has some benefits, such as:

- Playing anywhere, anytime, without a computer
- Touch controls designed for the game
- No emulator or extra setup required

No matter what device you choose to play Dislyte on, you will have a fun and immersive gaming experience.

-

How to download and play Dislyte on PC and Mac

-

If you want to play Dislyte on PC or Mac, you will need an emulator that can run Android apps on your computer. We recommend using LDPlayer, which is one of the best emulators for playing mobile games on PC. Here are the steps to download and play Dislyte on PC and Mac using LDPlayer:

-

Step 1: Download LDPlayer emulator

-

Go to this link and download LDPlayer emulator for your PC or Mac. Make sure you download the 64-bit version if asked. After downloading, install LDPlayer on your computer by following the instructions.

-

How to download and play Dislyte on PC, Mac & Mobile
-Dislyte APK download for Android devices
-Dislyte official website and social media links
-Dislyte review and gameplay guide
-Dislyte best espers and tier list
-Dislyte codes and how to redeem them
-Dislyte latest news and updates
-Dislyte tips and tricks for beginners
-Dislyte soundtrack and graphics quality
-Dislyte system requirements and compatibility
-Dislyte vs other pop-fantasy RPGs
-Dislyte characters and their mythological origins
-Dislyte story and lore overview
-Dislyte PvP and PvE modes
-Dislyte gacha system and rates
-Dislyte relics and how to farm them
-Dislyte team building and strategy
-Dislyte events and rewards
-Dislyte bugs and issues report
-Dislyte fan art and community
-Dislyte wiki and FAQ
-Dislyte emulator download for PC users
-Dislyte VPN download for region locked players
-Dislyte mod apk download and features
-Dislyte cheats and hacks warning
-Dislyte support and customer service contact
-Dislyte gameplay video and streamers recommendation
-Dislyte memes and funny moments
-Dislyte skins and costumes preview
-Dislyte collaborations and crossover events
-Dislyte reroll guide and best starter espers
-Dislyte coupon codes and freebies giveaway
-Dislyte QooApp download for iOS users
-Dislyte discord server and reddit forum join link
-Dislyte ratings and feedback from players
-Dislyte developer interview and behind the scenes
-Dislyte future plans and roadmap reveal
-Dislyte comparison with Farlight 84, another game by Lilith Games
-Dislyte global release date and countdown timer
-Dislyte pre-registration rewards and how to claim them

-

Step 2: Install Dislyte from Google
  • A: For playing Dislyte on PC, you need a Windows 7 or higher operating system, an Intel or AMD CPU, 4 GB of RAM, and 4 GB of disk space. For playing Dislyte on mobile, you need an Android 5.0 or higher device with at least 2 GB of RAM and 3 GB of storage space.
  • -
  • Q: How can I get more Espers in Dislyte?
  • -
  • A: You can get more Espers in Dislyte by summoning them with crystals or tickets, which can be obtained from completing quests, events, achievements, or purchasing them with real money. You can also upgrade your Espers by enhancing their skills, relics, and star levels.
  • -
  • Q: How can I join a guild in Dislyte?
  • -
  • A: You can join a guild in Dislyte by tapping on the guild icon on the main screen and searching for a guild that suits your preferences. You can also create your own guild if you have enough crystals. Joining a guild will allow you to chat with other members, participate in guild wars, and receive guild rewards.
  • -
  • Q: How can I contact the customer service of Dislyte?
  • -
  • A: You can contact the customer service of Dislyte by tapping on the gear icon on the top right corner and then tapping on Customer Service. You can also send an email to dislyte@lilithgames.com or visit their official website or social media pages for more information.
  • -
  • Q: What are the best Espers to use in Dislyte?
  • -
  • A: There is no definitive answer to this question, as different Espers have different strengths and weaknesses, and the best Espers may vary depending on your play style, team composition, and game mode. However, some of the popular Espers that are considered to be powerful and versatile are Zeus, Athena, Odin, Thor, Ra, Anubis, and Sun Wukong.
  • -

    197e85843d
    -
    -
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/experimental/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/experimental/__init__.py
deleted file mode 100644
index a775a741f2a5383b4ab8269dec842f59da5d69d4..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/experimental/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# flake8: noqa
-
-from .rl import ValueGuidedRLPipeline
diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/ddpm/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/ddpm/__init__.py
deleted file mode 100644
index 19f629ea8ffb6f3af770b737c52b6f941509c9c40..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/ddpm/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# flake8: noqa
-from .pipeline_ddpm import DDPMPipeline
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/audio/audio_processing.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/audio/audio_processing.py
deleted file mode 100644
index 77a4057aa82f226f68474f4c2a19eba84510d663..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/audio/audio_processing.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import torch
-import numpy as np
-import librosa.util as librosa_util
-from scipy.signal import get_window
-
-
-def window_sumsquare(
-    window,
-    n_frames,
-    hop_length,
-    win_length,
-    n_fft,
-    dtype=np.float32,
-    norm=None,
-):
-    """
-    # from librosa 0.6
-    Compute the sum-square envelope of a window function at a given hop length.
-
-    This is used to estimate modulation effects induced by windowing
-    observations in short-time fourier transforms.
-
-    Parameters
-    ----------
-    window : string, tuple, number, callable, or list-like
-        Window specification, as in `get_window`
-
-    n_frames : int > 0
-        The number of analysis frames
-
-    hop_length : int > 0
-        The number of samples to advance between frames
-
-    win_length : [optional]
-        The length of the window function. By default, this matches `n_fft`.
-
-    n_fft : int > 0
-        The length of each analysis frame.
-
-    dtype : np.dtype
-        The data type of the output
-
-    Returns
-    -------
-    wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))`
-        The sum-squared envelope of the window function
-    """
-    if win_length is None:
-        win_length = n_fft
-
-    n = n_fft + hop_length * (n_frames - 1)
-    x = np.zeros(n, dtype=dtype)
-
-    # Compute the squared window at the desired length
-    win_sq = get_window(window, win_length, fftbins=True)
-    win_sq = librosa_util.normalize(win_sq, norm=norm) ** 2
-    win_sq = librosa_util.pad_center(win_sq, n_fft)
-
-    # Fill the envelope
-    for i in range(n_frames):
-        sample = i * hop_length
-        x[sample : min(n, sample + n_fft)] += win_sq[: max(0, min(n_fft, n - sample))]
-    return x
-
-
-def griffin_lim(magnitudes, stft_fn, n_iters=30):
-    """
-    PARAMS
-    ------
-    magnitudes: spectrogram magnitudes
-    stft_fn: STFT class with transform (STFT) and inverse (ISTFT) methods
-    """
-
-    angles = np.angle(np.exp(2j * np.pi * np.random.rand(*magnitudes.size())))
-    angles = angles.astype(np.float32)
-    angles = torch.autograd.Variable(torch.from_numpy(angles))
-    signal = stft_fn.inverse(magnitudes, angles).squeeze(1)
-
-    for i in range(n_iters):
-        _, angles = stft_fn.transform(signal)
-        signal = stft_fn.inverse(magnitudes, angles).squeeze(1)
-    return signal
-
-
-def dynamic_range_compression(x, normalize_fun=torch.log, C=1, clip_val=1e-5):
-    """
-    PARAMS
-    ------
-    C: compression factor
-    """
-    return normalize_fun(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression(x, C=1):
-    """
-    PARAMS
-    ------
-    C: compression factor used to compress
-    """
-    return torch.exp(x) / C
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/timm_model.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/timm_model.py
deleted file mode 100644
index 071dd148c772f398e87ecbfc836dcfa4a3ae01af..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/timm_model.py
+++ /dev/null
@@ -1,106 +0,0 @@
-""" timm model adapter
-
-Wraps timm (https://github.com/rwightman/pytorch-image-models) models for use as a vision tower in CLIP model.
-""" -from collections import OrderedDict - -import torch.nn as nn - -try: - import timm - from timm.models.layers import Mlp, to_2tuple - from timm.models.layers.attention_pool2d import RotAttentionPool2d - from timm.models.layers.attention_pool2d import AttentionPool2d as AbsAttentionPool2d -except ImportError as e: - timm = None - -from .utils import freeze_batch_norm_2d - - -class TimmModel(nn.Module): - """ timm model adapter - # FIXME this adapter is a work in progress, may change in ways that break weight compat - """ - - def __init__( - self, - model_name, - embed_dim, - image_size=224, - pool='avg', - proj='linear', - drop=0., - pretrained=False): - super().__init__() - if timm is None: - raise RuntimeError("Please `pip install timm` to use timm models.") - - self.image_size = to_2tuple(image_size) - self.trunk = timm.create_model(model_name, pretrained=pretrained) - feat_size = self.trunk.default_cfg.get('pool_size', None) - feature_ndim = 1 if not feat_size else 2 - if pool in ('abs_attn', 'rot_attn'): - assert feature_ndim == 2 - # if attn pooling used, remove both classifier and default pool - self.trunk.reset_classifier(0, global_pool='') - else: - # reset global pool if pool config set, otherwise leave as network default - reset_kwargs = dict(global_pool=pool) if pool else {} - self.trunk.reset_classifier(0, **reset_kwargs) - prev_chs = self.trunk.num_features - - head_layers = OrderedDict() - if pool == 'abs_attn': - head_layers['pool'] = AbsAttentionPool2d(prev_chs, feat_size=feat_size, out_features=embed_dim) - prev_chs = embed_dim - elif pool == 'rot_attn': - head_layers['pool'] = RotAttentionPool2d(prev_chs, out_features=embed_dim) - prev_chs = embed_dim - else: - assert proj, 'projection layer needed if non-attention pooling is used.' 
-
-        # NOTE attention pool ends with a projection layer, so proj should usually be set to '' if such pooling is used
-        if proj == 'linear':
-            head_layers['drop'] = nn.Dropout(drop)
-            head_layers['proj'] = nn.Linear(prev_chs, embed_dim)
-        elif proj == 'mlp':
-            head_layers['mlp'] = Mlp(prev_chs, 2 * embed_dim, embed_dim, drop=drop)
-
-        self.head = nn.Sequential(head_layers)
-
-    def lock(self, unlocked_groups=0, freeze_bn_stats=False):
-        """ lock modules
-        Args:
-            unlocked_groups (int): leave last n layer groups unlocked (default: 0)
-        """
-        if not unlocked_groups:
-            # lock full model
-            for param in self.trunk.parameters():
-                param.requires_grad = False
-            if freeze_bn_stats:
-                freeze_batch_norm_2d(self.trunk)
-        else:
-            # NOTE: partial freeze requires latest timm (master) branch and is subject to change
-            try:
-                # FIXME import here until API stable and in an official release
-                from timm.models.helpers import group_parameters, group_modules
-            except ImportError:
-                raise RuntimeError(
-                    'Please install latest timm `pip install git+https://github.com/rwightman/pytorch-image-models`')
-            matcher = self.trunk.group_matcher()
-            gparams = group_parameters(self.trunk, matcher)
-            max_layer_id = max(gparams.keys())
-            max_layer_id = max_layer_id - unlocked_groups
-            for group_idx in range(max_layer_id + 1):
-                group = gparams[group_idx]
-                for param in group:
-                    self.trunk.get_parameter(param).requires_grad = False
-            if freeze_bn_stats:
-                gmodules = group_modules(self.trunk, matcher, reverse=True)
-                gmodules = {k for k, v in gmodules.items() if v <= max_layer_id}
-                freeze_batch_norm_2d(self.trunk, gmodules)
-
-    def forward(self, x):
-        x = self.trunk(x)
-        x = self.head(x)
-        return x
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_adv_mlm_history.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_adv_mlm_history.py
deleted file mode 100644
index b61a1b2349a34f504ae59aabb3430cc4eb703fbe..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_adv_mlm_history.py
+++ /dev/null
@@ -1,171 +0,0 @@
-import torch
-from torch import nn
-from tasks.tts.ps_adv import PortaSpeechAdvTask, FastSpeechTask
-from text_to_speech.utils.commons.hparams import hparams
-
-
-class PortaSpeechAdvMLMTask(PortaSpeechAdvTask):
-
-    def build_optimizer(self, model):
-        optimizer_gen = torch.optim.AdamW(
-            self.model.parameters(),
-            lr=hparams['lr'],
-            betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
-            weight_decay=hparams['weight_decay'])
-
-        optimizer_disc = torch.optim.AdamW(
-            self.disc_params,
-            lr=hparams['disc_lr'],
-            betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
-            **hparams["discriminator_optimizer_params"]) if len(self.disc_params) > 0 else None
-
-        optimizer_encoder = torch.optim.AdamW(
-            self.model.encoder.parameters(),
-            lr=hparams['lr'],
-            betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
-            weight_decay=hparams['weight_decay'])
-        return [optimizer_gen, optimizer_disc, optimizer_encoder]
-
-    def build_scheduler(self, optimizer):
-        return [
-            FastSpeechTask.build_scheduler(self, optimizer[0]),  # Generator Scheduler
-            torch.optim.lr_scheduler.StepLR(optimizer=optimizer[1],  # Discriminator Scheduler
-                                            **hparams["discriminator_scheduler_params"]),
-            FastSpeechTask.build_scheduler(self, optimizer[2]),  # Generator Scheduler
-        ]
-
-    def on_before_optimization(self, opt_idx):
-        if opt_idx in [0,2]:
-            nn.utils.clip_grad_norm_(self.dp_params, hparams['clip_grad_norm'])
-            if self.use_graph_encoder:
-                nn.utils.clip_grad_norm_(self.gen_params_except_gae_and_dp, hparams['clip_grad_norm'])
-                nn.utils.clip_grad_norm_(self.gae_params, hparams['clip_grad_norm'])
-            elif self.use_bert:
-                nn.utils.clip_grad_norm_(self.gen_params_except_bert_and_dp, hparams['clip_grad_norm'])
-                nn.utils.clip_grad_norm_(self.bert_params, hparams['clip_grad_norm'])
-            else:
-                nn.utils.clip_grad_norm_(self.gen_params_except_dp, hparams['clip_grad_norm'])
-        else:
-            nn.utils.clip_grad_norm_(self.disc_params, hparams["clip_grad_norm"])
-
-    def on_after_optimization(self, epoch, batch_idx, optimizer, optimizer_idx):
-        if self.scheduler is not None:
-            self.scheduler[0].step(self.global_step // hparams['accumulate_grad_batches'])
-            self.scheduler[1].step(self.global_step // hparams['accumulate_grad_batches'])
-            self.scheduler[2].step(self.global_step // hparams['accumulate_grad_batches'])
-
-    def _training_step(self, sample, batch_idx, optimizer_idx):
-        loss_output = {}
-        loss_weights = {}
-        disc_start = self.global_step >= hparams["disc_start_steps"] and hparams['lambda_mel_adv'] > 0
-        if optimizer_idx == 0:
-            #######################
-            #      Generator      #
-            #######################
-            loss_output, model_out = self.run_model(sample, infer=False)
-            self.model_out_gt = self.model_out = \
-                {k: v.detach() for k, v in model_out.items() if isinstance(v, torch.Tensor)}
-            if disc_start:
-                mel_p = model_out['mel_out']
-                if hasattr(self.model, 'out2mel'):
-                    mel_p = self.model.out2mel(mel_p)
-                o_ = self.mel_disc(mel_p)
-                p_, pc_ = o_['y'], o_['y_c']
-                if p_ is not None:
-                    loss_output['a'] = self.mse_loss_fn(p_, p_.new_ones(p_.size()))
-                    loss_weights['a'] = hparams['lambda_mel_adv']
-                if pc_ is not None:
-                    loss_output['ac'] = self.mse_loss_fn(pc_, pc_.new_ones(pc_.size()))
-                    loss_weights['ac'] = hparams['lambda_mel_adv']
-        elif optimizer_idx == 1:
-            #######################
-            #    Discriminator    #
-            #######################
-            if disc_start and self.global_step % hparams['disc_interval'] == 0:
-                model_out = self.model_out_gt
-                mel_g = sample['mels']
-                mel_p = model_out['mel_out']
-                o = self.mel_disc(mel_g)
-                p, pc = o['y'], o['y_c']
-                o_ = self.mel_disc(mel_p)
-                p_, pc_ = o_['y'], o_['y_c']
-                if p_ is not None:
-                    loss_output["r"] = self.mse_loss_fn(p, p.new_ones(p.size()))
-                    loss_output["f"] = self.mse_loss_fn(p_, p_.new_zeros(p_.size()))
-                if pc_ is not None:
-                    loss_output["rc"] = self.mse_loss_fn(pc, pc.new_ones(pc.size()))
-                    loss_output["fc"] = self.mse_loss_fn(pc_, pc_.new_zeros(pc_.size()))
-        else:
-            loss_output, model_out = self.run_contrastive_learning(sample)
-
-        total_loss = sum([loss_weights.get(k, 1) * v for k, v in loss_output.items() if isinstance(v, torch.Tensor) and v.requires_grad])
-        loss_output['batch_size'] = sample['txt_tokens'].size()[0]
-        return total_loss, loss_output
-
-    def run_contrastive_learning(self, sample):
-        losses = {}
-        outputs = {}
-
-        bert = self.model.encoder.bert
-        pooler = self.model.encoder.pooler
-        sim = self.model.encoder.sim
-        # electra_gen = self.model.encoder.electra_gen
-        # electra_disc = self.model.encoder.electra_disc
-        # electra_head = self.model.encoder.electra_head
-
-        cl_feats = sample['cl_feats']
-        bs, _, t = cl_feats['cl_input_ids'].shape
-        cl_input_ids = cl_feats['cl_input_ids'].reshape([bs*2, t])
-        cl_attention_mask = cl_feats['cl_attention_mask'].reshape([bs*2, t])
-        cl_token_type_ids = cl_feats['cl_token_type_ids'].reshape([bs*2, t])
-        cl_output = bert(cl_input_ids, attention_mask=cl_attention_mask, token_type_ids=cl_token_type_ids,)
-        pooler_output = pooler(cl_attention_mask, cl_output)
-        pooler_output = pooler_output.reshape([bs, 2, -1])
-        z1, z2 = pooler_output[:,0], pooler_output[:,1]
-
-        cos_sim = sim(z1.unsqueeze(1), z2.unsqueeze(0))
-        labels = torch.arange(cos_sim.size(0)).long().to(z1.device)
-        ce_fn = nn.CrossEntropyLoss()
-        cl_loss = ce_fn(cos_sim, labels)
-        losses['cl_v'] = cl_loss.detach()
-        losses['cl'] = cl_loss * hparams['lambda_mlm']
-
-        # mlm_input_ids = cl_feats['mlm_input_ids']
-        # mlm_input_ids = mlm_input_ids.view((-1, mlm_input_ids.size(-1)))
-        # with torch.no_grad():
-        #     g_pred = electra_gen(mlm_input_ids, cl_attention_mask)[0].argmax(-1)
-        # g_pred[:, 0] = 101  # CLS token
-        # replaced = (g_pred != cl_input_ids) * cl_attention_mask
-        # e_inputs = g_pred * cl_attention_mask
-        # mlm_outputs = electra_disc(
-        #     e_inputs,
-        #     attention_mask=cl_attention_mask,
-        #     token_type_ids=cl_token_type_ids,
-        #     position_ids=None,
-        #     head_mask=None,
-        #     inputs_embeds=None,
-        #     output_attentions=None,
-        #     output_hidden_states=False,  # True if cls.model_args.pooler_type in ['avg_top2', 'avg_first_last'] else False,
-        #     return_dict=True,
-        #     cls_input=pooler_output.view((-1, pooler_output.size(-1))),
-        # )
-        # e_labels = replaced.view(-1, replaced.size(-1))
-        # prediction_scores = electra_head(mlm_outputs.last_hidden_state)
-        # # rep = (e_labels == 1) * cl_attention_mask
-        # # fix = (e_labels == 0) * cl_attention_mask
-        # # prediction = prediction_scores.argmax(-1)
-        # # self.electra_rep_acc = float((prediction*rep).sum()/rep.sum())
-        # # self.electra_fix_acc = float(1.0 - (prediction*fix).sum()/fix.sum())
-        # # self.electra_acc = float(((prediction == e_labels) * cl_attention_mask).sum()/cl_attention_mask.sum())
-        # masked_lm_loss = ce_fn(prediction_scores.view(-1, 2), e_labels.view(-1))
-        # losses['mlm_v'] = masked_lm_loss.detach()
-        # losses['mlm'] = masked_lm_loss * hparams['lambda_mlm']
-
-        return losses, outputs
-    
\ No newline at end of file
diff --git a/spaces/AIWaves/SOP_Generation-single/Prompt/base_Prompts.py b/spaces/AIWaves/SOP_Generation-single/Prompt/base_Prompts.py
deleted file mode 100644
index 5005b3e4ef61effe011430f472570c4832a34320..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/SOP_Generation-single/Prompt/base_Prompts.py
+++ /dev/null
@@ -1,84 +0,0 @@
-
-# SOP========================================================================================================
-# "environment_prompt"
-# current_state , self(sop)
-Get_environment_prompt = "f\"Here are the description of current scenario:{self.current_state.environment_prompt};\\n\""
-
-
-# sop.transit
-#================================================================
-Transit_system_prompt = "f\"{environment_prompt};\\n{judge_system_prompt}\\n\"";
-
-# transit chat message
-# "environment_prompt" is get from "Get_environment_prompt" ; "chat_history_message" if from Memory
-Transit_message = "f\"{environment_summary};\\n Here is the The chat history:\\n {chat_history_message};\\nHere is the last query you especially need to pay attention:\\n{query};\\n Here is the relevant conversation: \\n{relevant_history} \\n\\n\""
-
-
-Transit_last_prompt = "f\"{judge_last_prompt}\""
-#sop.transit================================================================
-
-# sop.call
-#================================================================
-# help controller to determine the next role to speak.(the {} is agent role) call_prompt + allocate_component
-Allocate_component = "f\"If it's currently supposed to be speaking for {role}, then output {role}.\\n\""
-
-# environment_prompt is get from "Get_environment_prompt" ; "chat_history_message" if from Memory
-Call_system_prompt = "f\"{environment_prompt};\\n{call_system_prompt};\\n{allocate_prompt}.\\n\""
-
-#
-Call_last_prompt = "f\"Here is the last query you especially need to pay attention:\\n{query};\\n Here is the the relevant conversation :\\n{relevant_history};\\nNow please choose the person to speak according to the following rules :{allocate_prompt};\\nNote: The person whose turn it is now cannot be the same as the person who spoke last time, so {last_name} cannot be output\\n.\""
-
-Call_message = "f\"Here is the chat history:\\n{chat_history_message};\\nHere is the name of the person who last speak: {last_name}.\\n \""
-#sop.call================================================================
-# SOP========================================================================================================
-
-
-
-
-
-# Memory========================================================================================================
-Single_message = "f\"role: {role} \\n speak content : {content}; \""
-
-Chat_total_message = "f\"{{{chat_history}}}\""
-# Memory========================================================================================================
-
-
-
-
-
-# Environment========================================================================================================
-Default_environment_summary_system_prompt = "\"\\nYour task is to summarize the historical dialogue records according to the current scene, and summarize the most important information\""
-
-Default_environment_summary_last_prompt = "\"Please make a summary based on the historical chat records, the output format is history summary: \{your summary content\} \""
-
-Environment_summary_memory = "f\"Here is the information you need to know:\\n\\n\
-     Here is the summary of the previous dialogue history:\\n{summary}.\\n\
-     Here is the latest conversation record:\\n {chat_history},\\n\
-     Here is the relevant chat history you may need:{relevant_history}.\\n\""
-
-Environment_summary_system_prompt = "f\"{environment_prompt};\\n{current_memory};\\n{summary_system_prompt};\\n\""
-
-
-# observe
-Agent_observe_relevant_memory = "f\"\\n{relevant_memory}. \\n\""
\\n\"" - - -Agent_observe_memory = "f\"Here's what you need to know(Remember, this is just information, Try not to repeat what's inside):\\nHere is the relevant chat history you may need:{relevant_memory};\\n\ -Here is the previous summary of chat history :\\n{agent.short_term_memory}.\\n\ -Here is the relevant memory :\\n{agent.relevant_memory}.\\n\ -Here is the new chat history:\\n {conversations};\\n\ - \"" -# Environment======================================================================================================== - - - - -# Agent======================================================================================================== -Agent_summary_system_prompt = "f\"{summary_prompt};\\n Here is the past summary:{self.short_term_memory};\\nHere is the new chat_history:\\n{conversations};\\nPlease summary Please summarize based on the above information;\\n\"" - -Agent_last_prompt = "f\"{last_prompt};Please continue the talk based on your known information;Remember that you just represent {name}, do not speak for others,just speak as normal.\"" - -Agent_system_prompt = "f\"{system_prompt},\"" -# Agent======================================================================================================== diff --git a/spaces/Abhilashvj/planogram-compliance/utils/flask_rest_api/example_request.py b/spaces/Abhilashvj/planogram-compliance/utils/flask_rest_api/example_request.py deleted file mode 100644 index 773ad893296750992789a77a59e0f5ad657d0e35..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/utils/flask_rest_api/example_request.py +++ /dev/null @@ -1,19 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Perform test request -""" - -import pprint - -import requests - -DETECTION_URL = "http://localhost:5000/v1/object-detection/yolov5s" -IMAGE = "zidane.jpg" - -# Read image -with open(IMAGE, "rb") as f: - image_data = f.read() - -response = requests.post(DETECTION_URL, files={"image": image_data}).json() - -pprint.pprint(response) diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Factory.d.ts deleted file mode 100644 index 0c2572b6395e340f4577395e5870cca3f5ea11c5..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Factory.d.ts +++ /dev/null @@ -1,6 +0,0 @@ -import Rings from './Rings'; -import Base from '../base/Base'; - -export default function Factory( - config?: Base.IConfig -): Rings; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/Factory.js deleted file mode 100644 index f2d9958b7d078f20beb6e9022c99ae49b21da8ec..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import PerspectiveCard from './PerspectiveCard.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('perspectiveCard', function (config) { - var gameObject = new PerspectiveCard(this.scene, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.PerspectiveCard', PerspectiveCard); - -export default PerspectiveCard; \ No 
newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/rotate/Rotate.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/rotate/Rotate.js deleted file mode 100644 index 2f6db8ed15730f46d687df010daf08dc3a6a867d..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/rotate/Rotate.js +++ /dev/null @@ -1,2 +0,0 @@ -import { Rotate } from '../../../plugins/gestures.js'; -export default Rotate; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/tabpages/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/tabpages/Factory.d.ts deleted file mode 100644 index 78081442c308c5d5bc640052efba504bd3f3b721..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/tabpages/Factory.d.ts +++ /dev/null @@ -1,5 +0,0 @@ -import TabPages from './TabPages'; - -export default function ( - config?: TabPages.IConfig -): TabPages; \ No newline at end of file diff --git a/spaces/AirtistDesign/stablediffusionapi-rev-animated/app.py b/spaces/AirtistDesign/stablediffusionapi-rev-animated/app.py deleted file mode 100644 index 677247e899cedc240b7d420722fc808f956d98dc..0000000000000000000000000000000000000000 --- a/spaces/AirtistDesign/stablediffusionapi-rev-animated/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stablediffusionapi/rev-animated").launch() \ No newline at end of file diff --git a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/crazy_utils.py b/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/crazy_utils.py deleted file mode 100644 index 4e0eba499e6f2fa94b1a962421b3c4bfef7a2f26..0000000000000000000000000000000000000000 --- a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/crazy_utils.py +++ /dev/null @@ -1,566 +0,0 @@ -import traceback -from toolbox import update_ui, get_conf - -def input_clipping(inputs, history, max_token_limit): - import numpy as np - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - - mode = 'input-and-history' - # 当 输入部分的token占比 小于 全文的一半时,只裁剪历史 - input_token_num = get_token_num(inputs) - if input_token_num < max_token_limit//2: - mode = 'only-history' - max_token_limit = max_token_limit - input_token_num - - everything = [inputs] if mode == 'input-and-history' else [''] - everything.extend(history) - n_token = get_token_num('\n'.join(everything)) - everything_token = [get_token_num(e) for e in everything] - delta = max(everything_token) // 16 # 截断时的颗粒度 - - while n_token > max_token_limit: - where = np.argmax(everything_token) - encoded = enc.encode(everything[where], disallowed_special=()) - clipped_encoded = encoded[:len(encoded)-delta] - everything[where] = enc.decode(clipped_encoded)[:-1] # -1 to remove the may-be illegal char - everything_token[where] = get_token_num(everything[where]) - n_token = get_token_num('\n'.join(everything)) - - if mode == 'input-and-history': - inputs = everything[0] - else: - pass - history = everything[1:] - return inputs, history - -def request_gpt_model_in_new_thread_with_ui_alive( - inputs, inputs_show_user, llm_kwargs, - chatbot, history, sys_prompt, refresh_interval=0.2, - handle_token_exceed=True, - retry_times_at_unknown_error=2, - ): - """ - Request GPT model,请求GPT模型同时维持用户界面活跃。 - - 输入参数 Args 
(以_array结尾的输入变量都是列表,列表长度为子任务的数量,执行时,会把列表拆解,放到每个子线程中分别执行): - inputs (string): List of inputs (输入) - inputs_show_user (string): List of inputs to show user(展现在报告中的输入,借助此参数,在汇总报告中隐藏啰嗦的真实输入,增强报告的可读性) - top_p (float): Top p value for sampling from model distribution (GPT参数,浮点数) - temperature (float): Temperature value for sampling from model distribution(GPT参数,浮点数) - chatbot: chatbot inputs and outputs (用户界面对话窗口句柄,用于数据流可视化) - history (list): List of chat history (历史,对话历史列表) - sys_prompt (string): List of system prompts (系统输入,列表,用于输入给GPT的前提提示,比如你是翻译官怎样怎样) - refresh_interval (float, optional): Refresh interval for UI (default: 0.2) (刷新时间间隔频率,建议低于1,不可高于3,仅仅服务于视觉效果) - handle_token_exceed:是否自动处理token溢出的情况,如果选择自动处理,则会在溢出时暴力截断,默认开启 - retry_times_at_unknown_error:失败时的重试次数 - - 输出 Returns: - future: 输出,GPT返回的结果 - """ - import time - from concurrent.futures import ThreadPoolExecutor - from request_llm.bridge_all import predict_no_ui_long_connection - # 用户反馈 - chatbot.append([inputs_show_user, ""]) - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - executor = ThreadPoolExecutor(max_workers=16) - mutable = ["", time.time(), ""] - def _req_gpt(inputs, history, sys_prompt): - retry_op = retry_times_at_unknown_error - exceeded_cnt = 0 - while True: - # watchdog error - if len(mutable) >= 2 and (time.time()-mutable[1]) > 5: - raise RuntimeError("检测到程序终止。") - try: - # 【第一种情况】:顺利完成 - result = predict_no_ui_long_connection( - inputs=inputs, llm_kwargs=llm_kwargs, - history=history, sys_prompt=sys_prompt, observe_window=mutable) - return result - except ConnectionAbortedError as token_exceeded_error: - # 【第二种情况】:Token溢出 - if handle_token_exceed: - exceeded_cnt += 1 - # 【选择处理】 尝试计算比例,尽可能多地保留文本 - from toolbox import get_reduce_token_percent - p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error)) - MAX_TOKEN = 4096 - EXCEED_ALLO = 512 + 512 * exceeded_cnt - inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO) - mutable[0] += f'[Local Message] 警告,文本过长将进行截断,Token溢出数:{n_exceed}。\n\n' - continue # 返回重试 - else: - # 【选择放弃】 - tb_str = '```\n' + traceback.format_exc() + '```' - mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - return mutable[0] # 放弃 - except: - # 【第三种情况】:其他错误:重试几次 - tb_str = '```\n' + traceback.format_exc() + '```' - print(tb_str) - mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - if retry_op > 0: - retry_op -= 1 - mutable[0] += f"[Local Message] 重试中,请稍等 {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}:\n\n" - if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str): - time.sleep(30) - time.sleep(5) - continue # 返回重试 - else: - time.sleep(5) - return mutable[0] # 放弃 - - # 提交任务 - future = executor.submit(_req_gpt, inputs, history, sys_prompt) - while True: - # yield一次以刷新前端页面 - time.sleep(refresh_interval) - # “喂狗”(看门狗) - mutable[1] = time.time() - if future.done(): - break - chatbot[-1] = [chatbot[-1][0], mutable[0]] - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - - final_result = future.result() - chatbot[-1] = [chatbot[-1][0], final_result] - yield from update_ui(chatbot=chatbot, history=[]) # 如果最后成功了,则删除报错信息 - return final_result - - -def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array, inputs_show_user_array, llm_kwargs, - chatbot, history_array, sys_prompt_array, - refresh_interval=0.2, max_workers=-1, scroller_max_len=30, - handle_token_exceed=True, show_user_at_complete=False, - 
retry_times_at_unknown_error=2, - ): - """ - Request GPT model using multiple threads with UI and high efficiency - 请求GPT模型的[多线程]版。 - 具备以下功能: - 实时在UI上反馈远程数据流 - 使用线程池,可调节线程池的大小避免openai的流量限制错误 - 处理中途中止的情况 - 网络等出问题时,会把traceback和已经接收的数据转入输出 - - 输入参数 Args (以_array结尾的输入变量都是列表,列表长度为子任务的数量,执行时,会把列表拆解,放到每个子线程中分别执行): - inputs_array (list): List of inputs (每个子任务的输入) - inputs_show_user_array (list): List of inputs to show user(每个子任务展现在报告中的输入,借助此参数,在汇总报告中隐藏啰嗦的真实输入,增强报告的可读性) - llm_kwargs: llm_kwargs参数 - chatbot: chatbot (用户界面对话窗口句柄,用于数据流可视化) - history_array (list): List of chat history (历史对话输入,双层列表,第一层列表是子任务分解,第二层列表是对话历史) - sys_prompt_array (list): List of system prompts (系统输入,列表,用于输入给GPT的前提提示,比如你是翻译官怎样怎样) - refresh_interval (float, optional): Refresh interval for UI (default: 0.2) (刷新时间间隔频率,建议低于1,不可高于3,仅仅服务于视觉效果) - max_workers (int, optional): Maximum number of threads (default: see config.py) (最大线程数,如果子任务非常多,需要用此选项防止高频地请求openai导致错误) - scroller_max_len (int, optional): Maximum length for scroller (default: 30)(数据流的显示最后收到的多少个字符,仅仅服务于视觉效果) - handle_token_exceed (bool, optional): (是否在输入过长时,自动缩减文本) - handle_token_exceed:是否自动处理token溢出的情况,如果选择自动处理,则会在溢出时暴力截断,默认开启 - show_user_at_complete (bool, optional): (在结束时,把完整输入-输出结果显示在聊天框) - retry_times_at_unknown_error:子任务失败时的重试次数 - - 输出 Returns: - list: List of GPT model responses (每个子任务的输出汇总,如果某个子任务出错,response中会携带traceback报错信息,方便调试和定位问题。) - """ - import time, random - from concurrent.futures import ThreadPoolExecutor - from request_llm.bridge_all import predict_no_ui_long_connection - assert len(inputs_array) == len(history_array) - assert len(inputs_array) == len(sys_prompt_array) - if max_workers == -1: # 读取配置文件 - try: max_workers, = get_conf('DEFAULT_WORKER_NUM') - except: max_workers = 8 - if max_workers <= 0 or max_workers >= 20: max_workers = 8 - # 屏蔽掉 chatglm的多线程,可能会导致严重卡顿 - if not (llm_kwargs['llm_model'].startswith('gpt-') or llm_kwargs['llm_model'].startswith('api2d-')): - max_workers = 1 - - executor = ThreadPoolExecutor(max_workers=max_workers) - n_frag = len(inputs_array) - # 用户反馈 - chatbot.append(["请开始多线程操作。", ""]) - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - # 跨线程传递 - mutable = [["", time.time(), "等待中"] for _ in range(n_frag)] - - # 子线程任务 - def _req_gpt(index, inputs, history, sys_prompt): - gpt_say = "" - retry_op = retry_times_at_unknown_error - exceeded_cnt = 0 - mutable[index][2] = "执行中" - while True: - # watchdog error - if len(mutable[index]) >= 2 and (time.time()-mutable[index][1]) > 5: - raise RuntimeError("检测到程序终止。") - try: - # 【第一种情况】:顺利完成 - # time.sleep(10); raise RuntimeError("测试") - gpt_say = predict_no_ui_long_connection( - inputs=inputs, llm_kwargs=llm_kwargs, history=history, - sys_prompt=sys_prompt, observe_window=mutable[index], console_slience=True - ) - mutable[index][2] = "已成功" - return gpt_say - except ConnectionAbortedError as token_exceeded_error: - # 【第二种情况】:Token溢出, - if handle_token_exceed: - exceeded_cnt += 1 - # 【选择处理】 尝试计算比例,尽可能多地保留文本 - from toolbox import get_reduce_token_percent - p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error)) - MAX_TOKEN = 4096 - EXCEED_ALLO = 512 + 512 * exceeded_cnt - inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO) - gpt_say += f'[Local Message] 警告,文本过长将进行截断,Token溢出数:{n_exceed}。\n\n' - mutable[index][2] = f"截断重试" - continue # 返回重试 - else: - # 【选择放弃】 - tb_str = '```\n' + traceback.format_exc() + '```' - gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - if len(mutable[index][0]) > 0: 
gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0] - mutable[index][2] = "输入过长已放弃" - return gpt_say # 放弃 - except: - # 【第三种情况】:其他错误 - tb_str = '```\n' + traceback.format_exc() + '```' - print(tb_str) - gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0] - if retry_op > 0: - retry_op -= 1 - wait = random.randint(5, 20) - if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str): - wait = wait * 3 - fail_info = "OpenAI绑定信用卡可解除频率限制 " - else: - fail_info = "" - # 也许等待十几秒后,情况会好转 - for i in range(wait): - mutable[index][2] = f"{fail_info}等待重试 {wait-i}"; time.sleep(1) - # 开始重试 - mutable[index][2] = f"重试中 {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}" - continue # 返回重试 - else: - mutable[index][2] = "已失败" - wait = 5 - time.sleep(5) - return gpt_say # 放弃 - - # 异步任务开始 - futures = [executor.submit(_req_gpt, index, inputs, history, sys_prompt) for index, inputs, history, sys_prompt in zip( - range(len(inputs_array)), inputs_array, history_array, sys_prompt_array)] - cnt = 0 - while True: - # yield一次以刷新前端页面 - time.sleep(refresh_interval) - cnt += 1 - worker_done = [h.done() for h in futures] - if all(worker_done): - executor.shutdown() - break - # 更好的UI视觉效果 - observe_win = [] - # 每个线程都要“喂狗”(看门狗) - for thread_index, _ in enumerate(worker_done): - mutable[thread_index][1] = time.time() - # 在前端打印些好玩的东西 - for thread_index, _ in enumerate(worker_done): - print_something_really_funny = "[ ...`"+mutable[thread_index][0][-scroller_max_len:].\ - replace('\n', '').replace('```', '...').replace( - ' ', '.').replace('
    ', '.....').replace('$', '.')+"`... ]" - observe_win.append(print_something_really_funny) - # 在前端打印些好玩的东西 - stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n' - if not done else f'`{mutable[thread_index][2]}`\n\n' - for thread_index, done, obs in zip(range(len(worker_done)), worker_done, observe_win)]) - # 在前端打印些好玩的东西 - chatbot[-1] = [chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt % 10+1))] - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - - # 异步任务结束 - gpt_response_collection = [] - for inputs_show_user, f in zip(inputs_show_user_array, futures): - gpt_res = f.result() - gpt_response_collection.extend([inputs_show_user, gpt_res]) - - # 是否在结束时,在界面上显示结果 - if show_user_at_complete: - for inputs_show_user, f in zip(inputs_show_user_array, futures): - gpt_res = f.result() - chatbot.append([inputs_show_user, gpt_res]) - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - time.sleep(0.3) - return gpt_response_collection - - -def breakdown_txt_to_satisfy_token_limit(txt, get_token_fn, limit): - def cut(txt_tocut, must_break_at_empty_line): # 递归 - if get_token_fn(txt_tocut) <= limit: - return [txt_tocut] - else: - lines = txt_tocut.split('\n') - estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines) - estimated_line_cut = int(estimated_line_cut) - for cnt in reversed(range(estimated_line_cut)): - if must_break_at_empty_line: - if lines[cnt] != "": - continue - print(cnt) - prev = "\n".join(lines[:cnt]) - post = "\n".join(lines[cnt:]) - if get_token_fn(prev) < limit: - break - if cnt == 0: - raise RuntimeError("存在一行极长的文本!") - # print(len(post)) - # 列表递归接龙 - result = [prev] - result.extend(cut(post, must_break_at_empty_line)) - return result - try: - return cut(txt, must_break_at_empty_line=True) - except RuntimeError: - return cut(txt, must_break_at_empty_line=False) - - -def force_breakdown(txt, limit, get_token_fn): - """ - 当无法用标点、空行分割时,我们用最暴力的方法切割 - """ - for i in reversed(range(len(txt))): - if get_token_fn(txt[:i]) < limit: - return txt[:i], txt[i:] - return "Tiktoken未知错误", "Tiktoken未知错误" - -def breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn, limit): - # 递归 - def cut(txt_tocut, must_break_at_empty_line, break_anyway=False): - if get_token_fn(txt_tocut) <= limit: - return [txt_tocut] - else: - lines = txt_tocut.split('\n') - estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines) - estimated_line_cut = int(estimated_line_cut) - cnt = 0 - for cnt in reversed(range(estimated_line_cut)): - if must_break_at_empty_line: - if lines[cnt] != "": - continue - prev = "\n".join(lines[:cnt]) - post = "\n".join(lines[cnt:]) - if get_token_fn(prev) < limit: - break - if cnt == 0: - if break_anyway: - prev, post = force_breakdown(txt_tocut, limit, get_token_fn) - else: - raise RuntimeError(f"存在一行极长的文本!{txt_tocut}") - # print(len(post)) - # 列表递归接龙 - result = [prev] - result.extend(cut(post, must_break_at_empty_line, break_anyway=break_anyway)) - return result - try: - # 第1次尝试,将双空行(\n\n)作为切分点 - return cut(txt, must_break_at_empty_line=True) - except RuntimeError: - try: - # 第2次尝试,将单空行(\n)作为切分点 - return cut(txt, must_break_at_empty_line=False) - except RuntimeError: - try: - # 第3次尝试,将英文句号(.)作为切分点 - res = cut(txt.replace('.', '。\n'), must_break_at_empty_line=False) # 这个中文的句号是故意的,作为一个标识而存在 - return [r.replace('。\n', '.') for r in res] - except RuntimeError as e: - try: - # 第4次尝试,将中文句号(。)作为切分点 - res = cut(txt.replace('。', '。。\n'), must_break_at_empty_line=False) - return [r.replace('。。\n', '。') for r in res] - except 
RuntimeError as e: - # 第5次尝试,没办法了,随便切一下敷衍吧 - return cut(txt, must_break_at_empty_line=False, break_anyway=True) - - - -def read_and_clean_pdf_text(fp): - """ - 这个函数用于分割pdf,用了很多trick,逻辑较乱,效果奇好 - - **输入参数说明** - - `fp`:需要读取和清理文本的pdf文件路径 - - **输出参数说明** - - `meta_txt`:清理后的文本内容字符串 - - `page_one_meta`:第一页清理后的文本内容列表 - - **函数功能** - 读取pdf文件并清理其中的文本内容,清理规则包括: - - 提取所有块元的文本信息,并合并为一个字符串 - - 去除短块(字符数小于100)并替换为回车符 - - 清理多余的空行 - - 合并小写字母开头的段落块并替换为空格 - - 清除重复的换行 - - 将每个换行符替换为两个换行符,使每个段落之间有两个换行符分隔 - """ - import fitz, copy - import re - import numpy as np - from colorful import print亮黄, print亮绿 - fc = 0 # Index 0 文本 - fs = 1 # Index 1 字体 - fb = 2 # Index 2 框框 - REMOVE_FOOT_NOTE = True # 是否丢弃掉 不是正文的内容 (比正文字体小,如参考文献、脚注、图注等) - REMOVE_FOOT_FFSIZE_PERCENT = 0.95 # 小于正文的?时,判定为不是正文(有些文章的正文部分字体大小不是100%统一的,有肉眼不可见的小变化) - def primary_ffsize(l): - """ - 提取文本块主字体 - """ - fsize_statiscs = {} - for wtf in l['spans']: - if wtf['size'] not in fsize_statiscs: fsize_statiscs[wtf['size']] = 0 - fsize_statiscs[wtf['size']] += len(wtf['text']) - return max(fsize_statiscs, key=fsize_statiscs.get) - - def ffsize_same(a,b): - """ - 提取字体大小是否近似相等 - """ - return abs((a-b)/max(a,b)) < 0.02 - - with fitz.open(fp) as doc: - meta_txt = [] - meta_font = [] - - meta_line = [] - meta_span = [] - ############################## <第 1 步,搜集初始信息> ################################## - for index, page in enumerate(doc): - # file_content += page.get_text() - text_areas = page.get_text("dict") # 获取页面上的文本信息 - for t in text_areas['blocks']: - if 'lines' in t: - pf = 998 - for l in t['lines']: - txt_line = "".join([wtf['text'] for wtf in l['spans']]) - if len(txt_line) == 0: continue - pf = primary_ffsize(l) - meta_line.append([txt_line, pf, l['bbox'], l]) - for wtf in l['spans']: # for l in t['lines']: - meta_span.append([wtf['text'], wtf['size'], len(wtf['text'])]) - # meta_line.append(["NEW_BLOCK", pf]) - # 块元提取 for each word segment with in line for each line cross-line words for each block - meta_txt.extend([" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace( - '- ', '') for t in text_areas['blocks'] if 'lines' in t]) - meta_font.extend([np.mean([np.mean([wtf['size'] for wtf in l['spans']]) - for l in t['lines']]) for t in text_areas['blocks'] if 'lines' in t]) - if index == 0: - page_one_meta = [" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace( - '- ', '') for t in text_areas['blocks'] if 'lines' in t] - - ############################## <第 2 步,获取正文主字体> ################################## - fsize_statiscs = {} - for span in meta_span: - if span[1] not in fsize_statiscs: fsize_statiscs[span[1]] = 0 - fsize_statiscs[span[1]] += span[2] - main_fsize = max(fsize_statiscs, key=fsize_statiscs.get) - if REMOVE_FOOT_NOTE: - give_up_fize_threshold = main_fsize * REMOVE_FOOT_FFSIZE_PERCENT - - ############################## <第 3 步,切分和重新整合> ################################## - mega_sec = [] - sec = [] - for index, line in enumerate(meta_line): - if index == 0: - sec.append(line[fc]) - continue - if REMOVE_FOOT_NOTE: - if meta_line[index][fs] <= give_up_fize_threshold: - continue - if ffsize_same(meta_line[index][fs], meta_line[index-1][fs]): - # 尝试识别段落 - if meta_line[index][fc].endswith('.') and\ - (meta_line[index-1][fc] != 'NEW_BLOCK') and \ - (meta_line[index][fb][2] - meta_line[index][fb][0]) < (meta_line[index-1][fb][2] - meta_line[index-1][fb][0]) * 0.7: - sec[-1] += line[fc] - sec[-1] += "\n\n" - else: - sec[-1] += " " - sec[-1] += line[fc] - else: - if (index+1 < len(meta_line)) and \ 
- meta_line[index][fs] > main_fsize: - # 单行 + 字体大 - mega_sec.append(copy.deepcopy(sec)) - sec = [] - sec.append("# " + line[fc]) - else: - # 尝试识别section - if meta_line[index-1][fs] > meta_line[index][fs]: - sec.append("\n" + line[fc]) - else: - sec.append(line[fc]) - mega_sec.append(copy.deepcopy(sec)) - - finals = [] - for ms in mega_sec: - final = " ".join(ms) - final = final.replace('- ', ' ') - finals.append(final) - meta_txt = finals - - ############################## <第 4 步,乱七八糟的后处理> ################################## - def 把字符太少的块清除为回车(meta_txt): - for index, block_txt in enumerate(meta_txt): - if len(block_txt) < 100: - meta_txt[index] = '\n' - return meta_txt - meta_txt = 把字符太少的块清除为回车(meta_txt) - - def 清理多余的空行(meta_txt): - for index in reversed(range(1, len(meta_txt))): - if meta_txt[index] == '\n' and meta_txt[index-1] == '\n': - meta_txt.pop(index) - return meta_txt - meta_txt = 清理多余的空行(meta_txt) - - def 合并小写开头的段落块(meta_txt): - def starts_with_lowercase_word(s): - pattern = r"^[a-z]+" - match = re.match(pattern, s) - if match: - return True - else: - return False - for _ in range(100): - for index, block_txt in enumerate(meta_txt): - if starts_with_lowercase_word(block_txt): - if meta_txt[index-1] != '\n': - meta_txt[index-1] += ' ' - else: - meta_txt[index-1] = '' - meta_txt[index-1] += meta_txt[index] - meta_txt[index] = '\n' - return meta_txt - meta_txt = 合并小写开头的段落块(meta_txt) - meta_txt = 清理多余的空行(meta_txt) - - meta_txt = '\n'.join(meta_txt) - # 清除重复的换行 - for _ in range(5): - meta_txt = meta_txt.replace('\n\n', '\n') - - # 换行 -> 双换行 - meta_txt = meta_txt.replace('\n', '\n\n') - - ############################## <第 5 步,展示分割效果> ################################## - # for f in finals: - # print亮黄(f) - # print亮绿('***************************') - - return meta_txt, page_one_meta diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion_dfq/text2images.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion_dfq/text2images.py deleted file mode 100644 index a99d727712eb44b875576443837c81a442c72a6f..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion_dfq/text2images.py +++ /dev/null @@ -1,112 +0,0 @@ -import argparse -import math -import os - -import torch -from neural_compressor.utils.pytorch import load -from PIL import Image -from transformers import CLIPTextModel, CLIPTokenizer - -from diffusers import AutoencoderKL, StableDiffusionPipeline, UNet2DConditionModel - - -def parse_args(): - parser = argparse.ArgumentParser() - parser.add_argument( - "-m", - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "-c", - "--caption", - type=str, - default="robotic cat with wings", - help="Text used to generate images.", - ) - parser.add_argument( - "-n", - "--images_num", - type=int, - default=4, - help="How much images to generate.", - ) - parser.add_argument( - "-s", - "--seed", - type=int, - default=42, - help="Seed for random process.", - ) - parser.add_argument( - "-ci", - "--cuda_id", - type=int, - default=0, - help="cuda_id.", - ) - args = parser.parse_args() - return args - - -def image_grid(imgs, rows, cols): - if not len(imgs) == rows * cols: - raise ValueError("The specified number of rows and columns are not correct.") 
- - w, h = imgs[0].size - grid = Image.new("RGB", size=(cols * w, rows * h)) - grid_w, grid_h = grid.size - - for i, img in enumerate(imgs): - grid.paste(img, box=(i % cols * w, i // cols * h)) - return grid - - -def generate_images( - pipeline, - prompt="robotic cat with wings", - guidance_scale=7.5, - num_inference_steps=50, - num_images_per_prompt=1, - seed=42, -): - generator = torch.Generator(pipeline.device).manual_seed(seed) - images = pipeline( - prompt, - guidance_scale=guidance_scale, - num_inference_steps=num_inference_steps, - generator=generator, - num_images_per_prompt=num_images_per_prompt, - ).images - _rows = int(math.sqrt(num_images_per_prompt)) - grid = image_grid(images, rows=_rows, cols=num_images_per_prompt // _rows) - return grid, images - - -args = parse_args() -# Load models and create wrapper for stable diffusion -tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") -text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") -vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae") -unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet") - -pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, text_encoder=text_encoder, vae=vae, unet=unet, tokenizer=tokenizer -) -pipeline.safety_checker = lambda images, clip_input: (images, False) -if os.path.exists(os.path.join(args.pretrained_model_name_or_path, "best_model.pt")): - unet = load(args.pretrained_model_name_or_path, model=unet) - unet.eval() - setattr(pipeline, "unet", unet) -else: - unet = unet.to(torch.device("cuda", args.cuda_id)) -pipeline = pipeline.to(unet.device) -grid, images = generate_images(pipeline, prompt=args.caption, num_images_per_prompt=args.images_num, seed=args.seed) -grid.save(os.path.join(args.pretrained_model_name_or_path, "{}.png".format("_".join(args.caption.split())))) -dirname = os.path.join(args.pretrained_model_name_or_path, "_".join(args.caption.split())) -os.makedirs(dirname, exist_ok=True) -for idx, image in enumerate(images): - image.save(os.path.join(dirname, "{}.png".format(idx + 1))) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/cgnet.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/cgnet.py deleted file mode 100644 index eff8d9458c877c5db894957e0b1b4597e40da6ab..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/cgnet.py +++ /dev/null @@ -1,35 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', eps=1e-03, requires_grad=True) -model = dict( - type='EncoderDecoder', - backbone=dict( - type='CGNet', - norm_cfg=norm_cfg, - in_channels=3, - num_channels=(32, 64, 128), - num_blocks=(3, 21), - dilations=(2, 4), - reductions=(8, 16)), - decode_head=dict( - type='FCNHead', - in_channels=256, - in_index=2, - channels=256, - num_convs=0, - concat_input=False, - dropout_ratio=0, - num_classes=19, - norm_cfg=norm_cfg, - loss_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0, - class_weight=[ - 2.5959933, 6.7415504, 3.5354059, 9.8663225, 9.690899, 9.369352, - 10.289121, 9.953208, 4.3097677, 9.490387, 7.674431, 9.396905, - 10.347791, 6.3927646, 10.226669, 10.241062, 10.280587, - 10.396974, 10.055647 - ])), - # model training and testing settings - train_cfg=dict(sampler=None), - test_cfg=dict(mode='whole')) diff --git 
a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x512_160k_ade20k.py deleted file mode 100644 index df6f36ef7c3b71ba7979aa7a1b226b3e3ebd9bb4..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './deeplabv3_r50-d8_512x512_160k_ade20k.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_160k_ade20k.py deleted file mode 100644 index 9ca7fd23cedc0567a015bd5f8641a509ead6110a..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py' -] -model = dict( - decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x512_40k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x512_40k_voc12aug.py deleted file mode 100644 index 1084a57e978195df6d45a9a00415953ddbaeeb51..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x512_40k_voc12aug.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = './fcn_hr18_512x512_40k_voc12aug.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w48', - backbone=dict( - extra=dict( - stage2=dict(num_channels=(48, 96)), - stage3=dict(num_channels=(48, 96, 192)), - stage4=dict(num_channels=(48, 96, 192, 384)))), - decode_head=dict( - in_channels=[48, 96, 192, 384], channels=sum([48, 96, 192, 384]))) diff --git a/spaces/Artrajz/vits-simple-api/vits/commons.py b/spaces/Artrajz/vits-simple-api/vits/commons.py deleted file mode 100644 index bda0a67534ac34bd02dc28b845619b2433a40df6..0000000000000000000000000000000000000000 --- a/spaces/Artrajz/vits-simple-api/vits/commons.py +++ /dev/null @@ -1,96 +0,0 @@ -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, 
segment_size) - return ret, ids_str - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/protocol.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/protocol.py deleted file mode 100644 index 12ab23713a70dda46edd300bd975b02bfb2be031..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/protocol.py +++ /dev/null @@ -1,42 +0,0 @@ -from typing import Any, cast, Set, TYPE_CHECKING -from inspect import isclass - -if TYPE_CHECKING: - from pip._vendor.rich.console import RenderableType - -_GIBBERISH = """aihwerij235234ljsdnp34ksodfipwoe234234jlskjdf""" - - -def is_renderable(check_object: Any) -> bool: - """Check if an object may be rendered by Rich.""" - return ( - isinstance(check_object, str) - or hasattr(check_object, "__rich__") - or hasattr(check_object, "__rich_console__") - ) - - -def rich_cast(renderable: object) -> "RenderableType": - """Cast an object to a renderable by calling __rich__ if present. - - Args: - renderable (object): A potentially renderable object - - Returns: - object: The result of recursively calling __rich__. 
- """ - from pip._vendor.rich.console import RenderableType - - rich_visited_set: Set[type] = set() # Prevent potential infinite loop - while hasattr(renderable, "__rich__") and not isclass(renderable): - # Detect object which claim to have all the attributes - if hasattr(renderable, _GIBBERISH): - return repr(renderable) - cast_method = getattr(renderable, "__rich__") - renderable = cast_method() - renderable_type = type(renderable) - if renderable_type in rich_visited_set: - break - rich_visited_set.add(renderable_type) - - return cast(RenderableType, renderable) diff --git a/spaces/Audio-AGI/WavJourney/scripts/start_service_and_ui.sh b/spaces/Audio-AGI/WavJourney/scripts/start_service_and_ui.sh deleted file mode 100644 index d3f8f40d9dfaca8e0f4ef97d1885515359528b62..0000000000000000000000000000000000000000 --- a/spaces/Audio-AGI/WavJourney/scripts/start_service_and_ui.sh +++ /dev/null @@ -1,2 +0,0 @@ -conda run --live-stream -n WavJourney python -u services.py 2>&1 | tee services_logs/service.out & -conda run --live-stream -n WavJourney python -u ui_client.py 2>&1 | tee services_logs/wavejourney.out \ No newline at end of file diff --git a/spaces/Awesimo/jojogan/e4e/editings/latent_editor.py b/spaces/Awesimo/jojogan/e4e/editings/latent_editor.py deleted file mode 100644 index 4bebca2f5c86f71b58fa1f30d24bfcb0da06d88f..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/editings/latent_editor.py +++ /dev/null @@ -1,45 +0,0 @@ -import torch -import sys -sys.path.append(".") -sys.path.append("..") -from editings import ganspace, sefa -from utils.common import tensor2im - - -class LatentEditor(object): - def __init__(self, stylegan_generator, is_cars=False): - self.generator = stylegan_generator - self.is_cars = is_cars # Since the cars StyleGAN output is 384x512, there is a need to crop the 512x512 output. - - def apply_ganspace(self, latent, ganspace_pca, edit_directions): - edit_latents = ganspace.edit(latent, ganspace_pca, edit_directions) - return self._latents_to_image(edit_latents) - - def apply_interfacegan(self, latent, direction, factor=1, factor_range=None): - edit_latents = [] - if factor_range is not None: # Apply a range of editing factors. for example, (-5, 5) - for f in range(*factor_range): - edit_latent = latent + f * direction - edit_latents.append(edit_latent) - edit_latents = torch.cat(edit_latents) - else: - edit_latents = latent + factor * direction - return self._latents_to_image(edit_latents) - - def apply_sefa(self, latent, indices=[2, 3, 4, 5], **kwargs): - edit_latents = sefa.edit(self.generator, latent, indices, **kwargs) - return self._latents_to_image(edit_latents) - - # Currently, in order to apply StyleFlow editings, one should run inference, - # save the latent codes and load them form the official StyleFlow repository. 
- # def apply_styleflow(self): - # pass - - def _latents_to_image(self, latents): - with torch.no_grad(): - images, _ = self.generator([latents], randomize_noise=False, input_is_latent=True) - if self.is_cars: - images = images[:, :, 64:448, :] # 512x512 -> 384x512 - horizontal_concat_image = torch.cat(list(images), 2) - final_image = tensor2im(horizontal_concat_image) - return final_image diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/structures/instances.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/structures/instances.py deleted file mode 100644 index 612e66f527397b0e940d716f4ad4f799b962954a..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/structures/instances.py +++ /dev/null @@ -1,192 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import itertools -from typing import Any, Dict, List, Tuple, Union -import torch - - -class Instances: - """ - This class represents a list of instances in an image. - It stores the attributes of instances (e.g., boxes, masks, labels, scores) as "fields". - All fields must have the same ``__len__`` which is the number of instances. - - All other (non-field) attributes of this class are considered private: - they must start with '_' and are not modifiable by a user. - - Some basic usage: - - 1. Set/get/check a field: - - .. code-block:: python - - instances.gt_boxes = Boxes(...) - print(instances.pred_masks) # a tensor of shape (N, H, W) - print('gt_masks' in instances) - - 2. ``len(instances)`` returns the number of instances - 3. Indexing: ``instances[indices]`` will apply the indexing on all the fields - and returns a new :class:`Instances`. - Typically, ``indices`` is a integer vector of indices, - or a binary mask of length ``num_instances`` - - .. code-block:: python - - category_3_detections = instances[instances.pred_classes == 3] - confident_detections = instances[instances.scores > 0.9] - """ - - def __init__(self, image_size: Tuple[int, int], **kwargs: Any): - """ - Args: - image_size (height, width): the spatial size of the image. - kwargs: fields to add to this `Instances`. - """ - self._image_size = image_size - self._fields: Dict[str, Any] = {} - for k, v in kwargs.items(): - self.set(k, v) - - @property - def image_size(self) -> Tuple[int, int]: - """ - Returns: - tuple: height, width - """ - return self._image_size - - def __setattr__(self, name: str, val: Any) -> None: - if name.startswith("_"): - super().__setattr__(name, val) - else: - self.set(name, val) - - def __getattr__(self, name: str) -> Any: - if name == "_fields" or name not in self._fields: - raise AttributeError("Cannot find field '{}' in the given Instances!".format(name)) - return self._fields[name] - - def set(self, name: str, value: Any) -> None: - """ - Set the field named `name` to `value`. - The length of `value` must be the number of instances, - and must agree with other existing fields in this object. - """ - data_len = len(value) - if len(self._fields): - assert ( - len(self) == data_len - ), "Adding a field of length {} to a Instances of length {}".format(data_len, len(self)) - self._fields[name] = value - - def has(self, name: str) -> bool: - """ - Returns: - bool: whether the field called `name` exists. - """ - return name in self._fields - - def remove(self, name: str) -> None: - """ - Remove the field called `name`. 
- """ - del self._fields[name] - - def get(self, name: str) -> Any: - """ - Returns the field called `name`. - """ - return self._fields[name] - - def get_fields(self) -> Dict[str, Any]: - """ - Returns: - dict: a dict which maps names (str) to data of the fields - - Modifying the returned dict will modify this instance. - """ - return self._fields - - # Tensor-like methods - def to(self, *args: Any, **kwargs: Any) -> "Instances": - """ - Returns: - Instances: all fields are called with a `to(device)`, if the field has this method. - """ - ret = Instances(self._image_size) - for k, v in self._fields.items(): - if hasattr(v, "to"): - v = v.to(*args, **kwargs) - ret.set(k, v) - return ret - - def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "Instances": - """ - Args: - item: an index-like object and will be used to index all the fields. - - Returns: - If `item` is a string, return the data in the corresponding field. - Otherwise, returns an `Instances` where all fields are indexed by `item`. - """ - if type(item) == int: - if item >= len(self) or item < -len(self): - raise IndexError("Instances index out of range!") - else: - item = slice(item, None, len(self)) - - ret = Instances(self._image_size) - for k, v in self._fields.items(): - ret.set(k, v[item]) - return ret - - def __len__(self) -> int: - for v in self._fields.values(): - # use __len__ because len() has to be int and is not friendly to tracing - return v.__len__() - raise NotImplementedError("Empty Instances does not support __len__!") - - def __iter__(self): - raise NotImplementedError("`Instances` object is not iterable!") - - @staticmethod - def cat(instance_lists: List["Instances"]) -> "Instances": - """ - Args: - instance_lists (list[Instances]) - - Returns: - Instances - """ - assert all(isinstance(i, Instances) for i in instance_lists) - assert len(instance_lists) > 0 - if len(instance_lists) == 1: - return instance_lists[0] - - image_size = instance_lists[0].image_size - if not isinstance(image_size, torch.Tensor): # could be a tensor in tracing - for i in instance_lists[1:]: - assert i.image_size == image_size - ret = Instances(image_size) - for k in instance_lists[0]._fields.keys(): - values = [i.get(k) for i in instance_lists] - v0 = values[0] - if isinstance(v0, torch.Tensor): - values = torch.cat(values, dim=0) - elif isinstance(v0, list): - values = list(itertools.chain(*values)) - elif hasattr(type(v0), "cat"): - values = type(v0).cat(values) - else: - raise ValueError("Unsupported type {} for concatenation".format(type(v0))) - ret.set(k, values) - return ret - - def __str__(self) -> str: - s = self.__class__.__name__ + "(" - s += "num_instances={}, ".format(len(self)) - s += "image_height={}, ".format(self._image_size[0]) - s += "image_width={}, ".format(self._image_size[1]) - s += "fields=[{}])".format(", ".join((f"{k}: {v}" for k, v in self._fields.items()))) - return s - - __repr__ = __str__ diff --git a/spaces/Benson/text-generation/Examples/3d Paint Download.md b/spaces/Benson/text-generation/Examples/3d Paint Download.md deleted file mode 100644 index efbb41bc17df491bb35b65ecd7c8c18f1794650a..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/3d Paint Download.md +++ /dev/null @@ -1,151 +0,0 @@ -
    -

    Cómo descargar y usar software de pintura 3D

    -

    Si usted está buscando una manera de dar rienda suelta a su creatividad y hacer impresionantes obras de arte en tres dimensiones, es posible que desee probar algunos de los mejores software de pintura 3D disponibles. En este artículo, le mostraremos qué es el software de pintura 3D, cómo descargarlo y cómo usarlo.

    -

    3d paint download


    Download ……… https://bltlly.com/2v6M1D



    -

    ¿Qué es el software de pintura 3D?

    -

    El software de pintura 3D es un tipo de aplicación de modelado que le permite crear, editar y renderizar objetos y escenas 3D. A diferencia del software de pintura 2D tradicional, que solo funciona en superficies planas, el software de pintura 3D le permite manipular formas en un espacio virtual y aplicarles texturas y colores realistas.

    -

    La diferencia entre la pintura 2D y 3D

    -

    La principal diferencia entre la pintura 2D y 3D es la dimensionalidad de los objetos. En la pintura en 2D, solo puedes dibujar líneas, curvas y formas en un plano. En la pintura 3D, puede crear objetos sólidos que tienen profundidad, anchura y altura. También puede girarlos, escalarlos y moverlos en un entorno 3D.

    -

    Los beneficios de la pintura 3D

    -

    Algunos de los beneficios de usar software de pintura 3D son:

    - -

    Cómo descargar software de pintura 3D

    -

    Hay muchas opciones para descargar software de pintura 3D, dependiendo de sus preferencias y necesidades. Aquí están algunas de las más populares:

    -

    Pintar 3D desde Microsoft Store

    - -
      -
    1. Escriba "paint" en el cuadro de búsqueda en la barra de tareas y seleccione "Paint" de la lista de resultados.
    2. -
    3. Haga clic en "Obtener" en la aplicación de la tienda y esperar a que se complete la instalación.
    4. -
    5. Inicie Paint 3D desde el menú Inicio o la barra de tareas.
    6. -
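If you prefer to script this install instead of clicking through the Store, the sketch below shows one possible approach using Python's standard library and the winget client. The listing name "Paint 3D" is an assumption, not confirmed, so check the search output before installing.

```python
import subprocess

# Search the Microsoft Store source for the listing; "Paint 3D" is an
# assumed name -- confirm it in the search output before installing.
subprocess.run(["winget", "search", "Paint 3D", "--source", "msstore"], check=True)

# Install from the Microsoft Store source once the listing name is confirmed.
subprocess.run(["winget", "install", "Paint 3D", "--source", "msstore"], check=True)
```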
    -

Source:

-

[Open Microsoft Paint]( 1 )

-

-

Screenshot:

-Paint 3D screenshot -

Advertisement:

-

If you want to learn more about Paint 3D and how to use it effectively, check out this online course that will teach you everything you need to know about this amazing software. You will learn how to create stunning 2D and 3D artwork, how to apply textures and effects, how to export and share your work, and much more. Click here to enroll now and get a special discount!

    -

Table:

| Pros                       | Cons                                          |
|----------------------------|-----------------------------------------------|
| Free and easy to use       | Limited features and customization            |
| Integrated with Windows 10 | Not compatible with older versions of Windows |
| Offers 2D and 3D options   | Not very advanced or professional             |

Adobe Substance 3D Painter

-

If you are looking for more advanced and professional 3D paint software, you may want to try Adobe Substance 3D Painter. This is a powerful application that lets you create realistic, detailed textures and materials for your 3D models. You can use a variety of brushes, tools, and presets, as well as import your own images or models from other sources. You can also export your work in various formats and integrate it with other Adobe products or third-party software. To download Adobe Substance 3D Painter, you need an Adobe Creative Cloud subscription. You can get a free 30-day trial or choose a plan that suits your needs. To download Adobe Substance 3D Painter from the Adobe website, follow these steps:

    -
      - -
1. Sign in with your Adobe ID, or create one if you don't have one.
2. Follow the on-screen instructions to download and install the software.
3. Launch Adobe Substance 3D Painter from the Creative Cloud app or the Start menu.
    -

Source:

-

[Adobe Substance 3D Painter]

-

Screenshot:

-Adobe Substance 3D Painter screenshot -

Advertisement:

-

If you want to master Adobe Substance 3D Painter and create amazing textures and materials for your 3D models, you should check out this online course that will teach you everything you need to know about this software. You will learn how to use the interface, brushes, tools, presets, and layers, how to import and export your work, how to integrate it with other software, and much more. Click here to enroll now and get a special discount!

    -

Table:

| Pros                                              | Cons                                          |
|---------------------------------------------------|-----------------------------------------------|
| Advanced and professional                         | Expensive and complex                         |
| Realistic and detailed                            | Requires high-end hardware and software       |
| Integrated with Adobe products and other software | Requires an Adobe Creative Cloud subscription |

Microsoft Paint 3D from FileHippo

-

If you want to download Microsoft Paint 3D without going through the Microsoft Store, you can use FileHippo, a website that offers free downloads of various programs. Microsoft Paint 3D from FileHippo is the same as the Microsoft Store version, but it does not require any registration or installation. You can simply download the executable file and run it on your computer. To download Microsoft Paint 3D from FileHippo, follow these steps:

    -
      -
1. Go to [Microsoft Paint 3D] on FileHippo and click "Download Latest Version".
2. Select a folder where you want to save the file and wait for the download to complete.
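The download in these steps can also be scripted. Below is a minimal sketch using Python's `requests` library; the URL is a placeholder, since FileHippo's actual download link changes per release, so copy the real link from the download page.

```python
import requests

# Placeholder URL: copy the real link from the FileHippo download page,
# since the hosted file name changes with each release.
URL = "https://example.com/paint3d-setup.exe"

response = requests.get(URL, timeout=60)
response.raise_for_status()  # fail loudly on HTTP errors

# Save the installer next to the script.
with open("paint3d-setup.exe", "wb") as f:
    f.write(response.content)
```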
    -

Source:

-

[Microsoft Paint 3D]

-

Screenshot:

- Microsoft Paint 3D screenshot -

Advertisement:

-

If you want to learn more about Microsoft Paint 3D and how to use it effectively, check out this online course that will teach you everything you need to know about this amazing software. You will learn how to create stunning 2D and 3D artwork, how to apply textures and effects, how to export and share your work, and much more. Click here to enroll now and get a special discount!

    -

Table:

| Pros                                      | Cons                                          |
|-------------------------------------------|-----------------------------------------------|
| Free and easy to use                      | Limited features and customization            |
| No installation or registration required  | Not compatible with older versions of Windows |
| Offers 2D and 3D options                  | Not very advanced or professional             |

How to Use 3D Paint Software

-

Now that you have downloaded your preferred 3D paint software, you may be wondering how to use it. While each program has its own interface and features, there are some common steps you can follow to create your own 3D artwork. Here are some of them:

    -

Create a New Project

-

The first step is to create a new project or file where you will work on your 3D painting. Depending on the software, you may have to choose a template, a canvas size, a resolution, or a background color. You can also name your project and save it in a folder of your choice.

    -

Choose a 3D Object

-

The next step is to choose a 3D object to paint on. You can use one of the predefined models that come with the software, import your own model from another source, or create your own model from scratch. You can also use basic shapes such as cubes, spheres, cylinders, or cones to build your own model.

    - -

The third step is to apply textures and colors to your 3D object. You can use the brushes, tools, and presets that the software provides, or import your own images or textures from other sources. You can also adjust the size, opacity, hardness, and angle of the brushes, as well as the blending modes, layers, and masks of the textures. You can also use the color picker, the color wheel, or the color palette to choose the colors you want to use.

    -

Add Stickers and Effects

-

The fourth step is to add stickers and effects to your 3D object. Stickers are images that you can place on top of your object, such as logos, patterns, symbols, or text. Effects are filters that you can apply to your object, such as shadows, lights, reflections, or distortions. You can also use the tools and presets that the software provides, or import your own stickers and effects from other sources.

    -

Export and Share Your Work

-

The final step is to export and share your work. You can save your project as a file in various formats, such as PNG, JPG, BMP, GIF, TGA, or PSD. You can also export your project as a 3D model in formats such as OBJ, STL, FBX, or GLB. You can also share your work online or print it out.

    -

Conclusion

-

In conclusion, 3D paint software is a great way to create stunning artwork in three dimensions. You can download different types of 3D paint software depending on your preferences and needs. You can also follow some common steps to create your own 3D paintings. We hope this article has helped you learn more about 3D paint software and how to download and use it.

    -

Frequently Asked Questions

    -

What are some examples of 3D paint software?

-

Some examples of 3D paint software are Paint 3D from the Microsoft Store, Adobe Substance 3D Painter, Microsoft Paint 3D from FileHippo, Blender, ZBrush, SketchUp, Maya, and Cinema 4D.

    -

What are some of the benefits of using 3D paint software?

    - -

What are some of the challenges of using 3D paint software?

-

Some challenges of using 3D paint software are that you may need some technical skills and knowledge to use it effectively; you may need high-end hardware and software to run it smoothly; you may need an internet connection or a subscription to download or access it; and you may run into compatibility issues with other software or devices.

    -

How can I learn more about using 3D paint software?

-

You can learn more about using 3D paint software by reading tutorials and guides online; watching videos and demonstrations online; enrolling in online courses and programs; or practicing with your own projects and experiments.

    -

What are some tips and tricks for using 3D paint software?

-

Some tips and tricks for using 3D paint software are:

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Apk Moderno Mod Ops.md b/spaces/Benson/text-generation/Examples/Apk Moderno Mod Ops.md deleted file mode 100644 index 16750e8bc1426d64b8ef7b11116718006d04b4c9..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Apk Moderno Mod Ops.md +++ /dev/null @@ -1,78 +0,0 @@ - -

Modern Ops Mod APK: A Guide to Unlocking Everything

-

If you are a fan of action-packed shooter games, you may have heard of Modern Ops. It is a popular online FPS game that lets you compete with other players across various modes and maps. You can choose from a wide range of weapons, customize your character, and join a clan to team up with your friends. But what if you want to unlock everything in the game without spending money or time? That is where Modern Ops Mod APK comes in handy. In this article, we will tell you everything you need to know about this modified version of the game, including its features, benefits, installation process, gameplay tips, and more.

    -

What Is Modern Ops?

-

Modern Ops is a multiplayer first-person shooter game developed by Edkon Games GmbH. It was released in 2019 for Android and iOS devices. The game has more than 50 million downloads on the Google Play Store and has received positive reviews from users and critics alike. The game is inspired by other popular FPS games such as Call of Duty and Counter-Strike. You can play as a terrorist or a counter-terrorist and take part in exciting battles with other players from around the world. You can also create your own team and chat with your teammates using voice or text messages.

    -

    apk moderno mod ops


    DOWNLOAD →→→ https://bltlly.com/2v6JLn



    -

Features of Modern Ops

-

Some of the features that make Modern Ops an exciting and addictive game are:

    - -

Why Use Modern Ops Mod APK?

-

Modern Ops is a free-to-play game, but it also has some in-app purchases that can enhance your gaming experience. For example, you can buy premium weapons, skins, crates, boosters, and more with real money. However, not everyone can afford to spend money on these items, or they might find it too expensive or unfair. That is why some people prefer to use Modern Ops Mod APK instead. This is a modified version of the game that gives you access to unlimited resources and features. Some of the benefits of using Modern Ops Mod APK are:

    - -

How to Download and Install Modern Ops Mod APK?

-

If you are interested in downloading and installing Modern Ops Mod APK on your Android device, you need to follow a few simple steps. Before that, you should make sure that your device meets a few requirements.

    -

Requirements

    - -

Steps

-

Once you have met the requirements, you can follow these steps to download and install Modern Ops Mod APK on your device:

- Step 1: Download the mod APK file from a trusted source. You can use this link to download the latest version of Modern Ops Mod APK: [Download Modern Ops Mod APK].
- Step 2: After downloading the mod APK file, locate it on your device using a file manager app. Tap the file and select Install to start the installation process.
- Step 3: Wait for the installation to finish. You may see a warning message saying that the app is not safe or could harm your device. Ignore this message and continue with the installation.
- Step 4: Once the installation is done, launch the game from the app drawer or the home screen. You will see a pop-up message asking you to download some additional data files. Tap OK and wait for the download to finish.
- Step 5: Once the download is complete, you can enjoy playing Modern Ops Mod APK with unlimited resources and features.
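If you sideload builds like this often, the manual steps above can also be driven from a computer over adb. The sketch below is a minimal example assuming the Android platform-tools are installed, USB debugging is enabled on the device, and the file name is a placeholder for whatever you actually downloaded.

```python
import subprocess

# Placeholder path: point this at the mod APK file you actually downloaded.
APK_PATH = "modern-ops-mod.apk"

# -r reinstalls over an existing copy while keeping its data.
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
```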

How to play Modern Ops Mod APK?

Playing Modern Ops Mod APK is similar to playing the original game, but with some extra advantages. You can choose from different game modes, maps, weapons, and more. Here are some tips on how to play Modern Ops Mod APK effectively:

Game modes

You can choose from several game modes and maps, each calling for a different loadout and play style.

Tips and tricks

- Use your clan: Clan is a feature that lets you join or create a clan and play with your friends or other players in Modern Ops Mod APK. You can chat with your clan members, invite them to your squad, take part in clan wars, and earn clan points and rewards. You can also access clan-exclusive weapons, skins, and crates. A clan can help you improve your teamwork, coordination, and strategy in the game.

Pros and cons of Modern Ops Mod APK

Modern Ops Mod APK is a great way to enjoy the game with unlimited resources and features, but it also has some drawbacks you should be aware of. Here are some of the pros and cons of Modern Ops Mod APK:

Pros

You get unlimited resources, every premium item and feature unlocked, and no need to spend real money or grind for hours.

Cons

It is not an official version of the game, so it may contain malicious code; it violates the developers' terms and conditions; and playing with it can put your account and device at risk.

Conclusion

Modern Ops Mod APK lets you enjoy this popular FPS with unlimited resources and unlocked features, but it is neither safe nor legal to use, so download it only from a trusted source, back up your data, and use it at your own risk.

Frequently asked questions

Here are some of the most frequently asked questions about Modern Ops Mod APK:

1. Is Modern Ops Mod APK safe to use?

Modern Ops Mod APK is not an official version of the game, and it may contain malicious code or viruses that can harm your device or data. We therefore recommend downloading it from a trusted source and scanning it with an antivirus app before installing it. You should also back up your data and use a secondary account to play the game with this mod APK.

2. Is Modern Ops Mod APK legal to use?

Modern Ops Mod APK is not legal to use, since it violates the game developers' terms and conditions. It also infringes their intellectual property rights and undercuts their revenue streams. Using this mod APK could therefore result in legal action from the game developers or the authorities. You use this mod APK at your own risk and responsibility.

3. How do I update Modern Ops Mod APK?

To update Modern Ops Mod APK, download the latest version of the mod APK file from a trusted source and install it on your device. You should also delete the previous version of the mod APK file from your device to avoid conflicts or errors, and check that the mod APK is compatible with the latest version of the game before updating.

4. How do I uninstall Modern Ops Mod APK?

To uninstall Modern Ops Mod APK, go to Settings > Apps > Modern Ops > Uninstall and tap OK to confirm. You should also delete the mod APK file from your device storage to free up some space. If you want to play the game again, you can reinstall the original version from the Google Play Store or the App Store.

5. Can I play Modern Ops Mod APK online with other players?

Modern Ops is an online game, so you can still join matches with other players while using the modded version. Keep in mind, however, that playing online with a modified client can be detected and lead to a ban, which is one more reason to use a secondary account.
      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descarga De Microsoft Word 2016.md b/spaces/Benson/text-generation/Examples/Descarga De Microsoft Word 2016.md deleted file mode 100644 index 565a37cdb7893f9ccc1f45957bce903654c1ed27..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descarga De Microsoft Word 2016.md +++ /dev/null @@ -1,61 +0,0 @@ -
How to download Microsoft Word 2016

Microsoft Word is one of the most popular and widely used word-processing applications in the world. It lets you create, edit, format, and share documents easily and efficiently. Whether you need to write a report, a CV, a letter, or a blog post, Microsoft Word can help you get your tasks done.

Microsoft Word 2016 is the version of the application that was released in September 2015. It is part of the Microsoft Office suite, which also includes Excel, PowerPoint, Outlook, and more. Microsoft Word 2016 offers many improvements and enhancements over previous versions, such as:

- New and improved writing, editing, and formatting features
- Better integration with other Office applications and devices
- Access to online services and cloud storage such as OneDrive

If you are interested in downloading Microsoft Word 2016, you have several options to choose from. In this article, we will show you how to download Microsoft Word 2016 from different sources and what benefits you can get from using it.

Download Microsoft Word 2016 from the Microsoft website

The easiest and most reliable way to download Microsoft Word 2016 is to get it directly from the Microsoft website. You will need a Microsoft account and either a one-time Office purchase or an Office 365 subscription. Here are the steps to follow:

1. Go to www.office.com and sign in with your Microsoft account. If you don't have one, you can create one for free.
2. Select Install Office and choose the version you want. You can get Office Home & Student or Office Home & Business as a one-time purchase, or get Office 365 Personal or Office 365 Home as a monthly or annual subscription.
3. Follow the on-screen prompts to download the installer and run it to set up Microsoft Word 2016, along with the rest of Office, on your PC.

Download Microsoft Word 2016 from an offline installer

If you have a slow or unreliable Internet connection, you may prefer to download Microsoft Word 2016 using an offline installer. This is a file that contains everything needed to install Microsoft Word 2016 without an Internet connection. You will still need a Microsoft account and an Office purchase or Office 365 subscription. Here are the steps to follow:

1. Download the offline installer file from www.office.com. You will need to sign in with your account and select Other options, then check the box Download an offline installer and select the language you want.
2. Open the file and select the Microsoft Office folder. You will see a new virtual drive on your PC, such as (D:) or (E:).
3. Double-click the setup.exe file and follow the instructions to install Microsoft Word 2016 on your PC. You may need to enter your product key or sign in again with your account.

Download Microsoft Word 2016 from a third-party seller

Another option for getting Microsoft Word 2016 is to buy it from a third-party seller. This is a company or an individual that sells Microsoft Word 2016 product keys at a lower price than Microsoft. However, you need to be careful and make sure the seller is reputable and trustworthy. You should also verify that the product key is valid and not already in use by someone else. Here are the steps to follow:

1. Find a reputable third-party seller that offers Microsoft Word 2016 product keys. You can check online reviews, ratings, comments, and customer service to judge the quality of the seller.
2. Buy the product key and verify its validity. You can use a tool such as www.productkey.net to check whether the product key is genuine and not blocked by Microsoft.
3. Download Microsoft Word 2016 from the Microsoft website and activate it with the product key you bought.

Benefits of using Microsoft Word 2016

By downloading Microsoft Word 2016, you can enjoy many benefits that will boost your productivity and creativity. You can use its new and improved features, work seamlessly with other Office applications and devices, and access online services and cloud storage to keep your documents available anywhere.

Conclusion

By using Microsoft Word 2016, you can enjoy many benefits that will improve your productivity and creativity. You can use new and improved features, work with other Office applications and devices, and access online services and cloud storage. Whether you need to write a report, a CV, a letter, or a blog post, Microsoft Word 2016 can help you get your tasks done.

If you want to download Microsoft Word 2016 today, click here (link) and get started!

Frequently asked questions

Q: How much does Microsoft Word 2016 cost?

A: The cost of Microsoft Word 2016 depends on the version you choose and the source you buy it from. If you buy it from the Microsoft website, you can pay a one-time fee of $149.99 for Office Home & Student or $249.99 for Office Home & Business, or pay a subscription fee of $69.99 per year for Office 365 Personal or $99.99 per year for Office 365 Home. If you buy it from a third-party seller, you may find lower prices, but you have to be careful about the quality and validity of the product key.

Q: How do I update Microsoft Word 2016?

A: To update Microsoft Word 2016, you need an Internet connection and an Office license or Office 365 subscription. You can update it manually or automatically. To update it manually, go to File > Account > Update Options and select Update Now. To update it automatically, go to File > Account > Update Options and select Enable Updates. You will receive the latest updates and security patches for Microsoft Word 2016 and the other Office applications.

Q: How do I uninstall Microsoft Word 2016?

A: To uninstall Microsoft Word 2016, close all Office applications, go to Control Panel > Programs > Programs and Features (or Settings > Apps on Windows 10), select Microsoft Office 2016, and click Uninstall. Note that this removes the whole Office suite, not just Word.

Q: How do I recover a deleted or unsaved document in Microsoft Word 2016?

A: To recover a deleted or unsaved document in Microsoft Word 2016, you can use the AutoRecover or Document Recovery features. AutoRecover saves a copy of your document every few minutes in case of a power outage or a system crash. Document Recovery helps you recover documents that were open but not yet saved when Microsoft Word 2016 closed unexpectedly. To use these features, go to File > Open > Recover Unsaved Documents, or File > Info > Manage Document, and select the document you want to recover.

Q: How do I add a table in Microsoft Word 2016?

A: To add a table in Microsoft Word 2016, use the Insert tab on the ribbon. Click the Table button and select the number of rows and columns you want. You can also use the Draw Table tool to draw your own table, or use the Quick Tables option to choose from predefined tables. You can also convert text to a table or insert a table from Excel. To format the table, use the Table Tools tabs on the ribbon and apply different styles, colors, borders, and effects.

Q: How do I share a document in Microsoft Word 2016?

A: To share a document in Microsoft Word 2016, use the Share button in the top-right corner of the screen. You will need to save your document to OneDrive or SharePoint first. You can then invite people to view or edit your document by entering their email addresses or choosing them from your contacts. You can also copy a link to your document and paste it into an email or a message, or share your document as an attachment or as a PDF file.

      \ No newline at end of file diff --git a/spaces/BetterAPI/BetterChat_new/src/routes/settings/+server.ts b/spaces/BetterAPI/BetterChat_new/src/routes/settings/+server.ts deleted file mode 100644 index 8073a482cb1b0ae89ce1cf2b372b6939f596e935..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat_new/src/routes/settings/+server.ts +++ /dev/null @@ -1,34 +0,0 @@ -import { collections } from "$lib/server/database.js"; -import { subMinutes } from "date-fns"; -import { z } from "zod"; - -export async function PATCH({ locals, request }) { - const json = await request.json(); - - const settings = z - .object({ - shareConversationsWithModelAuthors: z.boolean().default(true), - ethicsModalAcceptedAt: z.optional(z.date({ coerce: true }).min(subMinutes(new Date(), 5))), - }) - .parse(json); - - await collections.settings.updateOne( - { - sessionId: locals.sessionId, - }, - { - $set: { - ...settings, - updatedAt: new Date(), - }, - $setOnInsert: { - createdAt: new Date(), - }, - }, - { - upsert: true, - } - ); - - return new Response(); -} diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/basic/install.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/basic/install.md deleted file mode 100644 index c01940f1399f092ab0a75e3498bad4abe658d5d9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/basic/install.md +++ /dev/null @@ -1,207 +0,0 @@ -# Installation - -This page provides basic prerequisites to run OpenVQA, including the setups of hardware, software, and datasets. - -## Hardware & Software Setup - -A machine with at least **1 GPU (>= 8GB)**, **20GB memory** and **50GB free disk space** is required. We strongly recommend to use a SSD drive to guarantee high-speed I/O. - -The following packages are required to build the project correctly. - -- [Python](https://www.python.org/downloads/) >= 3.5 -- [Cuda](https://developer.nvidia.com/cuda-toolkit) >= 9.0 and [cuDNN](https://developer.nvidia.com/cudnn) -- [PyTorch](http://pytorch.org/) >= 0.4.1 with CUDA (**PyTorch 1.x is also supported**). -- [SpaCy](https://spacy.io/) and initialize the [GloVe](https://github.com/explosion/spacy-models/releases/download/en_vectors_web_lg-2.1.0/en_vectors_web_lg-2.1.0.tar.gz) as follows: - -```bash -$ pip install -r requirements.txt -$ wget https://github.com/explosion/spacy-models/releases/download/en_vectors_web_lg-2.1.0/en_vectors_web_lg-2.1.0.tar.gz -O en_vectors_web_lg-2.1.0.tar.gz -$ pip install en_vectors_web_lg-2.1.0.tar.gz -``` - -## Dataset Setup - -The following datasets should be prepared before running the experiments. - -**Note that if you only want to run experiments on one specific dataset, you can focus on the setup for that and skip the rest.** - -### VQA-v2 - -- Image Features - -The image features are extracted using the [bottom-up-attention](https://github.com/peteanderson80/bottom-up-attention) strategy, with each image being represented as an dynamic number (from 10 to 100) of 2048-D features. We store the features for each image in a `.npz` file. You can prepare the visual features by yourself or download the extracted features from [OneDrive](https://awma1-my.sharepoint.com/:f:/g/personal/yuz_l0_tn/EsfBlbmK1QZFhCOFpr4c5HUBzUV0aH2h1McnPG1jWAxytQ?e=2BZl8O) or [BaiduYun](https://pan.baidu.com/s/1C7jIWgM3hFPv-YXJexItgw#list/path=%2F). 
The download contains three files: **train2014.tar.gz, val2014.tar.gz, and test2015.tar.gz**, corresponding to the features of the train/val/test images for *VQA-v2*, respectively.

All the image feature files are unzipped and placed in the `data/vqa/feats` folder to form the following tree structure:

```
|-- data
    |-- vqa
    |   |-- feats
    |   |   |-- train2014
    |   |   |   |-- COCO_train2014_...jpg.npz
    |   |   |   |-- ...
    |   |   |-- val2014
    |   |   |   |-- COCO_val2014_...jpg.npz
    |   |   |   |-- ...
    |   |   |-- test2015
    |   |   |   |-- COCO_test2015_...jpg.npz
    |   |   |   |-- ...
```

- QA Annotations

Download all the annotation `json` files for VQA-v2, including the [train questions](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Questions_Train_mscoco.zip), [val questions](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Questions_Val_mscoco.zip), [test questions](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Questions_Test_mscoco.zip), [train answers](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Annotations_Train_mscoco.zip), and [val answers](https://s3.amazonaws.com/cvmlp/vqa/mscoco/vqa/v2_Annotations_Val_mscoco.zip).

In addition, we use the VQA samples from Visual Genome to augment the training set. We pre-processed these samples using two rules:

1. Select the QA pairs whose corresponding images appear in the MS-COCO *train* and *val* splits;
2. Select the QA pairs whose answer appears in the processed answer list (i.e., occurs more than 8 times among all *VQA-v2* answers).

We provide the processed VG questions and annotations files; you can download them from [OneDrive](https://awma1-my.sharepoint.com/:f:/g/personal/yuz_l0_tn/EmVHVeGdck1IifPczGmXoaMBFiSvsegA6tf_PqxL3HXclw) or [BaiduYun](https://pan.baidu.com/s/1QCOtSxJGQA01DnhUg7FFtQ#list/path=%2F).

All the QA annotation files are unzipped and placed in the `data/vqa/raw` folder to form the following tree structure:

```
|-- data
    |-- vqa
    |   |-- raw
    |   |   |-- v2_OpenEnded_mscoco_train2014_questions.json
    |   |   |-- v2_OpenEnded_mscoco_val2014_questions.json
    |   |   |-- v2_OpenEnded_mscoco_test2015_questions.json
    |   |   |-- v2_OpenEnded_mscoco_test-dev2015_questions.json
    |   |   |-- v2_mscoco_train2014_annotations.json
    |   |   |-- v2_mscoco_val2014_annotations.json
    |   |   |-- VG_questions.json
    |   |   |-- VG_annotations.json
```
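Before moving on, it can save time to sanity-check one of the per-image `.npz` feature files. The sketch below is illustrative only: the file name is an arbitrary example, and the `'x'` key follows the common bottom-up-attention convention, so inspect `feat.files` to confirm the actual key names in your download.

```python
import numpy as np

# Open the features of a single image; the file name here is just an
# example -- substitute any .npz file that exists in your download.
feat = np.load('data/vqa/feats/train2014/COCO_train2014_000000000009.jpg.npz')

# List the arrays stored in the archive; key names can differ between
# feature releases, so check them before indexing.
print(feat.files)

# 'x' is the key commonly used for region features in bottom-up-attention
# releases (one 2048-D vector per detected region); adjust if needed.
if 'x' in feat.files:
    print(feat['x'].shape)  # expected on the order of (10-100, 2048)
```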
### GQA

- Image Features

Download the [spatial features](https://nlp.stanford.edu/data/gqa/spatialFeatures.zip) and [object features](https://nlp.stanford.edu/data/gqa/objectFeatures.zip) for GQA from its official website. The **Spatial Features** files include `gqa_spatial_*.h5` and `gqa_spatial_info.json`; the **Object Features** files include `gqa_objects_*.h5` and `gqa_objects_info.json`.

To make the input features consistent with those for VQA-v2, we provide a [script](https://github.com/MILVLG/openvqa/tree/master/data/gqa/gqa_feat_preproc.py) to transform the `.h5` feature files into multiple `.npz` files, with each file corresponding to one image:

```bash
$ cd data/gqa

$ unzip spatialFeatures.zip
$ python gqa_feat_preproc.py --mode=spatial --spatial_dir=./spatialFeatures --out_dir=./feats/gqa-grid
$ rm -r spatialFeatures.zip ./spatialFeatures

$ unzip objectFeatures.zip
$ python gqa_feat_preproc.py --mode=object --object_dir=./objectFeatures --out_dir=./feats/gqa-frcn
$ rm -r objectFeatures.zip ./objectFeatures
```

All the processed feature files are placed in the `data/gqa/feats` folder to form the following tree structure:

```
|-- data
    |-- gqa
    |   |-- feats
    |   |   |-- gqa-frcn
    |   |   |   |-- 1.npz
    |   |   |   |-- ...
    |   |   |-- gqa-grid
    |   |   |   |-- 1.npz
    |   |   |   |-- ...
```

- Questions and Scene Graphs

Download all the GQA [QA files](https://nlp.stanford.edu/data/gqa/questions1.2.zip) from the official site, including all the splits needed for training, validation, and testing. Download the [scene graph files](https://nlp.stanford.edu/data/gqa/sceneGraphs.zip) for the `train` and `val` splits from the official site. Download the [supporting files](https://nlp.stanford.edu/data/gqa/eval.zip) from the official site, including the `train` and `val` choices files needed for evaluation.

All the question files and scene graph files are unzipped and placed in the `data/gqa/raw` folder to form the following tree structure:

```
|-- data
    |-- gqa
    |   |-- raw
    |   |   |-- questions1.2
    |   |   |   |-- train_all_questions
    |   |   |   |   |-- train_all_questions_0.json
    |   |   |   |   |-- ...
    |   |   |   |   |-- train_all_questions_9.json
    |   |   |   |-- train_balanced_questions.json
    |   |   |   |-- val_all_questions.json
    |   |   |   |-- val_balanced_questions.json
    |   |   |   |-- testdev_all_questions.json
    |   |   |   |-- testdev_balanced_questions.json
    |   |   |   |-- test_all_questions.json
    |   |   |   |-- test_balanced_questions.json
    |   |   |   |-- challenge_all_questions.json
    |   |   |   |-- challenge_balanced_questions.json
    |   |   |   |-- submission_all_questions.json
    |   |   |-- eval
    |   |   |   |-- train_choices
    |   |   |   |   |-- train_all_questions_0.json
    |   |   |   |   |-- ...
    |   |   |   |   |-- train_all_questions_9.json
    |   |   |   |-- val_choices.json
    |   |   |-- sceneGraphs
    |   |   |   |-- train_sceneGraphs.json
    |   |   |   |-- val_sceneGraphs.json
```

### CLEVR

- Images, Questions and Scene Graphs

Download the complete [CLEVR v1.0](https://dl.fbaipublicfiles.com/clevr/CLEVR_v1.0.zip) dataset from the official site, including all the splits needed for training, validation, and testing.

All the image files, question files, and scene graph files are unzipped and placed in the `data/clevr/raw` folder to form the following tree structure:

```
|-- data
    |-- clevr
    |   |-- raw
    |   |   |-- images
    |   |   |   |-- train
    |   |   |   |   |-- CLEVR_train_000000.png
    |   |   |   |   |-- ...
    |   |   |   |   |-- CLEVR_train_069999.png
    |   |   |   |-- val
    |   |   |   |   |-- CLEVR_val_000000.png
    |   |   |   |   |-- ...
    |   |   |   |   |-- CLEVR_val_014999.png
    |   |   |   |-- test
    |   |   |   |   |-- CLEVR_test_000000.png
    |   |   |   |   |-- ...
    |   |   |   |   |-- CLEVR_test_014999.png
    |   |   |-- questions
    |   |   |   |-- CLEVR_train_questions.json
    |   |   |   |-- CLEVR_val_questions.json
    |   |   |   |-- CLEVR_test_questions.json
    |   |   |-- scenes
    |   |   |   |-- CLEVR_train_scenes.json
    |   |   |   |-- CLEVR_val_scenes.json
```

- Image Features

To make the input features consistent with those for VQA-v2, we provide a [script](https://github.com/MILVLG/openvqa/tree/master/data/clevr/clevr_extract_feat.py) to extract image features with a pre-trained ResNet-101 model, as most previous works do, and save one feature file per image:
- -```bash -$ cd data/clevr - -$ python clevr_extract_feat.py --mode=all --gpu=0 -``` - -All the processed feature files are placed in the `data/clevr/feats` folder to form the following tree structure: - -``` -|-- data - |-- clevr - | |-- feats - | | |-- train - | | | |-- 1.npz - | | | |-- ... - | | |-- val - | | | |-- 1.npz - | | | |-- ... - | | |-- test - | | | |-- 1.npz - | | | |-- ... -``` \ No newline at end of file diff --git a/spaces/CVPR/LIVE/thrust/thrust/set_operations.h b/spaces/CVPR/LIVE/thrust/thrust/set_operations.h deleted file mode 100644 index a51eaed4351e52aaf3569c986cc5153640dd15d6..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/set_operations.h +++ /dev/null @@ -1,2963 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file set_operations.h - * \brief Set theoretic operations for sorted ranges - */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ - - -/*! \addtogroup set_operations Set Operations - * \ingroup algorithms - * \{ - */ - - -/*! \p set_difference constructs a sorted range that is the set difference of the sorted - * ranges [first1, last1) and [first2, last2). The return value is the - * end of the output range. - * - * In the simplest case, \p set_difference performs the "difference" operation from set - * theory: the output range contains a copy of every element that is contained in - * [first1, last1) and not contained in [first2, last1). The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements - * that are equivalent to each other and if [first2, last2) contains \c n - * elements that are equivalent to them, the last max(m-n,0) elements from - * [first1, last1) range shall be copied to the output range. - * - * This version of \p set_difference compares elements using \c operator<. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \return The end of the output range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. 
- * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_difference to compute the - * set difference of two sets of integers sorted in ascending order using the \p thrust::host execution - * policy for parallelization: - * - * \code - * #include - * #include - * ... - * int A1[6] = {0, 1, 3, 4, 5, 6, 9}; - * int A2[5] = {1, 3, 5, 7, 9}; - * - * int result[3]; - * - * int *result_end = thrust::set_difference(thrust::host, A1, A1 + 6, A2, A2 + 5, result); - * // result is now {0, 4, 6} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_difference.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template -__host__ __device__ - OutputIterator set_difference(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result); - - -/*! \p set_difference constructs a sorted range that is the set difference of the sorted - * ranges [first1, last1) and [first2, last2). The return value is the - * end of the output range. - * - * In the simplest case, \p set_difference performs the "difference" operation from set - * theory: the output range contains a copy of every element that is contained in - * [first1, last1) and not contained in [first2, last1). The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements - * that are equivalent to each other and if [first2, last2) contains \c n - * elements that are equivalent to them, the last max(m-n,0) elements from - * [first1, last1) range shall be copied to the output range. - * - * This version of \p set_difference compares elements using \c operator<. - * - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \return The end of the output range. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. 
- * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_difference to compute the - * set difference of two sets of integers sorted in ascending order. - * - * \code - * #include - * ... - * int A1[6] = {0, 1, 3, 4, 5, 6, 9}; - * int A2[5] = {1, 3, 5, 7, 9}; - * - * int result[3]; - * - * int *result_end = thrust::set_difference(A1, A1 + 6, A2, A2 + 5, result); - * // result is now {0, 4, 6} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_difference.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template - OutputIterator set_difference(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result); - - -/*! \p set_difference constructs a sorted range that is the set difference of the sorted - * ranges [first1, last1) and [first2, last2). The return value is the - * end of the output range. - * - * In the simplest case, \p set_difference performs the "difference" operation from set - * theory: the output range contains a copy of every element that is contained in - * [first1, last1) and not contained in [first2, last1). The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements - * that are equivalent to each other and if [first2, last2) contains \c n - * elements that are equivalent to them, the last max(m-n,0) elements from - * [first1, last1) range shall be copied to the output range. - * - * This version of \p set_difference compares elements using a function object \p comp. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \param comp Comparison operator. - * \return The end of the output range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1's \c value_type is convertable to \p StrictWeakCompare's \c first_argument_type. - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2's \c value_type is convertable to \p StrictWeakCompare's \c second_argument_type. - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. 
- * \tparam OutputIterator is a model of Output Iterator. - * \tparam StrictWeakCompare is a model of Strict Weak Ordering. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_difference to compute the - * set difference of two sets of integers sorted in descending order using the \p thrust::host execution - * policy for parallelization: - * - * \code - * #include - * #include - * #include - * ... - * int A1[6] = {9, 6, 5, 4, 3, 1, 0}; - * int A2[5] = {9, 7, 5, 3, 1}; - * - * int result[3]; - * - * int *result_end = thrust::set_difference(thrust::host, A1, A1 + 6, A2, A2 + 5, result, thrust::greater()); - * // result is now {6, 4, 0} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_difference.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template -__host__ __device__ - OutputIterator set_difference(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result, - StrictWeakCompare comp); - - -/*! \p set_difference constructs a sorted range that is the set difference of the sorted - * ranges [first1, last1) and [first2, last2). The return value is the - * end of the output range. - * - * In the simplest case, \p set_difference performs the "difference" operation from set - * theory: the output range contains a copy of every element that is contained in - * [first1, last1) and not contained in [first2, last1). The general case - * is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements - * that are equivalent to each other and if [first2, last2) contains \c n - * elements that are equivalent to them, the last max(m-n,0) elements from - * [first1, last1) range shall be copied to the output range. - * - * This version of \p set_difference compares elements using a function object \p comp. - * - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \param comp Comparison operator. - * \return The end of the output range. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1's \c value_type is convertable to \p StrictWeakCompare's \c first_argument_type. - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2's \c value_type is convertable to \p StrictWeakCompare's \c second_argument_type. - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * \tparam StrictWeakCompare is a model of Strict Weak Ordering. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp. - * \pre The resulting range shall not overlap with either input range. 
- * - * The following code snippet demonstrates how to use \p set_difference to compute the - * set difference of two sets of integers sorted in descending order. - * - * \code - * #include - * #include - * ... - * int A1[6] = {9, 6, 5, 4, 3, 1, 0}; - * int A2[5] = {9, 7, 5, 3, 1}; - * - * int result[3]; - * - * int *result_end = thrust::set_difference(A1, A1 + 6, A2, A2 + 5, result, thrust::greater()); - * // result is now {6, 4, 0} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_difference.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template - OutputIterator set_difference(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result, - StrictWeakCompare comp); - - -/*! \p set_intersection constructs a sorted range that is the - * intersection of sorted ranges [first1, last1) and - * [first2, last2). The return value is the end of the - * output range. - * - * In the simplest case, \p set_intersection performs the - * "intersection" operation from set theory: the output range - * contains a copy of every element that is contained in both - * [first1, last1) and [first2, last2). The - * general case is more complicated, because the input ranges may - * contain duplicate elements. The generalization is that if a value - * appears \c m times in [first1, last1) and \c n times in - * [first2, last2) (where \c m may be zero), then it - * appears min(m,n) times in the output range. - * \p set_intersection is stable, meaning that both elements are - * copied from the first range rather than the second, and that the - * relative order of elements in the output range is the same as in - * the first input range. - * - * This version of \p set_intersection compares objects using - * \c operator<. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \return The end of the output range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<. 
- * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_intersection to compute the - * set intersection of two sets of integers sorted in ascending order using the \p thrust::host execution - * policy for parallelization: - * - * \code - * #include - * #include - * ... - * int A1[6] = {1, 3, 5, 7, 9, 11}; - * int A2[7] = {1, 1, 2, 3, 5, 8, 13}; - * - * int result[7]; - * - * int *result_end = thrust::set_intersection(thrust::host, A1, A1 + 6, A2, A2 + 7, result); - * // result is now {1, 3, 5} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_intersection.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template -__host__ __device__ - OutputIterator set_intersection(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result); - - -/*! \p set_intersection constructs a sorted range that is the - * intersection of sorted ranges [first1, last1) and - * [first2, last2). The return value is the end of the - * output range. - * - * In the simplest case, \p set_intersection performs the - * "intersection" operation from set theory: the output range - * contains a copy of every element that is contained in both - * [first1, last1) and [first2, last2). The - * general case is more complicated, because the input ranges may - * contain duplicate elements. The generalization is that if a value - * appears \c m times in [first1, last1) and \c n times in - * [first2, last2) (where \c m may be zero), then it - * appears min(m,n) times in the output range. - * \p set_intersection is stable, meaning that both elements are - * copied from the first range rather than the second, and that the - * relative order of elements in the output range is the same as in - * the first input range. - * - * This version of \p set_intersection compares objects using - * \c operator<. - * - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \return The end of the output range. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<. - * \pre The resulting range shall not overlap with either input range. 
- * - * The following code snippet demonstrates how to use \p set_intersection to compute the - * set intersection of two sets of integers sorted in ascending order. - * - * \code - * #include - * ... - * int A1[6] = {1, 3, 5, 7, 9, 11}; - * int A2[7] = {1, 1, 2, 3, 5, 8, 13}; - * - * int result[7]; - * - * int *result_end = thrust::set_intersection(A1, A1 + 6, A2, A2 + 7, result); - * // result is now {1, 3, 5} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_intersection.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template - OutputIterator set_intersection(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result); - - -/*! \p set_intersection constructs a sorted range that is the - * intersection of sorted ranges [first1, last1) and - * [first2, last2). The return value is the end of the - * output range. - * - * In the simplest case, \p set_intersection performs the - * "intersection" operation from set theory: the output range - * contains a copy of every element that is contained in both - * [first1, last1) and [first2, last2). The - * general case is more complicated, because the input ranges may - * contain duplicate elements. The generalization is that if a value - * appears \c m times in [first1, last1) and \c n times in - * [first2, last2) (where \c m may be zero), then it - * appears min(m,n) times in the output range. - * \p set_intersection is stable, meaning that both elements are - * copied from the first range rather than the second, and that the - * relative order of elements in the output range is the same as in - * the first input range. - * - * This version of \p set_intersection compares elements using a function object \p comp. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \param comp Comparison operator. - * \return The end of the output range. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp. - * \pre The resulting range shall not overlap with either input range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. 
- * \tparam OutputIterator is a model of Output Iterator. - * - * The following code snippet demonstrates how to use \p set_intersection to compute - * the set intersection of sets of integers sorted in descending order using the \p thrust::host execution - * policy for parallelization: - * - * \code - * #include - * #include - * ... - * int A1[6] = {11, 9, 7, 5, 3, 1}; - * int A2[7] = {13, 8, 5, 3, 2, 1, 1}; - * - * int result[3]; - * - * int *result_end = thrust::set_intersection(thrust::host, A1, A1 + 6, A2, A2 + 7, result, thrust::greater()); - * // result is now {5, 3, 1} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_intersection.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template -__host__ __device__ - OutputIterator set_intersection(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result, - StrictWeakCompare comp); - - -/*! \p set_intersection constructs a sorted range that is the - * intersection of sorted ranges [first1, last1) and - * [first2, last2). The return value is the end of the - * output range. - * - * In the simplest case, \p set_intersection performs the - * "intersection" operation from set theory: the output range - * contains a copy of every element that is contained in both - * [first1, last1) and [first2, last2). The - * general case is more complicated, because the input ranges may - * contain duplicate elements. The generalization is that if a value - * appears \c m times in [first1, last1) and \c n times in - * [first2, last2) (where \c m may be zero), then it - * appears min(m,n) times in the output range. - * \p set_intersection is stable, meaning that both elements are - * copied from the first range rather than the second, and that the - * relative order of elements in the output range is the same as in - * the first input range. - * - * This version of \p set_intersection compares elements using a function object \p comp. - * - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \param comp Comparison operator. - * \return The end of the output range. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp. - * \pre The resulting range shall not overlap with either input range. - * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. 
- * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * The following code snippet demonstrates how to use \p set_intersection to compute - * the set intersection of sets of integers sorted in descending order. - * - * \code - * #include - * ... - * int A1[6] = {11, 9, 7, 5, 3, 1}; - * int A2[7] = {13, 8, 5, 3, 2, 1, 1}; - * - * int result[3]; - * - * int *result_end = thrust::set_intersection(A1, A1 + 6, A2, A2 + 7, result, thrust::greater()); - * // result is now {5, 3, 1} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_intersection.html - * \see \p includes - * \see \p set_union - * \see \p set_intersection - * \see \p set_symmetric_difference - * \see \p sort - * \see \p is_sorted - */ -template - OutputIterator set_intersection(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result, - StrictWeakCompare comp); - - -/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric - * difference of the sorted ranges [first1, last1) and [first2, last2). - * The return value is the end of the output range. - * - * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation: - * it constructs the union of the two sets A - B and B - A, where A and B are the two - * input ranges. That is, the output range contains a copy of every element that is - * contained in [first1, last1) but not [first2, last1), and a copy of - * every element that is contained in [first2, last2) but not [first1, last1). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements that are - * equivalent to each other and [first2, last1) contains \c n elements that are - * equivalent to them, then |m - n| of those elements shall be copied to the output - * range: the last m - n elements from [first1, last1) if m > n, and - * the last n - m of these elements from [first2, last2) if m < n. - * - * This version of \p set_union compares elements using \c operator<. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \return The end of the output range. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. 
- * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_symmetric_difference to compute - * the symmetric difference of two sets of integers sorted in ascending order using the \p thrust::host - * execution policy for parallelization: - * - * \code - * #include - * #include - * ... - * int A1[6] = {0, 1, 2, 2, 4, 6, 7}; - * int A2[5] = {1, 1, 2, 5, 8}; - * - * int result[6]; - * - * int *result_end = thrust::set_symmetric_difference(thrust::host, A1, A1 + 6, A2, A2 + 5, result); - * // result = {0, 4, 5, 6, 7, 8} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html - * \see \p merge - * \see \p includes - * \see \p set_difference - * \see \p set_union - * \see \p set_intersection - * \see \p sort - * \see \p is_sorted - */ -template -__host__ __device__ - OutputIterator set_symmetric_difference(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result); - - -/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric - * difference of the sorted ranges [first1, last1) and [first2, last2). - * The return value is the end of the output range. - * - * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation: - * it constructs the union of the two sets A - B and B - A, where A and B are the two - * input ranges. That is, the output range contains a copy of every element that is - * contained in [first1, last1) but not [first2, last1), and a copy of - * every element that is contained in [first2, last2) but not [first1, last1). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements that are - * equivalent to each other and [first2, last1) contains \c n elements that are - * equivalent to them, then |m - n| of those elements shall be copied to the output - * range: the last m - n elements from [first1, last1) if m > n, and - * the last n - m of these elements from [first2, last2) if m < n. - * - * This version of \p set_union compares elements using \c operator<. - * - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. - * \param last2 The end of the second input range. - * \param result The beginning of the output range. - * \return The end of the output range. 
- * - * \tparam InputIterator1 is a model of Input Iterator, - * \p InputIterator1 and \p InputIterator2 have the same \c value_type, - * \p InputIterator1's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator1's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam InputIterator2 is a model of Input Iterator, - * \p InputIterator2 and \p InputIterator1 have the same \c value_type, - * \p InputIterator2's \c value_type is a model of LessThan Comparable, - * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements, - * and \p InputIterator2's \c value_type is convertable to a type in \p OutputIterator's set of \c value_types. - * \tparam OutputIterator is a model of Output Iterator. - * - * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<. - * \pre The resulting range shall not overlap with either input range. - * - * The following code snippet demonstrates how to use \p set_symmetric_difference to compute - * the symmetric difference of two sets of integers sorted in ascending order. - * - * \code - * #include - * ... - * int A1[6] = {0, 1, 2, 2, 4, 6, 7}; - * int A2[5] = {1, 1, 2, 5, 8}; - * - * int result[6]; - * - * int *result_end = thrust::set_symmetric_difference(A1, A1 + 6, A2, A2 + 5, result); - * // result = {0, 4, 5, 6, 7, 8} - * \endcode - * - * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html - * \see \p merge - * \see \p includes - * \see \p set_difference - * \see \p set_union - * \see \p set_intersection - * \see \p sort - * \see \p is_sorted - */ -template - OutputIterator set_symmetric_difference(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - InputIterator2 last2, - OutputIterator result); - - -/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric - * difference of the sorted ranges [first1, last1) and [first2, last2). - * The return value is the end of the output range. - * - * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation: - * it constructs the union of the two sets A - B and B - A, where A and B are the two - * input ranges. That is, the output range contains a copy of every element that is - * contained in [first1, last1) but not [first2, last1), and a copy of - * every element that is contained in [first2, last2) but not [first1, last1). - * The general case is more complicated, because the input ranges may contain duplicate elements. - * The generalization is that if [first1, last1) contains \c m elements that are - * equivalent to each other and [first2, last1) contains \c n elements that are - * equivalent to them, then |m - n| of those elements shall be copied to the output - * range: the last m - n elements from [first1, last1) if m > n, and - * the last n - m of these elements from [first2, last2) if m < n. - * - * This version of \p set_union compares elements using a function object \p comp. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first input range. - * \param last1 The end of the first input range. - * \param first2 The beginning of the second input range. 
-/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric
- * difference of the sorted ranges [first1, last1) and [first2, last2).
- * The return value is the end of the output range.
- *
- * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [first1, last1) but not [first2, last2), and a copy of
- * every element that is contained in [first2, last2) but not [first1, last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements that are
- * equivalent to each other and [first2, last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [first1, last1) if m > n, and
- * the last n - m of these elements from [first2, last2) if m < n.
- *
- * This version of \p set_symmetric_difference compares elements using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference to compute
- * the symmetric difference of two sets of integers sorted in descending order using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {7, 6, 4, 2, 2, 1, 0};
- * int A2[5] = {8, 5, 2, 1, 1};
- *
- * int result[8];
- *
- * int *result_end = thrust::set_symmetric_difference(thrust::host, A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result = {8, 7, 6, 5, 4, 2, 1, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_union
- * \see \p set_intersection
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
-__host__ __device__
-  OutputIterator set_symmetric_difference(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                                          InputIterator1 first1,
-                                          InputIterator1 last1,
-                                          InputIterator2 first2,
-                                          InputIterator2 last2,
-                                          OutputIterator result,
-                                          StrictWeakCompare comp);
-
-
-/*! \p set_symmetric_difference constructs a sorted range that is the set symmetric
- * difference of the sorted ranges [first1, last1) and [first2, last2).
- * The return value is the end of the output range.
- *
- * In the simplest case, \p set_symmetric_difference performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [first1, last1) but not [first2, last2), and a copy of
- * every element that is contained in [first2, last2) but not [first1, last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements that are
- * equivalent to each other and [first2, last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [first1, last1) if m > n, and
- * the last n - m of these elements from [first2, last2) if m < n.
- *
- * This version of \p set_symmetric_difference compares elements using a function object \p comp.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference to compute
- * the symmetric difference of two sets of integers sorted in descending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A1[7] = {7, 6, 4, 2, 2, 1, 0};
- * int A2[5] = {8, 5, 2, 1, 1};
- *
- * int result[8];
- *
- * int *result_end = thrust::set_symmetric_difference(A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result = {8, 7, 6, 5, 4, 2, 1, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_symmetric_difference.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_union
- * \see \p set_intersection
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
-  OutputIterator set_symmetric_difference(InputIterator1 first1,
-                                          InputIterator1 last1,
-                                          InputIterator2 first2,
-                                          InputIterator2 last2,
-                                          OutputIterator result,
-                                          StrictWeakCompare comp);
-
-
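-/* A quick way to see the |m - n| rule above in action is an input where one key
- * occurs a different number of times in each range. A minimal host-side sketch
- * (the inputs are chosen here for illustration, not taken from the overload
- * documentation):
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int a[5] = {2, 2, 2, 5, 9}; // 2 occurs m = 3 times
- * int b[3] = {2, 5, 7};       // 2 occurs n = 1 time
- *
- * int out[4];
- *
- * int *out_end = thrust::set_symmetric_difference(thrust::host, a, a + 5, b, b + 3, out);
- * // |m - n| = 2 copies of 2 survive, 5 cancels (m == n), 7 and 9 pass through:
- * // out = {2, 2, 7, 9}
- * \endcode
- */
-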
-/*! \p set_union constructs a sorted range that is the union of the sorted ranges
- * [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_union performs the "union" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1), [first2, last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * This version of \p set_union compares elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_union to compute the union of
- * two sets of integers sorted in ascending order using the \p thrust::host execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {0, 2, 4, 6, 8, 10, 12};
- * int A2[5] = {1, 3, 5, 7, 9};
- *
- * int result[12];
- *
- * int *result_end = thrust::set_union(thrust::host, A1, A1 + 7, A2, A2 + 5, result);
- * // result = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_union.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator>
-__host__ __device__
-  OutputIterator set_union(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                           InputIterator1 first1,
-                           InputIterator1 last1,
-                           InputIterator2 first2,
-                           InputIterator2 last2,
-                           OutputIterator result);
-
-
-/*! \p set_union constructs a sorted range that is the union of the sorted ranges
- * [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_union performs the "union" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1), [first2, last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * This version of \p set_union compares elements using \c operator<.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to operator<.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_union to compute the union of
- * two sets of integers sorted in ascending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A1[7] = {0, 2, 4, 6, 8, 10, 12};
- * int A2[5] = {1, 3, 5, 7, 9};
- *
- * int result[12];
- *
- * int *result_end = thrust::set_union(A1, A1 + 7, A2, A2 + 5, result);
- * // result = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_union.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator>
-  OutputIterator set_union(InputIterator1 first1,
-                           InputIterator1 last1,
-                           InputIterator2 first2,
-                           InputIterator2 last2,
-                           OutputIterator result);
-
-
-/*! \p set_union constructs a sorted range that is the union of the sorted ranges
- * [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_union performs the "union" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1), [first2, last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * This version of \p set_union compares elements using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1's \c value_type is convertible to \p StrictWeakCompare's \c first_argument_type,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2's \c value_type is convertible to \p StrictWeakCompare's \c second_argument_type,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_union to compute the union of
- * two sets of integers sorted in descending order using the \p thrust::host execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {12, 10, 8, 6, 4, 2, 0};
- * int A2[5] = {9, 7, 5, 3, 1};
- *
- * int result[12];
- *
- * int *result_end = thrust::set_union(thrust::host, A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result = {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_union.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
-__host__ __device__
-  OutputIterator set_union(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                           InputIterator1 first1,
-                           InputIterator1 last1,
-                           InputIterator2 first2,
-                           InputIterator2 last2,
-                           OutputIterator result,
-                           StrictWeakCompare comp);
-
-
-/*! \p set_union constructs a sorted range that is the union of the sorted ranges
- * [first1, last1) and [first2, last2). The return value is the
- * end of the output range.
- *
- * In the simplest case, \p set_union performs the "union" operation from set
- * theory: the output range contains a copy of every element that is contained in
- * [first1, last1), [first2, last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [first1, last1) contains \c m elements
- * that are equivalent to each other and if [first2, last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * This version of \p set_union compares elements using a function object \p comp.
- *
- * \param first1 The beginning of the first input range.
- * \param last1 The end of the first input range.
- * \param first2 The beginning of the second input range.
- * \param last2 The end of the second input range.
- * \param result The beginning of the output range.
- * \param comp Comparison operator.
- * \return The end of the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1's \c value_type is convertible to \p StrictWeakCompare's \c first_argument_type,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2's \c value_type is convertible to \p StrictWeakCompare's \c second_argument_type,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator's set of \c value_types.
- * \tparam OutputIterator is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [first1, last1) and [first2, last2) shall be sorted with respect to \p comp.
- * \pre The resulting range shall not overlap with either input range.
- *
- * The following code snippet demonstrates how to use \p set_union to compute the union of
- * two sets of integers sorted in descending order.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A1[7] = {12, 10, 8, 6, 4, 2, 0};
- * int A2[5] = {9, 7, 5, 3, 1};
- *
- * int result[12];
- *
- * int *result_end = thrust::set_union(A1, A1 + 7, A2, A2 + 5, result, thrust::greater<int>());
- * // result = {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/set_union.html
- * \see \p merge
- * \see \p includes
- * \see \p set_difference
- * \see \p set_intersection
- * \see \p set_symmetric_difference
- * \see \p sort
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename OutputIterator, typename StrictWeakCompare>
-  OutputIterator set_union(InputIterator1 first1,
-                           InputIterator1 last1,
-                           InputIterator2 first2,
-                           InputIterator2 last2,
-                           OutputIterator result,
-                           StrictWeakCompare comp);
-
-
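-/* The max(n - m, 0) rule above is easiest to see with duplicated keys. A minimal
- * host-side sketch (the inputs are chosen here for illustration, not taken from
- * the overload documentation):
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int a[4] = {1, 3, 3, 5}; // 1 occurs m = 1 time, 3 occurs twice
- * int b[4] = {1, 1, 1, 3}; // 1 occurs n = 3 times, 3 occurs once
- *
- * int out[6];
- *
- * int *out_end = thrust::set_union(thrust::host, a, a + 4, b, b + 4, out);
- * // all m copies of 1 come from a, then max(n - m, 0) = 2 more come from b;
- * // both 3s come from a and none from b:
- * // out = {1, 1, 1, 3, 3, 5}
- * \endcode
- */
-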
-/*! \p set_difference_by_key performs a key-value difference operation from set theory.
- * \p set_difference_by_key constructs a sorted range that is the difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_difference_by_key performs the "difference" operation from set
- * theory: the keys output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [keys_first1, keys_last1) shall be copied to the output range.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_difference_by_key compares key elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_difference_by_key to compute the
- * set difference of two sets of integers sorted in ascending order with their values using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {0, 1, 3, 4, 5, 6, 9};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 3, 5, 7, 9};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[3];
- * int vals_result[3];
- *
- * thrust::pair<int*,int*> end = thrust::set_difference_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 4, 6}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    set_difference_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                          InputIterator1 keys_first1,
-                          InputIterator1 keys_last1,
-                          InputIterator2 keys_first2,
-                          InputIterator2 keys_last2,
-                          InputIterator3 values_first1,
-                          InputIterator4 values_first2,
-                          OutputIterator1 keys_result,
-                          OutputIterator2 values_result);
-
-
-/*! \p set_difference_by_key performs a key-value difference operation from set theory.
- * \p set_difference_by_key constructs a sorted range that is the difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_difference_by_key performs the "difference" operation from set
- * theory: the keys output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [keys_first1, keys_last1) shall be copied to the output range.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_difference_by_key compares key elements using \c operator<.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_difference_by_key to compute the
- * set difference of two sets of integers sorted in ascending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A_keys[7] = {0, 1, 3, 4, 5, 6, 9};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 3, 5, 7, 9};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[3];
- * int vals_result[3];
- *
- * thrust::pair<int*,int*> end = thrust::set_difference_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 4, 6}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2>
-  thrust::pair<OutputIterator1,OutputIterator2>
-    set_difference_by_key(InputIterator1 keys_first1,
-                          InputIterator1 keys_last1,
-                          InputIterator2 keys_first2,
-                          InputIterator2 keys_last2,
-                          InputIterator3 values_first1,
-                          InputIterator4 values_first2,
-                          OutputIterator1 keys_result,
-                          OutputIterator2 values_result);
-
-
-/*! \p set_difference_by_key performs a key-value difference operation from set theory.
- * \p set_difference_by_key constructs a sorted range that is the difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_difference_by_key performs the "difference" operation from set
- * theory: the keys output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [keys_first1, keys_last1) shall be copied to the output range.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_difference_by_key compares key elements using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_difference_by_key to compute the
- * set difference of two sets of integers sorted in descending order with their values using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {9, 6, 5, 4, 3, 1, 0};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {9, 7, 5, 3, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[3];
- * int vals_result[3];
- *
- * thrust::pair<int*,int*> end = thrust::set_difference_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {6, 4, 0}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    set_difference_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                          InputIterator1 keys_first1,
-                          InputIterator1 keys_last1,
-                          InputIterator2 keys_first2,
-                          InputIterator2 keys_last2,
-                          InputIterator3 values_first1,
-                          InputIterator4 values_first2,
-                          OutputIterator1 keys_result,
-                          OutputIterator2 values_result,
-                          StrictWeakCompare comp);
-
-
-/*! \p set_difference_by_key performs a key-value difference operation from set theory.
- * \p set_difference_by_key constructs a sorted range that is the difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_difference_by_key performs the "difference" operation from set
- * theory: the keys output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1) and not contained in [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, the last max(m-n,0) elements from
- * [keys_first1, keys_last1) shall be copied to the output range.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_difference_by_key compares key elements using a function object \p comp.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_difference_by_key to compute the
- * set difference of two sets of integers sorted in descending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A_keys[7] = {9, 6, 5, 4, 3, 1, 0};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {9, 7, 5, 3, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[3];
- * int vals_result[3];
- *
- * thrust::pair<int*,int*> end = thrust::set_difference_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {6, 4, 0}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename InputIterator4, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
-  thrust::pair<OutputIterator1,OutputIterator2>
-    set_difference_by_key(InputIterator1 keys_first1,
-                          InputIterator1 keys_last1,
-                          InputIterator2 keys_first2,
-                          InputIterator2 keys_last2,
-                          InputIterator3 values_first1,
-                          InputIterator4 values_first2,
-                          OutputIterator1 keys_result,
-                          OutputIterator2 values_result,
-                          StrictWeakCompare comp);
-
-
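-/* A compact illustration of how values travel with their keys: only pairs whose
- * key survives the difference are copied, and always from the first range. A
- * minimal host-side sketch (the inputs are chosen here for illustration):
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * #include <thrust/pair.h>
- * ...
- * int a_keys[4] = {1, 4, 6, 8};
- * int a_vals[4] = {10, 40, 60, 80};
- *
- * int b_keys[2] = {4, 8};
- * int b_vals[2] = {-1, -1}; // required by the interface, but never copied for a difference
- *
- * int k_out[2];
- * int v_out[2];
- *
- * thrust::pair<int*,int*> ends =
- *   thrust::set_difference_by_key(thrust::host, a_keys, a_keys + 4, b_keys, b_keys + 2, a_vals, b_vals, k_out, v_out);
- * // k_out = {1, 6}, v_out = {10, 60}
- * \endcode
- */
-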
-/*! \p set_intersection_by_key performs a key-value intersection operation from set theory.
- * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set
- * theory: the keys output range contains a copy of every element that is contained in both
- * [keys_first1, keys_last1) and [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if an element appears \c m times in [keys_first1, keys_last1)
- * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it
- * appears min(m,n) times in the keys output range.
- * \p set_intersection_by_key is stable, meaning both that elements are copied from the first
- * input range rather than the second, and that the relative order of elements in the output range
- * is the same as the first input range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range,
- * the corresponding value element is copied from the range beginning at \p values_first1 to the values
- * output range.
- *
- * This version of \p set_intersection_by_key compares objects using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no
- * \c values_first2 parameter because elements from the second input range are never copied to the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the
- * set intersection of two sets of integers sorted in ascending order with their values using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[6] = {1, 3, 5, 7, 9, 11};
- * int A_vals[6] = {0, 0, 0, 0, 0, 0};
- *
- * int B_keys[7] = {1, 1, 2, 3, 5, 8, 13};
- *
- * int keys_result[7];
- * int vals_result[7];
- *
- * thrust::pair<int*,int*> end = thrust::set_intersection_by_key(thrust::host, A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result);
- *
- * // keys_result is now {1, 3, 5}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_difference_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename OutputIterator1, typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    set_intersection_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                            InputIterator1 keys_first1,
-                            InputIterator1 keys_last1,
-                            InputIterator2 keys_first2,
-                            InputIterator2 keys_last2,
-                            InputIterator3 values_first1,
-                            OutputIterator1 keys_result,
-                            OutputIterator2 values_result);
-
-
-/*! \p set_intersection_by_key performs a key-value intersection operation from set theory.
- * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set
- * theory: the keys output range contains a copy of every element that is contained in both
- * [keys_first1, keys_last1) and [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if an element appears \c m times in [keys_first1, keys_last1)
- * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it
- * appears min(m,n) times in the keys output range.
- * \p set_intersection_by_key is stable, meaning both that elements are copied from the first
- * input range rather than the second, and that the relative order of elements in the output range
- * is the same as the first input range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range,
- * the corresponding value element is copied from the range beginning at \p values_first1 to the values
- * output range.
- *
- * This version of \p set_intersection_by_key compares objects using \c operator<.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no
- * \c values_first2 parameter because elements from the second input range are never copied to the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the
- * set intersection of two sets of integers sorted in ascending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A_keys[6] = {1, 3, 5, 7, 9, 11};
- * int A_vals[6] = {0, 0, 0, 0, 0, 0};
- *
- * int B_keys[7] = {1, 1, 2, 3, 5, 8, 13};
- *
- * int keys_result[7];
- * int vals_result[7];
- *
- * thrust::pair<int*,int*> end = thrust::set_intersection_by_key(A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result);
- *
- * // keys_result is now {1, 3, 5}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_difference_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1, typename InputIterator2, typename InputIterator3, typename OutputIterator1, typename OutputIterator2>
-  thrust::pair<OutputIterator1,OutputIterator2>
-    set_intersection_by_key(InputIterator1 keys_first1,
-                            InputIterator1 keys_last1,
-                            InputIterator2 keys_first2,
-                            InputIterator2 keys_last2,
-                            InputIterator3 values_first1,
-                            OutputIterator1 keys_result,
-                            OutputIterator2 values_result);
-
-
-/*! \p set_intersection_by_key performs a key-value intersection operation from set theory.
- * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set
- * theory: the keys output range contains a copy of every element that is contained in both
- * [keys_first1, keys_last1) and [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if an element appears \c m times in [keys_first1, keys_last1)
- * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it
- * appears min(m,n) times in the keys output range.
- * \p set_intersection_by_key is stable, meaning both that elements are copied from the first
- * input range rather than the second, and that the relative order of elements in the output range
- * is the same as the first input range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range,
- * the corresponding value element is copied from the range beginning at \p values_first1 to the values
- * output range.
- *
- * This version of \p set_intersection_by_key compares objects using a function object \p comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no
- * \c values_first2 parameter because elements from the second input range are never copied to the output range.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the
- * set intersection of two sets of integers sorted in descending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[6] = {11, 9, 7, 5, 3, 1};
- * int A_vals[6] = { 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[7] = {13, 8, 5, 3, 2, 1, 1};
- *
- * int keys_result[7];
- * int vals_result[7];
- *
- * thrust::pair<int*,int*> end = thrust::set_intersection_by_key(thrust::host, A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result, thrust::greater<int>());
- *
- * // keys_result is now {5, 3, 1}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_difference_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename InputIterator3, typename OutputIterator1, typename OutputIterator2, typename StrictWeakCompare>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    set_intersection_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                            InputIterator1 keys_first1,
-                            InputIterator1 keys_last1,
-                            InputIterator2 keys_first2,
-                            InputIterator2 keys_last2,
-                            InputIterator3 values_first1,
-                            OutputIterator1 keys_result,
-                            OutputIterator2 values_result,
-                            StrictWeakCompare comp);
-
-
-/*! \p set_intersection_by_key performs a key-value intersection operation from set theory.
- * \p set_intersection_by_key constructs a sorted range that is the intersection of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_intersection_by_key performs the "intersection" operation from set
- * theory: the keys output range contains a copy of every element that is contained in both
- * [keys_first1, keys_last1) and [keys_first2, keys_last2).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if an element appears \c m times in [keys_first1, keys_last1)
- * and \c n times in [keys_first2, keys_last2) (where \c m may be zero), then it
- * appears min(m,n) times in the keys output range.
- * \p set_intersection_by_key is stable, meaning both that elements are copied from the first
- * input range rather than the second, and that the relative order of elements in the output range
- * is the same as the first input range.
- *
- * Each time a key element is copied from [keys_first1, keys_last1) to the keys output range,
- * the corresponding value element is copied from the range beginning at \p values_first1 to the values
- * output range.
- *
- * This version of \p set_intersection_by_key compares objects using a function object \p comp.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \note Unlike the other key-value set operations, \p set_intersection_by_key is unique in that it has no
- * \c values_first2 parameter because elements from the second input range are never copied to the output range.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_intersection_by_key to compute the
- * set intersection of two sets of integers sorted in descending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A_keys[6] = {11, 9, 7, 5, 3, 1};
- * int A_vals[6] = { 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[7] = {13, 8, 5, 3, 2, 1, 1};
- *
- * int keys_result[7];
- * int vals_result[7];
- *
- * thrust::pair<int*,int*> end = thrust::set_intersection_by_key(A_keys, A_keys + 6, B_keys, B_keys + 7, A_vals, keys_result, vals_result, thrust::greater<int>());
- *
- * // keys_result is now {5, 3, 1}
- * // vals_result is now {0, 0, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_difference_by_key
- * \see \p set_symmetric_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename OutputIterator1,
-         typename OutputIterator2,
-         typename StrictWeakCompare>
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_intersection_by_key(InputIterator1 keys_first1,
-                            InputIterator1 keys_last1,
-                            InputIterator2 keys_first2,
-                            InputIterator2 keys_last2,
-                            InputIterator3 values_first1,
-                            OutputIterator1 keys_result,
-                            OutputIterator2 values_result,
-                            StrictWeakCompare comp);
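-
-
-/* Editorial note, not part of the original header: a minimal sketch of the
- * min(m,n) duplicate rule described above, using hypothetical inputs. The key
- * 1 appears twice in A and three times in B, so it appears min(2,3) == 2
- * times in the keys output, and every output value is taken from A's value
- * range (values from B are never copied, hence no values_first2 parameter):
- *
- *   int A_keys[4] = {1, 1, 3, 5};   int A_vals[4] = {10, 11, 12, 13};
- *   int B_keys[5] = {1, 1, 1, 3, 4};
- *   int keys_result[3];
- *   int vals_result[3];
- *   thrust::set_intersection_by_key(A_keys, A_keys + 4, B_keys, B_keys + 5,
- *                                   A_vals, keys_result, vals_result);
- *   // keys_result is now {1, 1, 3}
- *   // vals_result is now {10, 11, 12}
- */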
-
-
-/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory.
- * \p set_symmetric_difference_by_key constructs a sorted range that is the symmetric difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last2), and a copy of
- * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are
- * equivalent to each other and [keys_first2, keys_last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and
- * the last n - m of these elements from [keys_first2, keys_last2) if m < n.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_symmetric_difference_by_key compares key elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the
- * symmetric difference of two sets of integers sorted in ascending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {0, 1, 2, 2, 4, 6, 7};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 1, 2, 5, 8};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[8];
- * int vals_result[8];
- *
- * thrust::pair<int*,int*> end = thrust::set_symmetric_difference_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 1, 2, 4, 5, 6, 7, 8}
- * // vals_result is now {0, 1, 0, 0, 1, 0, 0, 1}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_symmetric_difference_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                                    InputIterator1 keys_first1,
-                                    InputIterator1 keys_last1,
-                                    InputIterator2 keys_first2,
-                                    InputIterator2 keys_last2,
-                                    InputIterator3 values_first1,
-                                    InputIterator4 values_first2,
-                                    OutputIterator1 keys_result,
-                                    OutputIterator2 values_result);
-
-
-/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory.
- * \p set_symmetric_difference_by_key constructs a sorted range that is the symmetric difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last2), and a copy of
- * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are
- * equivalent to each other and [keys_first2, keys_last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and
- * the last n - m of these elements from [keys_first2, keys_last2) if m < n.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_symmetric_difference_by_key compares key elements using \c operator<.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the
- * symmetric difference of two sets of integers sorted in ascending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A_keys[7] = {0, 1, 2, 2, 4, 6, 7};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 1, 2, 5, 8};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[8];
- * int vals_result[8];
- *
- * thrust::pair<int*,int*> end = thrust::set_symmetric_difference_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 1, 2, 4, 5, 6, 7, 8}
- * // vals_result is now {0, 1, 0, 0, 1, 0, 0, 1}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2>
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_symmetric_difference_by_key(InputIterator1 keys_first1,
-                                    InputIterator1 keys_last1,
-                                    InputIterator2 keys_first2,
-                                    InputIterator2 keys_last2,
-                                    InputIterator3 values_first1,
-                                    InputIterator4 values_first2,
-                                    OutputIterator1 keys_result,
-                                    OutputIterator2 values_result);
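-
-
-/* Editorial note, not part of the original header: a minimal sketch of the
- * |m - n| duplicate rule described above, using hypothetical inputs. The key
- * 1 appears twice in A and once in B, so |2 - 1| == 1 copy survives, taken
- * (together with its value) from the end of A's run of 1s:
- *
- *   int A_keys[3] = {1, 1, 2};   int A_vals[3] = {10, 11, 12};
- *   int B_keys[2] = {1, 3};      int B_vals[2] = {20, 21};
- *   int keys_result[3];
- *   int vals_result[3];
- *   thrust::set_symmetric_difference_by_key(A_keys, A_keys + 3, B_keys, B_keys + 2,
- *                                           A_vals, B_vals, keys_result, vals_result);
- *   // keys_result is now {1, 2, 3}
- *   // vals_result is now {11, 12, 21}
- */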
-
-
-/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory.
- * \p set_symmetric_difference_by_key constructs a sorted range that is the symmetric difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last2), and a copy of
- * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are
- * equivalent to each other and [keys_first2, keys_last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and
- * the last n - m of these elements from [keys_first2, keys_last2) if m < n.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_symmetric_difference_by_key compares key elements using a function object \c comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the
- * symmetric difference of two sets of integers sorted in descending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {7, 6, 4, 2, 2, 1, 0};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {8, 5, 2, 1, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[8];
- * int vals_result[8];
- *
- * thrust::pair<int*,int*> end = thrust::set_symmetric_difference_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {8, 7, 6, 5, 4, 2, 1, 0}
- * // vals_result is now {1, 0, 0, 1, 0, 0, 1, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2,
-         typename StrictWeakCompare>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_symmetric_difference_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                                    InputIterator1 keys_first1,
-                                    InputIterator1 keys_last1,
-                                    InputIterator2 keys_first2,
-                                    InputIterator2 keys_last2,
-                                    InputIterator3 values_first1,
-                                    InputIterator4 values_first2,
-                                    OutputIterator1 keys_result,
-                                    OutputIterator2 values_result,
-                                    StrictWeakCompare comp);
-
-
-/*! \p set_symmetric_difference_by_key performs a key-value symmetric difference operation from set theory.
- * \p set_symmetric_difference_by_key constructs a sorted range that is the symmetric difference of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_symmetric_difference_by_key performs a set theoretic calculation:
- * it constructs the union of the two sets A - B and B - A, where A and B are the two
- * input ranges. That is, the output range contains a copy of every element that is
- * contained in [keys_first1, keys_last1) but not [keys_first2, keys_last2), and a copy of
- * every element that is contained in [keys_first2, keys_last2) but not [keys_first1, keys_last1).
- * The general case is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements that are
- * equivalent to each other and [keys_first2, keys_last2) contains \c n elements that are
- * equivalent to them, then |m - n| of those elements shall be copied to the output
- * range: the last m - n elements from [keys_first1, keys_last1) if m > n, and
- * the last n - m of these elements from [keys_first2, keys_last2) if m < n.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_symmetric_difference_by_key compares key elements using a function object \c comp.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_symmetric_difference_by_key to compute the
- * symmetric difference of two sets of integers sorted in descending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A_keys[7] = {7, 6, 4, 2, 2, 1, 0};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {8, 5, 2, 1, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[8];
- * int vals_result[8];
- *
- * thrust::pair<int*,int*> end = thrust::set_symmetric_difference_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {8, 7, 6, 5, 4, 2, 1, 0}
- * // vals_result is now {1, 0, 0, 1, 0, 0, 1, 0}
- * \endcode
- *
- * \see \p set_union_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2,
-         typename StrictWeakCompare>
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_symmetric_difference_by_key(InputIterator1 keys_first1,
-                                    InputIterator1 keys_last1,
-                                    InputIterator2 keys_first2,
-                                    InputIterator2 keys_last2,
-                                    InputIterator3 values_first1,
-                                    InputIterator4 values_first2,
-                                    OutputIterator1 keys_result,
-                                    OutputIterator2 values_result,
-                                    StrictWeakCompare comp);
-
-
-/*! \p set_union_by_key performs a key-value union operation from set theory.
- * \p set_union_by_key constructs a sorted range that is the union of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_union_by_key performs the "union" operation from set theory:
- * the output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1), [keys_first2, keys_last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_union_by_key compares key elements using \c operator<.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_union_by_key to compute the
- * union of two sets of integers sorted in ascending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {0, 2, 4, 6, 8, 10, 12};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 3, 5, 7, 9};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[12];
- * int vals_result[12];
- *
- * thrust::pair<int*,int*> end = thrust::set_union_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12}
- * // vals_result is now {0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0}
- * \endcode
- *
- * \see \p set_symmetric_difference_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_union_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                     InputIterator1 keys_first1,
-                     InputIterator1 keys_last1,
-                     InputIterator2 keys_first2,
-                     InputIterator2 keys_last2,
-                     InputIterator3 values_first1,
-                     InputIterator4 values_first2,
-                     OutputIterator1 keys_result,
-                     OutputIterator2 values_result);
-
-
-/*! \p set_union_by_key performs a key-value union operation from set theory.
- * \p set_union_by_key constructs a sorted range that is the union of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_union_by_key performs the "union" operation from set theory:
- * the output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1), [keys_first2, keys_last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_union_by_key compares key elements using \c operator<.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to operator<.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_union_by_key to compute the
- * union of two sets of integers sorted in ascending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * ...
- * int A_keys[7] = {0, 2, 4, 6, 8, 10, 12};
- * int A_vals[7] = {0, 0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {1, 3, 5, 7, 9};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[12];
- * int vals_result[12];
- *
- * thrust::pair<int*,int*> end = thrust::set_union_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result);
- * // keys_result is now {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12}
- * // vals_result is now {0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0}
- * \endcode
- *
- * \see \p set_symmetric_difference_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2>
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_union_by_key(InputIterator1 keys_first1,
-                     InputIterator1 keys_last1,
-                     InputIterator2 keys_first2,
-                     InputIterator2 keys_last2,
-                     InputIterator3 values_first1,
-                     InputIterator4 values_first2,
-                     OutputIterator1 keys_result,
-                     OutputIterator2 values_result);
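-
-
-/* Editorial note, not part of the original header: a minimal sketch of the
- * duplicate rule described above, using hypothetical inputs. The key 1
- * appears once in A (m = 1) and three times in B (n = 3), so all m copies
- * come from A first, followed by max(n - m, 0) == 2 copies from B (here
- * assumed, as with std::set_union, to be the final elements of B's run):
- *
- *   int A_keys[2] = {1, 2};      int A_vals[2] = {10, 11};
- *   int B_keys[3] = {1, 1, 1};   int B_vals[3] = {20, 21, 22};
- *   int keys_result[4];
- *   int vals_result[4];
- *   thrust::set_union_by_key(A_keys, A_keys + 2, B_keys, B_keys + 3,
- *                            A_vals, B_vals, keys_result, vals_result);
- *   // keys_result is now {1, 1, 1, 2}
- *   // vals_result is now {10, 21, 22, 11}
- */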
-
-
-/*! \p set_union_by_key performs a key-value union operation from set theory.
- * \p set_union_by_key constructs a sorted range that is the union of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_union_by_key performs the "union" operation from set theory:
- * the output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1), [keys_first2, keys_last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_union_by_key compares key elements using a function object \c comp.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_union_by_key to compute the
- * union of two sets of integers sorted in descending order with their values using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * int A_keys[7] = {12, 10, 8, 6, 4, 2, 0};
- * int A_vals[7] = { 0,  0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {9, 7, 5, 3, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[12];
- * int vals_result[12];
- *
- * thrust::pair<int*,int*> end = thrust::set_union_by_key(thrust::host, A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
- * // vals_result is now { 0,  0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0}
- * \endcode
- *
- * \see \p set_symmetric_difference_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2,
-         typename StrictWeakCompare>
-__host__ __device__
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_union_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                     InputIterator1 keys_first1,
-                     InputIterator1 keys_last1,
-                     InputIterator2 keys_first2,
-                     InputIterator2 keys_last2,
-                     InputIterator3 values_first1,
-                     InputIterator4 values_first2,
-                     OutputIterator1 keys_result,
-                     OutputIterator2 values_result,
-                     StrictWeakCompare comp);
-
-
-/*! \p set_union_by_key performs a key-value union operation from set theory.
- * \p set_union_by_key constructs a sorted range that is the union of the sorted
- * ranges [keys_first1, keys_last1) and [keys_first2, keys_last2). Associated
- * with each element from the input and output key ranges is a value element. The associated input
- * value ranges need not be sorted.
- *
- * In the simplest case, \p set_union_by_key performs the "union" operation from set theory:
- * the output range contains a copy of every element that is contained in
- * [keys_first1, keys_last1), [keys_first2, keys_last2), or both. The general case
- * is more complicated, because the input ranges may contain duplicate elements.
- * The generalization is that if [keys_first1, keys_last1) contains \c m elements
- * that are equivalent to each other and if [keys_first2, keys_last2) contains \c n
- * elements that are equivalent to them, then all \c m elements from the first
- * range shall be copied to the output range, in order, and then max(n - m, 0)
- * elements from the second range shall be copied to the output, in order.
- *
- * Each time a key element from [keys_first1, keys_last1) or
- * [keys_first2, keys_last2) is copied to the keys output range, the
- * corresponding value element is copied from the corresponding values input range (beginning at
- * \p values_first1 or \p values_first2) to the values output range.
- *
- * This version of \p set_union_by_key compares key elements using a function object \c comp.
- *
- * \param keys_first1 The beginning of the first input range of keys.
- * \param keys_last1 The end of the first input range of keys.
- * \param keys_first2 The beginning of the second input range of keys.
- * \param keys_last2 The end of the second input range of keys.
- * \param values_first1 The beginning of the first input range of values.
- * \param values_first2 The beginning of the second input range of values.
- * \param keys_result The beginning of the output range of keys.
- * \param values_result The beginning of the output range of values.
- * \param comp Comparison operator.
- * \return A \p pair \c p such that p.first is the end of the output range of keys,
- * and such that p.second is the end of the output range of values.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * \p InputIterator1 and \p InputIterator2 have the same \c value_type,
- * \p InputIterator1's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator1's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator1's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator2 is a model of Input Iterator,
- * \p InputIterator2 and \p InputIterator1 have the same \c value_type,
- * \p InputIterator2's \c value_type is a model of LessThan Comparable,
- * the ordering on \p InputIterator2's \c value_type is a strict weak ordering, as defined in the LessThan Comparable requirements,
- * and \p InputIterator2's \c value_type is convertible to a type in \p OutputIterator1's set of \c value_types.
- * \tparam InputIterator3 is a model of Input Iterator,
- * and \p InputIterator3's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam InputIterator4 is a model of Input Iterator,
- * and \p InputIterator4's \c value_type is convertible to a type in \p OutputIterator2's set of \c value_types.
- * \tparam OutputIterator1 is a model of Output Iterator.
- * \tparam OutputIterator2 is a model of Output Iterator.
- * \tparam StrictWeakCompare is a model of Strict Weak Ordering.
- *
- * \pre The ranges [keys_first1, keys_last1) and [keys_first2, keys_last2) shall be sorted with respect to \p comp.
- * \pre The resulting ranges shall not overlap with any input range.
- *
- * The following code snippet demonstrates how to use \p set_union_by_key to compute the
- * union of two sets of integers sorted in descending order with their values.
- *
- * \code
- * #include <thrust/set_operations.h>
- * #include <thrust/functional.h>
- * ...
- * int A_keys[7] = {12, 10, 8, 6, 4, 2, 0};
- * int A_vals[7] = { 0,  0, 0, 0, 0, 0, 0};
- *
- * int B_keys[5] = {9, 7, 5, 3, 1};
- * int B_vals[5] = {1, 1, 1, 1, 1};
- *
- * int keys_result[12];
- * int vals_result[12];
- *
- * thrust::pair<int*,int*> end = thrust::set_union_by_key(A_keys, A_keys + 7, B_keys, B_keys + 5, A_vals, B_vals, keys_result, vals_result, thrust::greater<int>());
- * // keys_result is now {12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0}
- * // vals_result is now { 0,  0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0}
- * \endcode
- *
- * \see \p set_symmetric_difference_by_key
- * \see \p set_intersection_by_key
- * \see \p set_difference_by_key
- * \see \p sort_by_key
- * \see \p is_sorted
- */
-template<typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename InputIterator4,
-         typename OutputIterator1,
-         typename OutputIterator2,
-         typename StrictWeakCompare>
-  thrust::pair<OutputIterator1, OutputIterator2>
-    set_union_by_key(InputIterator1 keys_first1,
-                     InputIterator1 keys_last1,
-                     InputIterator2 keys_first2,
-                     InputIterator2 keys_last2,
-                     InputIterator3 values_first1,
-                     InputIterator4 values_first2,
-                     OutputIterator1 keys_result,
-                     OutputIterator2 values_result,
-                     StrictWeakCompare comp);
-
-
-/*! \} // end set_operations
- */
-
-
-} // end thrust
-
-#include <thrust/detail/set_operations.inl>
-
diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/clip_build.py b/spaces/CVPR/regionclip-demo/detectron2/data/clip_build.py
deleted file mode 100644
index bec75db871cd8d66118748aa90fe10d014bdaf89..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/data/clip_build.py
+++ /dev/null
@@ -1,158 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-import bisect
-import copy
-import logging
-import os
-import torch
-import torch.utils.data
-import torch.distributed
-from torch.utils.data.dataset import ConcatDataset
-
-from .catalog import DatasetCatalog
-from .clip_datasets.clip_img_txt_pair_tsv import CLIPImgTxtPairTSVDataset
-
-from .transforms.build import build_clip_transforms
-
-def config_tsv_dataset_args(cfg, dataset_file, factory_name=None, is_train=True):
-    ############### code removed as tsv_dataset_name = factory_name = "CLIPImgTxtPairTSVDataset" ##############
-    if factory_name is not None:
-        tsv_dataset_name = factory_name
-
-    if tsv_dataset_name in ["CLIPImgTxtPairTSVDataset"]:
-        # no need for extra arguments
-        args = {}
-        args['args'] = cfg
-        args['seq_len'] = cfg.DATASETS.MAX_SEQ_LENGTH # cfg.max_seq_length
-
-    return args, tsv_dataset_name
-
-
-def build_dataset(cfg, transforms, dataset_catalog, is_train=True, is_aux=False):
-    """
-    Arguments:
-        cfg: config file.
-        transforms (callable): transforms to apply to each (image, target) sample
-        dataset_catalog (DatasetCatalog): contains the information on how to construct a dataset.
-        is_train (bool): whether to setup the dataset for training or testing
-    """
-
-    dataset_list = (cfg.DATASETS.TRAIN if not is_aux else cfg.DATASETS.AUX) if is_train else cfg.DATASETS.TEST
-    factory_list = (cfg.DATASETS.FACTORY_TRAIN if not is_aux else cfg.DATASETS.FACTORY_AUX) if is_train else cfg.DATASETS.FACTORY_TEST
-    path_list = (cfg.DATASETS.PATH_TRAIN if not is_aux else cfg.DATASETS.PATH_AUX) if is_train else cfg.DATASETS.PATH_TEST
-
-    if not isinstance(dataset_list, (list, tuple)):
-        raise RuntimeError(
-            "dataset_list should be a list of strings, got {}".format(dataset_list))
-    if not isinstance(factory_list, (list, tuple)):
-        raise RuntimeError(
-            "factory_list should be a list of strings, got {}".format(factory_list))
-    datasets = []
-    target_offset = 0
-    for i, dataset_name in enumerate(dataset_list):
-        factory_name = factory_list[i] if i < len(factory_list) else None
-
-        if factory_name == "CLIPImgTxtPairTSVDataset":
-            dataset_names_merged = dataset_name.split('+')
-            path_lists_merged = path_list[i].split('+')
-
-            assert len(dataset_names_merged) == len(path_lists_merged), "number of datasets must match that of dataset paths"
-
-            image_tsv_list = []
-            text_tsv_list = []
-            dataset_name_list = []
-            map_files = []
-            max_num_tsv = 20 # maximum tsv files to load within a given folder
-
-            for dname, dpath in zip(dataset_names_merged, path_lists_merged):
-                args, tsv_dataset_name = config_tsv_dataset_args(
-                    cfg, dataset_name, factory_name, is_train
-                )
-                factory = CLIPImgTxtPairTSVDataset if tsv_dataset_name in ["CLIPImgTxtPairTSVDataset"] else None
-                prev_len = len(image_tsv_list)
-
-                isFile = os.path.isfile(dpath)
-                if isFile:
-                    dpath_listed_files = [os.path.basename(dpath)]
-                    dpath = os.path.dirname(dpath)
-                else:
-                    dpath_listed_files = sorted(os.listdir(dpath))
-
-                for filename in dpath_listed_files:
-                    if ("images" in filename or "image" in filename or "img" in filename) and filename.endswith(".tsv"):
-                        image_tsv_list.append(os.path.join(dpath, filename))
-                        if "images" in filename: # "images" - "text"
-                            text_tsv_list.append(os.path.join(dpath, filename.replace("images", "text")))
-                        elif "image" in filename: # "image"-"text"
-                            text_tsv_list.append(os.path.join(dpath, filename.replace("image", "text")))
-                        elif "img" in filename: # "img"-"caption"
-                            text_tsv_list.append(os.path.join(dpath, filename.replace("img", "caption")))
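-
-                    # Editorial note: the pairing above assumes every image tsv has a
-                    # sibling text tsv whose name differs only by one keyword, e.g.
-                    # (hypothetical names, not taken from the original repository):
-                    #   coco_images_00.tsv -> coco_text_00.tsv
-                    #   cc3m_img_05.tsv    -> cc3m_caption_05.tsv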
-                    if len(image_tsv_list) - prev_len == max_num_tsv:
-                        break
-                dataset_name_list += [dname] * (len(image_tsv_list) - prev_len)
-
-                if dname == "imagenet22k":
-                    map_files += [os.path.join(dpath, 'darknet_data_imagenet.labels.list')] * (len(image_tsv_list) - prev_len)
-                else:
-                    map_files += [None] * (len(image_tsv_list) - prev_len)
-
-            assert len(image_tsv_list) == len(text_tsv_list), \
-                "the number of image tsv files must be equal to that of text tsv files, otherwise check your data!"
-
-            args["image_tsv_file"] = image_tsv_list
-            args["text_tsv_file"] = text_tsv_list
-            args["dataset_name"] = dataset_name_list
-            args["map_file"] = map_files
-            args["filtered_datasets"] = cfg.DATASETS.FILTERED_CLASSIFICATION_DATASETS
-            assert len(image_tsv_list) == len(text_tsv_list) == len(dataset_name_list) == len(map_files)
-
-            print("number of image tsv files: ", len(image_tsv_list))
-            print("number of text tsv files: ", len(text_tsv_list))
-
-            args["is_train"] = is_train
-            args["transforms"] = transforms
-            args["target_offset"] = target_offset
-            if "bpe" in cfg.INPUT.TEXT_TOKENIZER:
-                from detectron2.data.datasets.clip_prompt_utils import SimpleTokenizer as _Tokenizer
-                tokenizer = _Tokenizer()
-                args["tokenizer_type"] = "bpe"
-                args["tokenizer"] = tokenizer
-            # make dataset from factory
-            dataset = factory(**args)
-            datasets.append(dataset)
-
-    precomputed_tokens = {}
-    dataset_classes = {}
-    for dataset in datasets:
-        if hasattr(dataset, "input_ids_all_classes"):
-            precomputed_tokens["imagenet"] = \
-                [dataset.input_ids_all_classes, dataset.input_mask_all_classes, dataset.segment_ids_all_classes]
-        if hasattr(dataset, "classnames"):
-            if isinstance(dataset.classnames, dict):
-                dataset_classes.update(dataset.classnames)
-            else:
-                dataset_classes[dataset.dataset_name] = dataset.classnames
-
-    # for testing, return a list of datasets
-    if not is_train:
-        return datasets, precomputed_tokens, dataset_classes
-
-    if len(datasets) == 0:
-        return None, None, None
-
-    # for training, concatenate all datasets into a single one
-    dataset = datasets[0]
-    if len(datasets) > 1:
-        dataset = ConcatDataset(datasets)
-    return [dataset], precomputed_tokens, dataset_classes
-
-
-def make_clip_dataset(cfg, is_train=True, is_aux=False, transforms=None):
-    if transforms is None:
-        transforms = build_clip_transforms(cfg, is_train)
-    print("data transforms: ")
-    print(transforms)
-    datasets, precomputed_tokens, dataset_classes = build_dataset(cfg, transforms, DatasetCatalog, is_train, is_aux)
-
-    if not datasets:
-        return None, None, None
-    return datasets, precomputed_tokens, dataset_classes
\ No newline at end of file
diff --git a/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py b/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py
deleted file mode 100644
index 8f369a2afedb6c6e69fd52ff9a9a6b1cdf965937..0000000000000000000000000000000000000000
--- a/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from .mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ import (
-    dataloader,
-    lr_multiplier,
-    model,
-    optimizer,
-    train,
-)
-
-train.max_iter *= 4  # 100ep -> 400ep
-
-lr_multiplier.scheduler.milestones = [
-    milestone * 4 for milestone in lr_multiplier.scheduler.milestones
-]
-lr_multiplier.scheduler.num_updates = train.max_iter
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/dont_go_near/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/dont_go_near/__init__.py
deleted file mode 100644
index 8675f01518412a8c0dd98887ed15586000308f03..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/dont_go_near/__init__.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from pathlib import Path
-from typing import List
-
-from pil_utils import BuildImage
-
-from meme_generator import add_meme
-from meme_generator.utils import make_jpg_or_gif
-
-img_dir = Path(__file__).parent / "images"
-
-
-def dont_go_near(images: List[BuildImage], texts, args):
-    frame = BuildImage.open(img_dir / "0.png")
-
-    def make(img: BuildImage) -> BuildImage:
-        img = img.convert("RGBA").resize((170, 170), keep_ratio=True)
-        return frame.copy().paste(img, (23, 231), alpha=True)
-
-    return make_jpg_or_gif(images[0], make)
-
-
-add_meme("dont_go_near", dont_go_near, min_images=1, max_images=1, keywords=["不要靠近"])
diff --git a/spaces/CoWork/dreambooth-training-public/app.py b/spaces/CoWork/dreambooth-training-public/app.py
deleted file mode 100644
index f7d90f7250ccac1b7d250062b6d3348124acdf4e..0000000000000000000000000000000000000000
--- a/spaces/CoWork/dreambooth-training-public/app.py
+++ /dev/null
@@ -1,687 +0,0 @@
-from subprocess import getoutput
-import os
-
-gpu_info = getoutput('nvidia-smi')
-if("A10G" in gpu_info):
-    which_gpu = "A10G"
-    os.system(f"pip install --no-deps xformers==0.0.16rc425")
-elif("T4" in gpu_info):
-    which_gpu = "T4"
-    os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl")
-else:
-    which_gpu = "CPU"
-
-import gradio as gr
-from pathlib import Path
-import argparse
-import shutil
-from train_dreambooth import run_training
-from convertosd import convert
-from PIL import Image
-from slugify import slugify
-import requests
-import torch
-import zipfile
-import tarfile
-import urllib.parse
-import gc
-from diffusers import StableDiffusionPipeline
-from huggingface_hub import snapshot_download, update_repo_visibility, HfApi
-
-is_spaces = True if "SPACE_ID" in os.environ else False
-if(is_spaces):
-    is_shared_ui = True if "multimodalart/dreambooth-training" in os.environ['SPACE_ID'] else False
-else:
-    is_shared_ui = False
-is_gpu_associated = torch.cuda.is_available()
-
-os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
-
-if(is_gpu_associated):
-    model_v1 = snapshot_download(repo_id="multimodalart/sd-fine-tunable")
-    model_v2 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1", ignore_patterns=["*.ckpt", "*.safetensors"])
-    model_v2_512 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1-base", ignore_patterns=["*.ckpt", "*.safetensors"])
-    safety_checker = snapshot_download(repo_id="multimodalart/sd-sc")
-    model_to_load = model_v1
-
-def swap_base_model(selected_model):
-    if(is_gpu_associated):
-        global model_to_load
-        if(selected_model == "v1-5"):
-            model_to_load = model_v1
-        elif(selected_model == "v2-1-768"):
-            model_to_load = model_v2
-        else:
-            model_to_load = model_v2_512
-
-
-
-css = '''
-    .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important}
-    .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important}
-    #component-4, #component-3, #component-10{min-height: 0}
-    .duplicate-button img{margin: 0}
-'''
-maximum_concepts = 3
-
-def swap_text(option, base):
-    resize_width = 768 if base == "v2-1-768" else 512
-    mandatory_liability = "You must have the right to do so and you are liable for the images you use, example:"
-    if(option == "object"):
-        instance_prompt_example = "cttoy"
-        freeze_for = 30
-        return [f"You are going to train `object`(s), upload 5-10 images of each object you are planning on training on from different angles/perspectives. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, gr.update(visible=False)]
-    elif(option == "person"):
-        instance_prompt_example = "julcto"
-        freeze_for = 70
-        #show_prior_preservation = True if base != "v2-1-768" else False
-        show_prior_preservation=False
-        if(show_prior_preservation):
-            prior_preservation_box_update = gr.update(visible=show_prior_preservation)
-        else:
-            prior_preservation_box_update = gr.update(visible=show_prior_preservation, value=False)
-        return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, prior_preservation_box_update]
-    elif(option == "style"):
-        instance_prompt_example = "trsldamrl"
-        freeze_for = 10
-        return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}", freeze_for, gr.update(visible=False)]
-
-def count_files(*inputs):
-    file_counter = 0
-    concept_counter = 0
-    for i, input in enumerate(inputs):
-        if(i < maximum_concepts):
-            files = inputs[i]
-            if(files):
-                concept_counter+=1
-                file_counter+=len(files)
-    uses_custom = inputs[-1]
-    type_of_thing = inputs[-4]
-    selected_model = inputs[-5]
-    experimental_faces = inputs[-6]
-    if(uses_custom):
-        Training_Steps = int(inputs[-3])
-    else:
-        Training_Steps = file_counter*150
-        if(type_of_thing == "person" and Training_Steps > 2400):
-            Training_Steps = 2400 #Avoid overfitting on person faces
-    if(is_spaces):
-        if(selected_model == "v1-5"):
-            its = 1.1 if which_gpu == "T4" else 1.8
-            if(experimental_faces):
-                its = 1
-        elif(selected_model == "v2-1-512"):
-            its = 0.8 if which_gpu == "T4" else 1.5
-            if(experimental_faces):
-                its = 0.7
-        elif(selected_model == "v2-1-768"):
-            its = 0.48 if which_gpu == "T4" else 0.85
-
-        gpu_price = 0.60 if which_gpu == "T4" else 1.10
-        summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps. The training should take around {round(Training_Steps/its, 2)} seconds, or {round((Training_Steps/its)/60, 2)} minutes.
-        The setup, compression and uploading the model can take up to 20 minutes.
      As the {which_gpu}-Small GPU costs US${gpu_price} for 1h, the estimated cost for this training is below US${round((((Training_Steps/its)/3600)+0.3+0.1)*gpu_price, 2)}.
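That estimate comes straight from the arithmetic in `count_files` above: `Training_Steps/its` is the projected training time in seconds, where `its` is the hard-coded iterations-per-second figure for each base-model/GPU pair, `/3600` converts it to hours, and the `0.3 + 0.1` term looks like a fixed buffer of roughly 24 minutes matching the "setup, compression and uploading" caveat. A minimal standalone restatement of that formula, with an illustrative helper name that is not in the original code:

```python
def estimate_training_cost(training_steps: int, its: float, gpu_price_per_hour: float) -> float:
    """Rough upper bound, in US$, on a Dreambooth run (hypothetical helper).

    `its` is the assumed throughput in iterations/second (the code above
    uses e.g. 1.1 on a T4 and 1.8 on an A10G for the v1-5 base model);
    0.3 + 0.1 hours are added for setup, compression and upload overhead.
    """
    train_hours = (training_steps / its) / 3600
    return round((train_hours + 0.3 + 0.1) * gpu_price_per_hour, 2)

# Example: 1500 steps at 1.1 it/s on a T4 billed at US$0.60/h
# estimate_training_cost(1500, 1.1, 0.60)  # -> 0.47
```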

- If you check the box below, the GPU attribution will automatically be removed after training is done and the model is uploaded. If not, don't forget to come back here and swap the hardware back to CPU.

      ''' - else: - summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps.

      ''' - - return([gr.update(visible=True), gr.update(visible=True, value=summary_sentence)]) - -def update_steps(*files_list): - file_counter = 0 - for i, files in enumerate(files_list): - if(files): - file_counter+=len(files) - return(gr.update(value=file_counter*200)) - -def visualise_progress_bar(): - return gr.update(visible=True) - -def pad_image(image): - w, h = image.size - if w == h: - return image - elif w > h: - new_image = Image.new(image.mode, (w, w), (0, 0, 0)) - new_image.paste(image, (0, (w - h) // 2)) - return new_image - else: - new_image = Image.new(image.mode, (h, h), (0, 0, 0)) - new_image.paste(image, ((h - w) // 2, 0)) - return new_image - -def validate_model_upload(hf_token, model_name): - if(hf_token != ''): - api = HfApi() - try: - _ = api.whoami(hf_token) - except: - raise gr.Error("You have inserted an invalid Hugging Face token") - try: - if(is_spaces): - update_repo_visibility(repo_id=os.environ['SPACE_ID'], private=True, token=hf_token, repo_type="space") - except: - raise gr.Error("Oops, you created a Hugging Face token with read permissions only. You need one with write permissions") - else: - raise gr.Error("Please insert a Hugging Face Token (make sure to create it with write permissions)") - if(model_name == ""): - raise gr.Error("Please fill in your model's name") - -def swap_hardware(hf_token, hardware="cpu-basic"): - hardware_url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware" - headers = { "authorization" : f"Bearer {hf_token}"} - body = {'flavor': hardware} - requests.post(hardware_url, json = body, headers=headers) - -def swap_sleep_time(hf_token,sleep_time): - sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}/sleeptime" - headers = { "authorization" : f"Bearer {hf_token}"} - body = {'seconds':sleep_time} - requests.post(sleep_time_url,json=body,headers=headers) - -def get_sleep_time(hf_token): - sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}" - headers = { "authorization" : f"Bearer {hf_token}"} - response = requests.get(sleep_time_url,headers=headers) - try: - gcTimeout = response.json()['runtime']['gcTimeout'] - except: - gcTimeout = None - return gcTimeout - -def write_to_community(title, description,hf_token): - from huggingface_hub import HfApi - api = HfApi() - api.create_discussion(repo_id=os.environ['SPACE_ID'], title=title, description=description,repo_type="space", token=hf_token) - -def train(progress=gr.Progress(track_tqdm=True), *inputs): - which_model = inputs[-10] - if(which_model == ""): - raise gr.Error("You forgot to select a base model to use") - - if is_shared_ui: - raise gr.Error("This Space only works in duplicated instances") - if not is_gpu_associated: - raise gr.Error("Please associate a T4 or A10G GPU for this Space") - hf_token = inputs[-5] - model_name = inputs[-7] - if(is_spaces): - sleep_time = get_sleep_time(hf_token) - if sleep_time: - swap_sleep_time(hf_token, -1) - remove_attribution_after = inputs[-6] - else: - remove_attribution_after = False - - if(remove_attribution_after): - validate_model_upload(hf_token, model_name) - - torch.cuda.empty_cache() - if 'pipe' in globals(): - global pipe, pipe_is_set - del pipe - pipe_is_set = False - gc.collect() - - if os.path.exists("output_model"): shutil.rmtree('output_model') - if os.path.exists("instance_images"): shutil.rmtree('instance_images') - if os.path.exists("diffusers_model.tar"): os.remove("diffusers_model.tar") - if os.path.exists("model.ckpt"): os.remove("model.ckpt") - if 
os.path.exists("hastrained.success"): os.remove("hastrained.success") - file_counter = 0 - resolution = 512 if which_model != "v2-1-768" else 768 - for i, input in enumerate(inputs): - if(i < maximum_concepts-1): - if(input): - os.makedirs('instance_images',exist_ok=True) - files = inputs[i+(maximum_concepts*2)] - prompt = inputs[i+maximum_concepts] - if(prompt == "" or prompt == None): - raise gr.Error("You forgot to define your concept prompt") - for j, file_temp in enumerate(files): - file = Image.open(file_temp.name) - image = pad_image(file) - image = image.resize((resolution, resolution)) - extension = file_temp.name.split(".")[1] - image = image.convert('RGB') - image.save(f'instance_images/{prompt}_({j+1}).jpg', format="JPEG", quality = 100) - file_counter += 1 - - os.makedirs('output_model',exist_ok=True) - uses_custom = inputs[-1] - type_of_thing = inputs[-4] - experimental_face_improvement = inputs[-9] - - if(uses_custom): - Training_Steps = int(inputs[-3]) - Train_text_encoder_for = int(inputs[-2]) - else: - if(type_of_thing == "object"): - Train_text_encoder_for=30 - - elif(type_of_thing == "style"): - Train_text_encoder_for=15 - - elif(type_of_thing == "person"): - Train_text_encoder_for=70 - - Training_Steps = file_counter*150 - if(type_of_thing == "person" and Training_Steps > 2600): - Training_Steps = 2600 #Avoid overfitting on people's faces - stptxt = int((Training_Steps*Train_text_encoder_for)/100) - gradient_checkpointing = True if (experimental_face_improvement or which_model != "v1-5") else False - cache_latents = True if which_model != "v1-5" else False - if (type_of_thing == "object" or type_of_thing == "style" or (type_of_thing == "person" and not experimental_face_improvement)): - args_general = argparse.Namespace( - image_captions_filename = True, - train_text_encoder = True if stptxt > 0 else False, - stop_text_encoder_training = stptxt, - save_n_steps = 0, - pretrained_model_name_or_path = model_to_load, - instance_data_dir="instance_images", - class_data_dir=None, - output_dir="output_model", - instance_prompt="", - seed=42, - resolution=resolution, - mixed_precision="fp16", - train_batch_size=1, - gradient_accumulation_steps=1, - use_8bit_adam=True, - learning_rate=2e-6, - lr_scheduler="polynomial", - lr_warmup_steps = 0, - max_train_steps=Training_Steps, - gradient_checkpointing=gradient_checkpointing, - cache_latents=cache_latents, - ) - print("Starting single training...") - lock_file = open("intraining.lock", "w") - lock_file.close() - try: - run_training(args_general) - except Exception as e: - if(is_spaces): - title="There was an error on during your training" - description=f''' - Unfortunately there was an error during training your {model_name} model. - Please check it out below. 
Feel free to report this issue to [Dreambooth Training](https://huggingface.co/spaces/multimodalart/dreambooth-training): - ``` - {str(e)} - ``` - ''' - swap_hardware(hf_token, "cpu-basic") - write_to_community(title,description,hf_token) - - - gc.collect() - torch.cuda.empty_cache() - if(which_model == "v1-5"): - print("Adding Safety Checker to the model...") - shutil.copytree(f"{safety_checker}/feature_extractor", "output_model/feature_extractor", dirs_exist_ok=True) - shutil.copytree(f"{safety_checker}/safety_checker", "output_model/safety_checker", dirs_exist_ok=True) - shutil.copy(f"model_index.json", "output_model/model_index.json") - - if(not remove_attribution_after): - swap_sleep_time(hf_token, sleep_time) - print("Archiving model file...") - with tarfile.open("diffusers_model.tar", "w") as tar: - tar.add("output_model", arcname=os.path.basename("output_model")) - if os.path.exists("intraining.lock"): os.remove("intraining.lock") - trained_file = open("hastrained.success", "w") - trained_file.close() - print("Training completed!") - return [ - gr.update(visible=False), #progress_bar - gr.update(visible=True, value=["diffusers_model.tar"]), #result - gr.update(visible=True), #try_your_model - gr.update(visible=True), #push_to_hub - gr.update(visible=True), #convert_button - gr.update(visible=False), #training_ongoing - gr.update(visible=True) #completed_training - ] - else: - where_to_upload = inputs[-8] - push(model_name, where_to_upload, hf_token, which_model, True) - swap_hardware(hf_token, "cpu-basic") - -pipe_is_set = False -def generate(prompt, steps): - torch.cuda.empty_cache() - from diffusers import StableDiffusionPipeline - global pipe_is_set - if(not pipe_is_set): - global pipe - pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16) - pipe = pipe.to("cuda") - pipe_is_set = True - - image = pipe(prompt, num_inference_steps=steps).images[0] - return(image) - -def push(model_name, where_to_upload, hf_token, which_model, comes_from_automated=False): - validate_model_upload(hf_token, model_name) - if(not os.path.exists("model.ckpt")): - convert("output_model", "model.ckpt") - from huggingface_hub import HfApi, HfFolder, CommitOperationAdd - from huggingface_hub import create_repo - model_name_slug = slugify(model_name) - api = HfApi() - your_username = api.whoami(token=hf_token)["name"] - if(where_to_upload == "My personal profile"): - model_id = f"{your_username}/{model_name_slug}" - else: - model_id = f"sd-dreambooth-library/{model_name_slug}" - headers = {"Authorization" : f"Bearer: {hf_token}", "Content-Type": "application/json"} - response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers) - - print(f"Starting to upload the model {model_id}...") - images_upload = os.listdir("instance_images") - image_string = "" - instance_prompt_list = [] - previous_instance_prompt = '' - for i, image in enumerate(images_upload): - instance_prompt = image.split("_")[0] - if(instance_prompt != previous_instance_prompt): - title_instance_prompt_string = instance_prompt - instance_prompt_list.append(instance_prompt) - else: - title_instance_prompt_string = '' - previous_instance_prompt = instance_prompt - image_string = f'''{title_instance_prompt_string} {"(use that on your prompt)" if title_instance_prompt_string != "" else ""} -{image_string}![{instance_prompt} {i}](https://huggingface.co/{model_id}/resolve/main/concept_images/{urllib.parse.quote(image)})''' - 
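    # The README assembled below doubles as the Hub model card: its YAML
    # front matter (license/tags/widget) drives the model page UI, and
    # image_string embeds each uploaded concept image via the URL it will
    # have once upload_folder pushes instance_images/ to concept_images/.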
readme_text = f'''--- -license: creativeml-openrail-m -tags: -- text-to-image -widget: -- text: {instance_prompt_list[0]} ---- -### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the {which_model} base model - -You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! - -Sample pictures of: -{image_string} -''' - #Save the readme to a file - readme_file = open("model.README.md", "w") - readme_file.write(readme_text) - readme_file.close() - #Save the token identifier to a file - text_file = open("token_identifier.txt", "w") - text_file.write(', '.join(instance_prompt_list)) - text_file.close() - try: - create_repo(model_id,private=True, token=hf_token) - except: - import time - epoch_time = str(int(time.time())) - create_repo(f"{model_id}-{epoch_time}", private=True,token=hf_token) - operations = [ - CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"), - CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="model.README.md"), - CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt") - ] - api.create_commit( - repo_id=model_id, - operations=operations, - commit_message=f"Upload the model {model_name}", - token=hf_token - ) - api.upload_folder( - folder_path="output_model", - repo_id=model_id, - token=hf_token - ) - api.upload_folder( - folder_path="instance_images", - path_in_repo="concept_images", - repo_id=model_id, - token=hf_token - ) - if is_spaces: - if(not comes_from_automated): - extra_message = "Don't forget to remove the GPU attribution after you play with it." - else: - extra_message = "The GPU has been removed automatically as requested, and you can try the model via the model page" - title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!" - description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}" - write_to_community(title, description, hf_token) - #api.create_discussion(repo_id=os.environ['SPACE_ID'], title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!", description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}",repo_type="space", token=hf_token) - print("Model uploaded successfully!") - return [gr.update(visible=True, value=f"Successfully uploaded your model. Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])] - -def convert_to_ckpt(): - if 'pipe' in globals(): - global pipe, pipe_is_set - del pipe - pipe_is_set = False - gc.collect() - convert("output_model", "model.ckpt") - return gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"]) - -def check_status(top_description): - if os.path.exists("hastrained.success"): - if is_spaces: - update_top_tag = gr.update(value=f''' -
      -

      Your model has finished training ✅

      -

Yay, congratulations on training your model. Scroll down to play with it, save it (either by downloading it or pushing it to the Hugging Face Hub). Once you are done and your model is safe, if you don't want to train a new one, go to the settings page and downgrade your Space to a CPU Basic

      -
      - ''') - else: - update_top_tag = gr.update(value=f''' -
      -

      Your model has finished training ✅

      -

Yay, congratulations on training your model. Scroll down to play with it, save it (either by downloading it or pushing it to the Hugging Face Hub).

      -
      - ''') - show_outputs = True - elif os.path.exists("intraining.lock"): - update_top_tag = gr.update(value=''' -
      -

      Don't worry, your model is still training! ⌛

      -

You closed the tab while your model was training, but it's all good! It is still training right now. You can click the "Open logs" button above to check the training status. Once training is done, reload this tab to interact with your model.

      -
      - ''') - show_outputs = False - else: - update_top_tag = gr.update(value=top_description) - show_outputs = False - if os.path.exists("diffusers_model.tar"): - update_files_tag = gr.update(visible=show_outputs, value=["diffusers_model.tar"]) - else: - update_files_tag = gr.update(visible=show_outputs) - return [ - update_top_tag, #top_description - gr.update(visible=show_outputs), #try_your_model - gr.update(visible=show_outputs), #push_to_hub - update_files_tag, #result - gr.update(visible=show_outputs), #convert_button - ] - -def checkbox_swap(checkbox): - return [gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox)] - -with gr.Blocks(css=css) as demo: - with gr.Box(): - if is_shared_ui: - top_description = gr.HTML(f''' -
      -

      Attention - This Space doesn't work in this shared UI

      -

For it to work, you can either run it locally or duplicate the Space and run it on your own profile using a (paid) private T4-small or A10G-small GPU for training. A T4 costs US$0.60/h, so it should cost < US$1 to train most models using default settings with it!  Duplicate Space

      - - -
      - ''') - elif(is_spaces): - if(is_gpu_associated): - top_description = gr.HTML(f''' -
      -

You have successfully associated a {which_gpu} GPU with the Dreambooth Training Space 🎉

      -

You can now train your model! You will be billed by the minute from when you activate the GPU until you turn it off.

      -
      - ''') - else: - top_description = gr.HTML(f''' -
      -

      You have successfully duplicated the Dreambooth Training Space 🎉

      -

There's only one step left before you can train your model: attribute a T4-small or A10G-small GPU to it (via the Settings tab) and run the training below. You will be billed by the minute from when you activate the GPU until you turn it off.

      -
      - ''') - else: - top_description = gr.HTML(f''' -
      -

      You have successfully cloned the Dreambooth Training Space locally 🎉

      -

Run pip install -r requirements-local.txt

      -
      - ''') - gr.Markdown("# Dreambooth Training UI 💭") - gr.Markdown("Customize Stable Diffusion v1 or v2 (ⁿᵉʷ!) by giving it a few examples of a concept. Based on the [🧨 diffusers](https://github.com/huggingface/diffusers) implementation, additional techniques from [TheLastBen](https://github.com/TheLastBen/diffusers) and [ShivamShrirao](https://github.com/ShivamShrirao/diffusers)") - - with gr.Row() as what_are_you_training: - type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True) - with gr.Column(): - base_model_to_use = gr.Dropdown(label="Which base model would you like to use?", choices=["v1-5", "v2-1-512", "v2-1-768"], value="v1-5", interactive=True) - - #Very hacky approach to emulate dynamically created Gradio components - with gr.Row() as upload_your_concept: - with gr.Column(): - thing_description = gr.Markdown("You are going to train an `object`, please upload 5-10 images of the object you are planning on training on from different angles/perspectives. You must have the right to do so and you are liable for the images you use, example") - thing_experimental = gr.Checkbox(label="Improve faces (prior preservation) - can take longer training but can improve faces", visible=False, value=False) - thing_image_example = gr.HTML('''''') - things_naming = gr.Markdown("You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `cttoy` here). Images will be automatically cropped to 512x512.") - - with gr.Column(): - file_collection = [] - concept_collection = [] - buttons_collection = [] - delete_collection = [] - is_visible = [] - - row = [None] * maximum_concepts - for x in range(maximum_concepts): - ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4]) - if(x == 0): - visible = True - is_visible.append(gr.State(value=True)) - else: - visible = False - is_visible.append(gr.State(value=False)) - - file_collection.append(gr.File(file_types=["image"], label=f'''Upload the images for your {ordinal(x+1) if (x>0) else ""} concept''', file_count="multiple", interactive=True, visible=visible)) - with gr.Column(visible=visible) as row[x]: - concept_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} concept prompt - use a unique, made up word to avoid collisions''')) - with gr.Row(): - if(x < maximum_concepts-1): - buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible)) - if(x > 0): - delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept")) - - counter_add = 1 - for button in buttons_collection: - if(counter_add < len(buttons_collection)): - button.click(lambda: - [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None], - None, - [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False) - else: - button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False) - counter_add += 1 - - counter_delete = 1 - for delete_button in delete_collection: - if(counter_delete < len(delete_collection)+1): - delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, 
[file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False) - counter_delete += 1 - - with gr.Accordion("Custom Settings", open=False): - swap_auto_calculated = gr.Checkbox(label="Use custom settings") - gr.Markdown("If not checked, the % of frozen encoder will be tuned automatically to whether you are training an `object`, `person` or `style`. The text-encoder is frozen after 10% of the steps for a style, 30% of the steps for an object and 75% trained for persons. The number of steps varies between 1400 and 2400 depending on how many images uploaded. If you see too many artifacts in your output, it means it may have overfit and you need less steps. If your results aren't really what you wanted, it may be underfitting and you need more steps.") - steps = gr.Number(label="How many steps", value=2400) - perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30) - - with gr.Box(visible=False) as training_summary: - training_summary_text = gr.HTML("", visible=True, label="Training Summary") - is_advanced_visible = True if is_spaces else False - training_summary_checkbox = gr.Checkbox(label="Automatically remove paid GPU attribution and upload model to the Hugging Face Hub after training", value=True, visible=is_advanced_visible) - training_summary_model_name = gr.Textbox(label="Name of your model", visible=True) - training_summary_where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], value="My personal profile", label="Upload to", visible=True) - training_summary_token_message = gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.", visible=True) - training_summary_token = gr.Textbox(label="Hugging Face Write Token", type="password", visible=True) - - train_btn = gr.Button("Start Training") - progress_bar = gr.Textbox(visible=False) - if(is_shared_ui): - training_ongoing = gr.Markdown("## This Space only works in duplicated instances. Please duplicate it and try again!", visible=False) - elif(not is_gpu_associated): - training_ongoing = gr.Markdown("## Oops, you haven't associated your T4 or A10G GPU to this Space. Visit the Settings tab, associate and try again.", visible=False) - else: - training_ongoing = gr.Markdown("## Training is ongoing ⌛... You can close this tab if you like or just wait. If you did not check the `Remove GPU After training`, you can come back here to try your model and upload it after training. Don't forget to remove the GPU attribution after you are done. ", visible=False) - - - #Post-training UI - completed_training = gr.Markdown('''# ✅ Training completed. 
- ### Don't forget to remove the GPU attribution after you are done trying and uploading your model''', visible=False) - - with gr.Row(): - with gr.Box(visible=False) as try_your_model: - gr.Markdown("## Try your model") - prompt = gr.Textbox(label="Type your prompt") - result_image = gr.Image() - inference_steps = gr.Slider(minimum=1, maximum=150, value=50, step=1) - generate_button = gr.Button("Generate Image") - - with gr.Box(visible=False) as push_to_hub: - gr.Markdown("## Push to Hugging Face Hub") - model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style") - where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to") - gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.") - hf_token = gr.Textbox(label="Hugging Face Write Token", type="password") - - push_button = gr.Button("Push to the Hub") - - result = gr.File(label="Download the uploaded models in the diffusers format", visible=True) - success_message_upload = gr.Markdown(visible=False) - convert_button = gr.Button("Convert to CKPT", visible=False) - - #Swap the examples and the % of text encoder trained depending if it is an object, person or style - type_of_thing.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False) - - #Swap the base model - - base_model_to_use.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False) - #base_model_to_use.change(fn=visualise_progress_bar, inputs=[], outputs=progress_bar) - base_model_to_use.change(fn=swap_base_model, inputs=base_model_to_use, outputs=[]) - #Update the summary box below the UI according to how many images are uploaded and whether users are using custom settings or not - for file in file_collection: - #file.change(fn=update_steps,inputs=file_collection, outputs=steps) - file.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - - thing_experimental.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - base_model_to_use.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - steps.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - perc_txt_encoder.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - - #Give more options if the user wants to finish everything after training - if(is_spaces): - training_summary_checkbox.change(fn=checkbox_swap, inputs=training_summary_checkbox, 
outputs=[training_summary_token_message, training_summary_token, training_summary_model_name, training_summary_where_to_upload],queue=False, show_progress=False) - #Add a message for while it is in training - - #train_btn.click(lambda:gr.update(visible=True), inputs=None, outputs=training_ongoing) - - #The main train function - train_btn.click(lambda:gr.update(visible=True), inputs=[], outputs=progress_bar) - train_btn.click(fn=train, inputs=is_visible+concept_collection+file_collection+[base_model_to_use]+[thing_experimental]+[training_summary_where_to_upload]+[training_summary_model_name]+[training_summary_checkbox]+[training_summary_token]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[progress_bar, result, try_your_model, push_to_hub, convert_button, training_ongoing, completed_training], queue=False) - - #Button to generate an image from your trained model after training - generate_button.click(fn=generate, inputs=[prompt, inference_steps], outputs=result_image, queue=False) - #Button to push the model to the Hugging Face Hub - push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token, base_model_to_use], outputs=[success_message_upload, result], queue=False) - #Button to convert the model to ckpt format - convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result, queue=False) - - #Checks if the training is running - demo.load(fn=check_status, inputs=top_description, outputs=[top_description, try_your_model, push_to_hub, result, convert_button], queue=False, show_progress=False) - -demo.queue(default_enabled=False).launch(debug=True) \ No newline at end of file diff --git a/spaces/CofAI/picscore/picscore.py b/spaces/CofAI/picscore/picscore.py deleted file mode 100644 index dcf7bdfa03fa9a21f5f644b13477acabe2e2cfd1..0000000000000000000000000000000000000000 --- a/spaces/CofAI/picscore/picscore.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr - -description = """
      - PICSCORE BETA-1 -
      - """ -gr.Interface.load("CompVis/stable-diffusion-v1-4", description=description).launch() \ No newline at end of file diff --git a/spaces/CofAI/picscore1/README.md b/spaces/CofAI/picscore1/README.md deleted file mode 100644 index 5db32d4bb0c64fa27e161a05697df5348d7c923c..0000000000000000000000000000000000000000 --- a/spaces/CofAI/picscore1/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: PicScore — Stabel Diffusion -emoji: 🖼 -colorFrom: indigo -colorTo: purple -sdk: static -pinned: true -license: other ---- - -#tags: StableDiffusion, SD, PicScore, promt, picgen - ---- - -This is PicScore with Stable Diffusion 2.1 for FREE! \ No newline at end of file diff --git a/spaces/CoreyMorris/MMLU-by-task-Leaderboard/plotting_utils.py b/spaces/CoreyMorris/MMLU-by-task-Leaderboard/plotting_utils.py deleted file mode 100644 index fc5385a4b573559446f5c6ba42d1bd7c477116cb..0000000000000000000000000000000000000000 --- a/spaces/CoreyMorris/MMLU-by-task-Leaderboard/plotting_utils.py +++ /dev/null @@ -1,152 +0,0 @@ -import streamlit as st -import pandas as pd -import plotly.express as px -import matplotlib.pyplot as plt -import numpy as np -import plotly.graph_objects as go - -def plot_top_n(df, target_column, n=10): - top_n = df.nlargest(n, target_column) - - # Initialize the bar plot - fig, ax1 = plt.subplots(figsize=(10, 5)) - - # Set width for each bar and their positions - width = 0.28 - ind = np.arange(len(top_n)) - - # Plot target_column and MMLU_average on the primary y-axis with adjusted positions - ax1.bar(ind - width, top_n[target_column], width=width, color='blue', label=target_column) - ax1.bar(ind, top_n['MMLU_average'], width=width, color='orange', label='MMLU_average') - - # Set the primary y-axis labels and title - ax1.set_title(f'Top {n} performing models on {target_column}') - ax1.set_xlabel('Model') - ax1.set_ylabel('Score') - - # Create a secondary y-axis for Parameters - ax2 = ax1.twinx() - - # Plot Parameters as bars on the secondary y-axis with adjusted position - ax2.bar(ind + width, top_n['Parameters'], width=width, color='red', label='Parameters') - - # Set the secondary y-axis labels - ax2.set_ylabel('Parameters', color='red') - ax2.tick_params(axis='y', labelcolor='red') - - # Set the x-ticks and their labels - ax1.set_xticks(ind) - ax1.set_xticklabels(top_n.index, rotation=45, ha="right") - - # Adjust the legend - fig.tight_layout() - fig.legend(loc='center left', bbox_to_anchor=(1, 0.5)) - - # Show the plot - st.pyplot(fig) - -# Function to create an unfilled radar chart -def create_radar_chart_unfilled(df, model_names, metrics): - fig = go.Figure() - min_value = df.loc[model_names, metrics].min().min() - max_value = df.loc[model_names, metrics].max().max() - for model_name in model_names: - values_model = df.loc[model_name, metrics] - fig.add_trace(go.Scatterpolar( - r=values_model, - theta=metrics, - name=model_name - )) - - fig.update_layout( - polar=dict( - radialaxis=dict( - visible=True, - range=[min_value, max_value] - )), - showlegend=True, - width=800, # Change the width as needed - height=600 # Change the height as needed - ) - return fig - - - -# Function to create a line chart -def create_line_chart(df, model_names, metrics): - line_data = [] - for model_name in model_names: - values_model = df.loc[model_name, metrics] - for metric, value in zip(metrics, values_model): - line_data.append({'Model': model_name, 'Metric': metric, 'Value': value}) - - line_df = pd.DataFrame(line_data) - - fig = px.line(line_df, x='Metric', y='Value', color='Model', 
title='Comparison of Models', line_dash_sequence=['solid']) - fig.update_layout(showlegend=True) - return fig - -def create_plot(df, x_values, y_values, models=None, title=None): - if models is not None: - df = df[df.index.isin(models)] - - # remove rows with NaN values - df = df.dropna(subset=[x_values, y_values]) - - plot_data = pd.DataFrame({ - 'Model': df.index, - x_values: df[x_values], - y_values: df[y_values], - }) - - plot_data['color'] = 'purple' - fig = px.scatter(plot_data, x=x_values, y=y_values, color='color', hover_data=['Model'], trendline="ols") - - # If title is not provided, use x_values vs. y_values as the default title - if title is None: - title = x_values + " vs. " + y_values - - layout_args = dict( - showlegend=False, - xaxis_title=x_values, - yaxis_title=y_values, - xaxis=dict(), - yaxis=dict(), - title=title, - height=500, - width=1000, - ) - fig.update_layout(**layout_args) - - # Add a dashed line at 0.25 for the y_values - x_min = df[x_values].min() - x_max = df[x_values].max() - - y_min = df[y_values].min() - y_max = df[y_values].max() - - if x_values.startswith('MMLU'): - fig.add_shape( - type='line', - x0=0.25, x1=0.25, - y0=y_min, y1=y_max, - line=dict( - color='red', - width=2, - dash='dash' - ) - ) - - if y_values.startswith('MMLU'): - fig.add_shape( - type='line', - x0=x_min, x1=x_max, - y0=0.25, y1=0.25, - line=dict( - color='red', - width=2, - dash='dash' - ) - ) - - return fig \ No newline at end of file diff --git a/spaces/Cyril666/ContourNet-ABI/setup.py b/spaces/Cyril666/ContourNet-ABI/setup.py deleted file mode 100644 index 837c2cd15f4624f630540ef6993dcb9123adb39b..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/setup.py +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
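One note on the leaderboard plotting helpers that end above: `create_plot` pins a dashed red line at 0.25 on any axis whose name starts with `MMLU`, which marks chance accuracy, since MMLU questions are four-way multiple choice. A hypothetical call with made-up data (note that `trendline="ols"` additionally requires `statsmodels` to be installed):

```python
import pandas as pd

# Toy frame; the real app indexes rows by model name.
df = pd.DataFrame(
    {"MMLU_average": [0.62, 0.48, 0.27], "Parameters": [70, 13, 7]},
    index=["model-a", "model-b", "model-c"],
)

# Scatter of parameter count vs. MMLU score with an OLS trendline;
# the 0.25 chance-level baseline is drawn automatically on the MMLU axis.
fig = create_plot(df, x_values="Parameters", y_values="MMLU_average")
fig.show()
```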
-#!/usr/bin/env python - -import glob -import os - -import torch -from setuptools import find_packages -from setuptools import setup -from torch.utils.cpp_extension import CUDA_HOME -from torch.utils.cpp_extension import CppExtension -from torch.utils.cpp_extension import CUDAExtension - -requirements = ["torch", "torchvision"] - - -def get_extensions(): - this_dir = os.path.dirname(os.path.abspath(__file__)) - extensions_dir = os.path.join(this_dir, "maskrcnn_benchmark", "csrc") - - main_file = glob.glob(os.path.join(extensions_dir, "*.cpp")) - source_cpu = glob.glob(os.path.join(extensions_dir, "cpu", "*.cpp")) - source_cuda = glob.glob(os.path.join(extensions_dir, "cuda", "*.cu")) - - sources = main_file + source_cpu - extension = CppExtension - - extra_compile_args = {"cxx": []} - define_macros = [] - - if (torch.cuda.is_available() and CUDA_HOME is not None) or os.getenv("FORCE_CUDA", "0") == "1": - extension = CUDAExtension - sources += source_cuda - define_macros += [("WITH_CUDA", None)] - extra_compile_args["nvcc"] = [ - "-DCUDA_HAS_FP16=1", - "-D__CUDA_NO_HALF_OPERATORS__", - "-D__CUDA_NO_HALF_CONVERSIONS__", - "-D__CUDA_NO_HALF2_OPERATORS__", - ] - - sources = [os.path.join(extensions_dir, s) for s in sources] - - include_dirs = [extensions_dir] - - ext_modules = [ - extension( - "maskrcnn_benchmark._C", - sources, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - ) - ] - - return ext_modules - - -setup( - name="maskrcnn_benchmark", - version="0.1", - author="fmassa", - url="https://github.com/facebookresearch/maskrcnn-benchmark", - description="object detection in pytorch", - packages=find_packages(exclude=("configs", "tests",)), - # install_requires=requirements, - ext_modules=get_extensions(), - cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension}, -) diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/common/gradcam.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/common/gradcam.py deleted file mode 100644 index d53a5254d4b319eaf2cbfbd081b0ca8e38c5c7a0..0000000000000000000000000000000000000000 --- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/common/gradcam.py +++ /dev/null @@ -1,24 +0,0 @@ -import numpy as np -from matplotlib import pyplot as plt -from scipy.ndimage import filters -from skimage import transform as skimage_transform - - -def getAttMap(img, attMap, blur=True, overlap=True): - attMap -= attMap.min() - if attMap.max() > 0: - attMap /= attMap.max() - attMap = skimage_transform.resize(attMap, (img.shape[:2]), order=3, mode="constant") - if blur: - attMap = filters.gaussian_filter(attMap, 0.02 * max(img.shape[:2])) - attMap -= attMap.min() - attMap /= attMap.max() - cmap = plt.get_cmap("jet") - attMapV = cmap(attMap) - attMapV = np.delete(attMapV, 3, 2) - if overlap: - attMap = ( - 1 * (1 - attMap**0.7).reshape(attMap.shape + (1,)) * img - + (attMap**0.7).reshape(attMap.shape + (1,)) * attMapV - ) - return attMap diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/openapi/models.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/openapi/models.py deleted file mode 100644 index 2268dd229091d10dd0535bd21515b40409b8ce1b..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/openapi/models.py +++ /dev/null @@ -1,611 +0,0 @@ -from enum import Enum -from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Type, Union - -from fastapi._compat import ( - PYDANTIC_V2, - CoreSchema, - 
GetJsonSchemaHandler, - JsonSchemaValue, - _model_rebuild, - general_plain_validator_function, -) -from fastapi.logger import logger -from pydantic import AnyUrl, BaseModel, Field -from typing_extensions import Annotated, Literal -from typing_extensions import deprecated as typing_deprecated - -try: - import email_validator - - assert email_validator # make autoflake ignore the unused import - from pydantic import EmailStr -except ImportError: # pragma: no cover - - class EmailStr(str): # type: ignore - @classmethod - def __get_validators__(cls) -> Iterable[Callable[..., Any]]: - yield cls.validate - - @classmethod - def validate(cls, v: Any) -> str: - logger.warning( - "email-validator not installed, email fields will be treated as str.\n" - "To install, run: pip install email-validator" - ) - return str(v) - - @classmethod - def _validate(cls, __input_value: Any, _: Any) -> str: - logger.warning( - "email-validator not installed, email fields will be treated as str.\n" - "To install, run: pip install email-validator" - ) - return str(__input_value) - - @classmethod - def __get_pydantic_json_schema__( - cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler - ) -> JsonSchemaValue: - return {"type": "string", "format": "email"} - - @classmethod - def __get_pydantic_core_schema__( - cls, source: Type[Any], handler: Callable[[Any], CoreSchema] - ) -> CoreSchema: - return general_plain_validator_function(cls._validate) - - -class Contact(BaseModel): - name: Optional[str] = None - url: Optional[AnyUrl] = None - email: Optional[EmailStr] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class License(BaseModel): - name: str - identifier: Optional[str] = None - url: Optional[AnyUrl] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Info(BaseModel): - title: str - summary: Optional[str] = None - description: Optional[str] = None - termsOfService: Optional[str] = None - contact: Optional[Contact] = None - license: Optional[License] = None - version: str - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class ServerVariable(BaseModel): - enum: Annotated[Optional[List[str]], Field(min_length=1)] = None - default: str - description: Optional[str] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Server(BaseModel): - url: Union[AnyUrl, str] - description: Optional[str] = None - variables: Optional[Dict[str, ServerVariable]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Reference(BaseModel): - ref: str = Field(alias="$ref") - - -class Discriminator(BaseModel): - propertyName: str - mapping: Optional[Dict[str, str]] = None - - -class XML(BaseModel): - name: Optional[str] = None - namespace: Optional[str] = None - prefix: Optional[str] = None - attribute: Optional[bool] = None - wrapped: Optional[bool] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class ExternalDocumentation(BaseModel): - description: Optional[str] = None - url: AnyUrl - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Schema(BaseModel): - # Ref: JSON Schema 2020-12: https://json-schema.org/draft/2020-12/json-schema-core.html#name-the-json-schema-core-vocabu - # Core 
Vocabulary - schema_: Optional[str] = Field(default=None, alias="$schema") - vocabulary: Optional[str] = Field(default=None, alias="$vocabulary") - id: Optional[str] = Field(default=None, alias="$id") - anchor: Optional[str] = Field(default=None, alias="$anchor") - dynamicAnchor: Optional[str] = Field(default=None, alias="$dynamicAnchor") - ref: Optional[str] = Field(default=None, alias="$ref") - dynamicRef: Optional[str] = Field(default=None, alias="$dynamicRef") - defs: Optional[Dict[str, "SchemaOrBool"]] = Field(default=None, alias="$defs") - comment: Optional[str] = Field(default=None, alias="$comment") - # Ref: JSON Schema 2020-12: https://json-schema.org/draft/2020-12/json-schema-core.html#name-a-vocabulary-for-applying-s - # A Vocabulary for Applying Subschemas - allOf: Optional[List["SchemaOrBool"]] = None - anyOf: Optional[List["SchemaOrBool"]] = None - oneOf: Optional[List["SchemaOrBool"]] = None - not_: Optional["SchemaOrBool"] = Field(default=None, alias="not") - if_: Optional["SchemaOrBool"] = Field(default=None, alias="if") - then: Optional["SchemaOrBool"] = None - else_: Optional["SchemaOrBool"] = Field(default=None, alias="else") - dependentSchemas: Optional[Dict[str, "SchemaOrBool"]] = None - prefixItems: Optional[List["SchemaOrBool"]] = None - # TODO: uncomment and remove below when deprecating Pydantic v1 - # It generales a list of schemas for tuples, before prefixItems was available - # items: Optional["SchemaOrBool"] = None - items: Optional[Union["SchemaOrBool", List["SchemaOrBool"]]] = None - contains: Optional["SchemaOrBool"] = None - properties: Optional[Dict[str, "SchemaOrBool"]] = None - patternProperties: Optional[Dict[str, "SchemaOrBool"]] = None - additionalProperties: Optional["SchemaOrBool"] = None - propertyNames: Optional["SchemaOrBool"] = None - unevaluatedItems: Optional["SchemaOrBool"] = None - unevaluatedProperties: Optional["SchemaOrBool"] = None - # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-structural - # A Vocabulary for Structural Validation - type: Optional[str] = None - enum: Optional[List[Any]] = None - const: Optional[Any] = None - multipleOf: Optional[float] = Field(default=None, gt=0) - maximum: Optional[float] = None - exclusiveMaximum: Optional[float] = None - minimum: Optional[float] = None - exclusiveMinimum: Optional[float] = None - maxLength: Optional[int] = Field(default=None, ge=0) - minLength: Optional[int] = Field(default=None, ge=0) - pattern: Optional[str] = None - maxItems: Optional[int] = Field(default=None, ge=0) - minItems: Optional[int] = Field(default=None, ge=0) - uniqueItems: Optional[bool] = None - maxContains: Optional[int] = Field(default=None, ge=0) - minContains: Optional[int] = Field(default=None, ge=0) - maxProperties: Optional[int] = Field(default=None, ge=0) - minProperties: Optional[int] = Field(default=None, ge=0) - required: Optional[List[str]] = None - dependentRequired: Optional[Dict[str, Set[str]]] = None - # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-vocabularies-for-semantic-c - # Vocabularies for Semantic Content With "format" - format: Optional[str] = None - # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-the-conten - # A Vocabulary for the Contents of String-Encoded Data - contentEncoding: Optional[str] = None - contentMediaType: Optional[str] = None - contentSchema: 
Optional["SchemaOrBool"] = None - # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-basic-meta - # A Vocabulary for Basic Meta-Data Annotations - title: Optional[str] = None - description: Optional[str] = None - default: Optional[Any] = None - deprecated: Optional[bool] = None - readOnly: Optional[bool] = None - writeOnly: Optional[bool] = None - examples: Optional[List[Any]] = None - # Ref: OpenAPI 3.1.0: https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.0.md#schema-object - # Schema Object - discriminator: Optional[Discriminator] = None - xml: Optional[XML] = None - externalDocs: Optional[ExternalDocumentation] = None - example: Annotated[ - Optional[Any], - typing_deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." - ), - ] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -# Ref: https://json-schema.org/draft/2020-12/json-schema-core.html#name-json-schema-documents -# A JSON Schema MUST be an object or a boolean. -SchemaOrBool = Union[Schema, bool] - - -class Example(BaseModel): - summary: Optional[str] = None - description: Optional[str] = None - value: Optional[Any] = None - externalValue: Optional[AnyUrl] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class ParameterInType(Enum): - query = "query" - header = "header" - path = "path" - cookie = "cookie" - - -class Encoding(BaseModel): - contentType: Optional[str] = None - headers: Optional[Dict[str, Union["Header", Reference]]] = None - style: Optional[str] = None - explode: Optional[bool] = None - allowReserved: Optional[bool] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class MediaType(BaseModel): - schema_: Optional[Union[Schema, Reference]] = Field(default=None, alias="schema") - example: Optional[Any] = None - examples: Optional[Dict[str, Union[Example, Reference]]] = None - encoding: Optional[Dict[str, Encoding]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class ParameterBase(BaseModel): - description: Optional[str] = None - required: Optional[bool] = None - deprecated: Optional[bool] = None - # Serialization rules for simple scenarios - style: Optional[str] = None - explode: Optional[bool] = None - allowReserved: Optional[bool] = None - schema_: Optional[Union[Schema, Reference]] = Field(default=None, alias="schema") - example: Optional[Any] = None - examples: Optional[Dict[str, Union[Example, Reference]]] = None - # Serialization rules for more complex scenarios - content: Optional[Dict[str, MediaType]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Parameter(ParameterBase): - name: str - in_: ParameterInType = Field(alias="in") - - -class Header(ParameterBase): - pass - - -class RequestBody(BaseModel): - description: Optional[str] = None - content: Dict[str, MediaType] - required: Optional[bool] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Link(BaseModel): - operationRef: Optional[str] = None - operationId: Optional[str] = None - parameters: Optional[Dict[str, Union[Any, str]]] = None - requestBody: Optional[Union[Any, str]] = None - 
description: Optional[str] = None - server: Optional[Server] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Response(BaseModel): - description: str - headers: Optional[Dict[str, Union[Header, Reference]]] = None - content: Optional[Dict[str, MediaType]] = None - links: Optional[Dict[str, Union[Link, Reference]]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Operation(BaseModel): - tags: Optional[List[str]] = None - summary: Optional[str] = None - description: Optional[str] = None - externalDocs: Optional[ExternalDocumentation] = None - operationId: Optional[str] = None - parameters: Optional[List[Union[Parameter, Reference]]] = None - requestBody: Optional[Union[RequestBody, Reference]] = None - # Using Any for Specification Extensions - responses: Optional[Dict[str, Union[Response, Any]]] = None - callbacks: Optional[Dict[str, Union[Dict[str, "PathItem"], Reference]]] = None - deprecated: Optional[bool] = None - security: Optional[List[Dict[str, List[str]]]] = None - servers: Optional[List[Server]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class PathItem(BaseModel): - ref: Optional[str] = Field(default=None, alias="$ref") - summary: Optional[str] = None - description: Optional[str] = None - get: Optional[Operation] = None - put: Optional[Operation] = None - post: Optional[Operation] = None - delete: Optional[Operation] = None - options: Optional[Operation] = None - head: Optional[Operation] = None - patch: Optional[Operation] = None - trace: Optional[Operation] = None - servers: Optional[List[Server]] = None - parameters: Optional[List[Union[Parameter, Reference]]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class SecuritySchemeType(Enum): - apiKey = "apiKey" - http = "http" - oauth2 = "oauth2" - openIdConnect = "openIdConnect" - - -class SecurityBase(BaseModel): - type_: SecuritySchemeType = Field(alias="type") - description: Optional[str] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class APIKeyIn(Enum): - query = "query" - header = "header" - cookie = "cookie" - - -class APIKey(SecurityBase): - type_: SecuritySchemeType = Field(default=SecuritySchemeType.apiKey, alias="type") - in_: APIKeyIn = Field(alias="in") - name: str - - -class HTTPBase(SecurityBase): - type_: SecuritySchemeType = Field(default=SecuritySchemeType.http, alias="type") - scheme: str - - -class HTTPBearer(HTTPBase): - scheme: Literal["bearer"] = "bearer" - bearerFormat: Optional[str] = None - - -class OAuthFlow(BaseModel): - refreshUrl: Optional[str] = None - scopes: Dict[str, str] = {} - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class OAuthFlowImplicit(OAuthFlow): - authorizationUrl: str - - -class OAuthFlowPassword(OAuthFlow): - tokenUrl: str - - -class OAuthFlowClientCredentials(OAuthFlow): - tokenUrl: str - - -class OAuthFlowAuthorizationCode(OAuthFlow): - authorizationUrl: str - tokenUrl: str - - -class OAuthFlows(BaseModel): - implicit: Optional[OAuthFlowImplicit] = None - password: Optional[OAuthFlowPassword] = None - clientCredentials: Optional[OAuthFlowClientCredentials] = None - authorizationCode: Optional[OAuthFlowAuthorizationCode] = None - - if PYDANTIC_V2: - model_config = {"extra": 
"allow"} - - else: - - class Config: - extra = "allow" - - -class OAuth2(SecurityBase): - type_: SecuritySchemeType = Field(default=SecuritySchemeType.oauth2, alias="type") - flows: OAuthFlows - - -class OpenIdConnect(SecurityBase): - type_: SecuritySchemeType = Field( - default=SecuritySchemeType.openIdConnect, alias="type" - ) - openIdConnectUrl: str - - -SecurityScheme = Union[APIKey, HTTPBase, OAuth2, OpenIdConnect, HTTPBearer] - - -class Components(BaseModel): - schemas: Optional[Dict[str, Union[Schema, Reference]]] = None - responses: Optional[Dict[str, Union[Response, Reference]]] = None - parameters: Optional[Dict[str, Union[Parameter, Reference]]] = None - examples: Optional[Dict[str, Union[Example, Reference]]] = None - requestBodies: Optional[Dict[str, Union[RequestBody, Reference]]] = None - headers: Optional[Dict[str, Union[Header, Reference]]] = None - securitySchemes: Optional[Dict[str, Union[SecurityScheme, Reference]]] = None - links: Optional[Dict[str, Union[Link, Reference]]] = None - # Using Any for Specification Extensions - callbacks: Optional[Dict[str, Union[Dict[str, PathItem], Reference, Any]]] = None - pathItems: Optional[Dict[str, Union[PathItem, Reference]]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Tag(BaseModel): - name: str - description: Optional[str] = None - externalDocs: Optional[ExternalDocumentation] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class OpenAPI(BaseModel): - openapi: str - info: Info - jsonSchemaDialect: Optional[str] = None - servers: Optional[List[Server]] = None - # Using Any for Specification Extensions - paths: Optional[Dict[str, Union[PathItem, Any]]] = None - webhooks: Optional[Dict[str, Union[PathItem, Reference]]] = None - components: Optional[Components] = None - security: Optional[List[Dict[str, List[str]]]] = None - tags: Optional[List[Tag]] = None - externalDocs: Optional[ExternalDocumentation] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -_model_rebuild(Schema) -_model_rebuild(Operation) -_model_rebuild(Encoding) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Button-9b719f62.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Button-9b719f62.css deleted file mode 100644 index 1febd1de643feeadb668f5d0fc297f661ce47482..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Button-9b719f62.css +++ /dev/null @@ -1 +0,0 @@ 
-.block.svelte-90oupt{position:relative;margin:0;box-shadow:var(--block-shadow);border-width:var(--block-border-width);border-color:var(--block-border-color);border-radius:var(--block-radius);background:var(--block-background-fill);width:100%;line-height:var(--line-sm)}.block.border_focus.svelte-90oupt{border-color:var(--color-accent)}.padded.svelte-90oupt{padding:var(--block-padding)}.hidden.svelte-90oupt{display:none}.hide-container.svelte-90oupt{margin:0;box-shadow:none;--block-border-width:0;background:transparent;padding:0;overflow:visible}div.svelte-e8n7p6{margin-bottom:var(--spacing-lg);color:var(--block-info-text-color);font-weight:var(--block-info-text-weight);font-size:var(--block-info-text-size);line-height:var(--line-sm)}span.has-info.svelte-1gfkn6j{margin-bottom:var(--spacing-xs)}span.svelte-1gfkn6j:not(.has-info){margin-bottom:var(--spacing-lg)}span.svelte-1gfkn6j{display:inline-block;position:relative;z-index:var(--layer-4);border:solid var(--block-title-border-width) var(--block-title-border-color);border-radius:var(--block-title-radius);background:var(--block-title-background-fill);padding:var(--block-title-padding);color:var(--block-title-text-color);font-weight:var(--block-title-text-weight);font-size:var(--block-title-text-size);line-height:var(--line-sm)}.hide.svelte-1gfkn6j{margin:0;height:0}div.svelte-1mwvhlq{display:inline-flex;align-items:center;z-index:var(--layer-2);box-shadow:var(--block-label-shadow);border:var(--block-label-border-width) solid var(--border-color-primary);border-top:none;border-left:none;border-radius:var(--block-label-radius);background:var(--block-label-background-fill);padding:var(--block-label-padding);pointer-events:none;color:var(--block-label-text-color);font-weight:var(--block-label-text-weight);font-size:var(--block-label-text-size);line-height:var(--line-sm)}.gr-group div.svelte-1mwvhlq{border-top-left-radius:0}div.float.svelte-1mwvhlq{position:absolute;top:var(--block-label-margin);left:var(--block-label-margin)}div.svelte-1mwvhlq:not(.float){position:static;margin-top:var(--block-label-margin);margin-left:var(--block-label-margin)}.hide.svelte-1mwvhlq{height:0}span.svelte-1mwvhlq{opacity:.8;margin-right:var(--size-2);width:calc(var(--block-label-text-size) - 1px);height:calc(var(--block-label-text-size) - 1px)}.hide-label.svelte-1mwvhlq{box-shadow:none;border-width:0;background:transparent;overflow:visible}button.svelte-1030q2h{display:flex;justify-content:center;align-items:center;gap:1px;z-index:var(--layer-1);box-shadow:var(--shadow-drop);border:1px solid var(--button-secondary-border-color);border-radius:var(--radius-sm);background:var(--background-fill-primary);padding:2px;color:var(--block-label-text-color)}button.svelte-1030q2h:hover{cursor:pointer;border:2px solid var(--button-secondary-border-color-hover);padding:1px;color:var(--block-label-text-color)}span.svelte-1030q2h{padding:0 1px;font-size:10px}div.svelte-1030q2h{padding:2px;width:14px;height:14px}.pending.svelte-1030q2h{animation:svelte-1030q2h-flash .5s infinite}@keyframes svelte-1030q2h-flash{0%{opacity:.5}50%{opacity:1}to{opacity:.5}}.empty.svelte-lk9eg8{display:flex;justify-content:center;align-items:center;margin-top:calc(0px - var(--size-6));height:var(--size-full)}.icon.svelte-lk9eg8{opacity:.5;height:var(--size-5);color:var(--body-text-color)}.small.svelte-lk9eg8{min-height:calc(var(--size-32) - 20px)}.large.svelte-lk9eg8{min-height:calc(var(--size-64) - 
20px)}.unpadded_box.svelte-lk9eg8{margin-top:0}.small_parent.svelte-lk9eg8{min-height:100%!important}.dropdown-arrow.svelte-p5edak{fill:var(--body-text-color);margin-right:var(--size-2);width:var(--size-5)}button.svelte-1e89no8{display:inline-flex;justify-content:center;align-items:center;transition:var(--button-transition);box-shadow:var(--button-shadow);padding:var(--size-0-5) var(--size-2);text-align:center}button.svelte-1e89no8:hover,button[disabled].svelte-1e89no8{box-shadow:var(--button-shadow-hover)}button.svelte-1e89no8:active{box-shadow:var(--button-shadow-active)}button[disabled].svelte-1e89no8{opacity:.5;filter:grayscale(30%);cursor:not-allowed}.hidden.svelte-1e89no8{display:none}.primary.svelte-1e89no8{border:var(--button-border-width) solid var(--button-primary-border-color);background:var(--button-primary-background-fill);color:var(--button-primary-text-color)}.primary.svelte-1e89no8:hover,.primary[disabled].svelte-1e89no8{border-color:var(--button-primary-border-color-hover);background:var(--button-primary-background-fill-hover);color:var(--button-primary-text-color-hover)}.secondary.svelte-1e89no8{border:var(--button-border-width) solid var(--button-secondary-border-color);background:var(--button-secondary-background-fill);color:var(--button-secondary-text-color)}.secondary.svelte-1e89no8:hover,.secondary[disabled].svelte-1e89no8{border-color:var(--button-secondary-border-color-hover);background:var(--button-secondary-background-fill-hover);color:var(--button-secondary-text-color-hover)}.stop.svelte-1e89no8{border:var(--button-border-width) solid var(--button-cancel-border-color);background:var(--button-cancel-background-fill);color:var(--button-cancel-text-color)}.stop.svelte-1e89no8:hover,.stop[disabled].svelte-1e89no8{border-color:var(--button-cancel-border-color-hover);background:var(--button-cancel-background-fill-hover);color:var(--button-cancel-text-color-hover)}.sm.svelte-1e89no8{border-radius:var(--button-small-radius);padding:var(--button-small-padding);font-weight:var(--button-small-text-weight);font-size:var(--button-small-text-size)}.lg.svelte-1e89no8{border-radius:var(--button-large-radius);padding:var(--button-large-padding);font-weight:var(--button-large-text-weight);font-size:var(--button-large-text-size)} diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/index.html b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/index.html deleted file mode 100644 index 78e36810f98d2d6ec71a95092a1d7828a4ffc972..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/index.html +++ /dev/null @@ -1,84 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/spaces/DRAGSclub/README/README.md b/spaces/DRAGSclub/README/README.md deleted file mode 100644 index 2ac40a86a64be6bd47e89f8e15493adbf433833e..0000000000000000000000000000000000000000 --- a/spaces/DRAGSclub/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: README -emoji: 🔥 -colorFrom: purple -colorTo: indigo -sdk: static -pinned: false ---- - -Edit this `README.md` markdown file to author your organization card 🔥 diff --git a/spaces/Darkk88/medium-GPT4/app.py b/spaces/Darkk88/medium-GPT4/app.py deleted file mode 100644 index 9caa518a2040f2462c7ba70d684f3ed92bf2185d..0000000000000000000000000000000000000000 --- a/spaces/Darkk88/medium-GPT4/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - 
-gr.Interface.load("models/ingen51/DialoGPT-medium-GPT4").launch() \ No newline at end of file diff --git a/spaces/Datasculptor/MusicGen/audiocraft/data/__init__.py b/spaces/Datasculptor/MusicGen/audiocraft/data/__init__.py deleted file mode 100644 index 708a3dcead8dda89374a021177481dacae9f7fe9..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/MusicGen/audiocraft/data/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from . import audio, audio_dataset diff --git a/spaces/Deepak107/Bottle_images/README.md b/spaces/Deepak107/Bottle_images/README.md deleted file mode 100644 index de2e2c1f760a9362ce8999ad2f0b7a14ea1ea83d..0000000000000000000000000000000000000000 --- a/spaces/Deepak107/Bottle_images/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Bottle Images -emoji: 🐢 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.2 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Duskfallcrew/textual-inversion-training/app.py b/spaces/Duskfallcrew/textual-inversion-training/app.py deleted file mode 100644 index f6ed5cd899a841034993df3f7e6861811b7a0442..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/textual-inversion-training/app.py +++ /dev/null @@ -1,559 +0,0 @@ -import gradio as gr -import os -from pathlib import Path -import argparse -import shutil -# from train_dreambooth import run_training -from textual_inversion import run_training -from convertosd import convert -from PIL import Image -from slugify import slugify -import requests -import torch -import zipfile -import tarfile -import urllib.parse -import gc -from diffusers import StableDiffusionPipeline -from huggingface_hub import snapshot_download - - -is_spaces = True if "SPACE_ID" in os.environ else False -#is_shared_ui = True if "IS_SHARED_UI" in os.environ else False -if(is_spaces): - is_shared_ui = True if ("lvkaokao/textual-inversion-training" in os.environ['SPACE_ID'] or "Intel/textual-inversion-training" in os.environ['SPACE_ID']) else False -else: - is_shared_ui = False - -css = ''' - .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important} - .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important} - #component-4, #component-3, #component-10{min-height: 0} - .duplicate-button img{margin: 0} -''' -maximum_concepts = 1 - -#Pre download the files -''' -model_v1_4 = snapshot_download(repo_id="CompVis/stable-diffusion-v1-4") -#model_v1_5 = snapshot_download(repo_id="runwayml/stable-diffusion-v1-5") -model_v1_5 = snapshot_download(repo_id="stabilityai/stable-diffusion-2") -model_v2_512 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-base", revision="fp16") -safety_checker = snapshot_download(repo_id="multimodalart/sd-sc") -''' -model_v1_4 = "CompVis/stable-diffusion-v1-4" -model_v1_5 = "stabilityai/stable-diffusion-2" -model_v2_512 = "stabilityai/stable-diffusion-2-base" - -model_to_load = model_v1_4 - - -with zipfile.ZipFile("mix.zip", 'r') as zip_ref: - zip_ref.extractall(".") - -def swap_text(option): - mandatory_liability = "You must have the right to do so and you are liable for the images you use, example:" - if(option == "object"): - instance_prompt_example = "cttoy" - freeze_for = 30 - return 
[f"You are going to train `object`(s), upload 5-10 images of each object you are planning on training on from different angles/perspectives. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for, gr.update(visible=False)] - elif(option == "person"): - instance_prompt_example = "julcto" - freeze_for = 70 - return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for, gr.update(visible=True)] - elif(option == "style"): - instance_prompt_example = "trsldamrl" - freeze_for = 10 - return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. Name the files with the words you would like {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for, gr.update(visible=False)] - -def swap_base_model(selected_model): - global model_to_load - if(selected_model == "v1-4"): - model_to_load = model_v1_4 - elif(selected_model == "v1-5"): - model_to_load = model_v1_5 - else: - model_to_load = model_v2_512 - -def count_files(*inputs): - file_counter = 0 - concept_counter = 0 - for i, input in enumerate(inputs): - if(i < maximum_concepts-1): - files = inputs[i] - if(files): - concept_counter+=1 - file_counter+=len(files) - uses_custom = inputs[-1] - type_of_thing = inputs[-4] - if(uses_custom): - Training_Steps = int(inputs[-3]) - else: - Training_Steps = file_counter*200 - if(Training_Steps > 2400): - Training_Steps=2400 - elif(Training_Steps < 1400): - Training_Steps=1400 - if(is_spaces): - summary_sentence = f'''The training should take around 24 hours for 1000 steps using the default free CPU.

      ''' - else: - summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps.

      ''' - - return([gr.update(visible=True), gr.update(visible=True, value=summary_sentence)]) - -def update_steps(*files_list): - file_counter = 0 - for i, files in enumerate(files_list): - if(files): - file_counter+=len(files) - return(gr.update(value=file_counter*200)) - -def pad_image(image): - w, h = image.size - if w == h: - return image - elif w > h: - new_image = Image.new(image.mode, (w, w), (0, 0, 0)) - new_image.paste(image, (0, (w - h) // 2)) - return new_image - else: - new_image = Image.new(image.mode, (h, h), (0, 0, 0)) - new_image.paste(image, ((h - w) // 2, 0)) - return new_image - -def train(*inputs): - if is_shared_ui: - raise gr.Error("This Space only works in duplicated instances") - - torch.cuda.empty_cache() - if 'pipe' in globals(): - global pipe, pipe_is_set - del pipe - pipe_is_set = False - gc.collect() - - if os.path.exists("output_model"): shutil.rmtree('output_model') - if os.path.exists("concept_images"): shutil.rmtree('concept_images') - if os.path.exists("diffusers_model.tar"): os.remove("diffusers_model.tar") - if os.path.exists("model.ckpt"): os.remove("model.ckpt") - if os.path.exists("hastrained.success"): os.remove("hastrained.success") - file_counter = 0 - print(inputs) - - os.makedirs('concept_images', exist_ok=True) - files = inputs[maximum_concepts*3] - init_word = inputs[maximum_concepts*2] - prompt = inputs[maximum_concepts] - if(prompt == "" or prompt == None): - raise gr.Error("You forgot to define your concept prompt") - - for j, file_temp in enumerate(files): - file = Image.open(file_temp.name) - image = pad_image(file) - image = image.resize((512, 512)) - extension = file_temp.name.split(".")[1] - image = image.convert('RGB') - image.save(f'concept_images/{j+1}.jpg', format="JPEG", quality = 100) - file_counter += 1 - - - os.makedirs('output_model',exist_ok=True) - uses_custom = inputs[-1] - type_of_thing = inputs[-4] - remove_attribution_after = inputs[-6] - experimental_face_improvement = inputs[-9] - which_model = inputs[-10] - if(uses_custom): - Training_Steps = int(inputs[-3]) - else: - Training_Steps = 1000 - - print(os.listdir("concept_images")) - - args_general = argparse.Namespace( - pretrained_model_name_or_path = model_to_load, - train_data_dir="concept_images", - learnable_property=type_of_thing, - placeholder_token=prompt, - initializer_token=init_word, - resolution=512, - train_batch_size=1, - gradient_accumulation_steps=2, - use_bf16=True, - max_train_steps=Training_Steps, - learning_rate=5.0e-4, - scale_lr=True, - lr_scheduler="constant", - lr_warmup_steps=0, - output_dir="output_model", - ) - print("Starting single training...") - lock_file = open("intraining.lock", "w") - lock_file.close() - run_training(args_general) - - gc.collect() - torch.cuda.empty_cache() - if(which_model in ["v1-5"]): - print("Adding Safety Checker to the model...") - shutil.copytree(f"{safety_checker}/feature_extractor", "output_model/feature_extractor") - shutil.copytree(f"{safety_checker}/safety_checker", "output_model/safety_checker") - shutil.copy(f"model_index.json", "output_model/model_index.json") - - if(not remove_attribution_after): - print("Archiving model file...") - with tarfile.open("diffusers_model.tar", "w") as tar: - tar.add("output_model", arcname=os.path.basename("output_model")) - if os.path.exists("intraining.lock"): os.remove("intraining.lock") - trained_file = open("hastrained.success", "w") - trained_file.close() - print(os.listdir("output_model")) - print("Training completed!") - return [ - gr.update(visible=True, 
value=["diffusers_model.tar"]), #result - gr.update(visible=True), #try_your_model - gr.update(visible=True), #push_to_hub - gr.update(visible=True), #convert_button - gr.update(visible=False), #training_ongoing - gr.update(visible=True) #completed_training - ] - else: - hf_token = inputs[-5] - model_name = inputs[-7] - where_to_upload = inputs[-8] - push(model_name, where_to_upload, hf_token, which_model, True) - hardware_url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware" - headers = { "authorization" : f"Bearer {hf_token}"} - body = {'flavor': 'cpu-basic'} - requests.post(hardware_url, json = body, headers=headers) - -import time -pipe_is_set = False -def generate(prompt, steps): - - print("prompt: ", prompt) - print("steps: ", steps) - - torch.cuda.empty_cache() - from diffusers import StableDiffusionPipeline - global pipe_is_set - if(not pipe_is_set): - global pipe - if torch.cuda.is_available(): - pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16) - pipe = pipe.to("cuda") - else: - pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float) - pipe_is_set = True - - start_time = time.time() - image = pipe(prompt, num_inference_steps=steps, guidance_scale=7.5).images[0] - print("cost: ", time.time() - start_time) - return(image) - -def push(model_name, where_to_upload, hf_token, which_model, comes_from_automated=False): - - if(not os.path.exists("model.ckpt")): - convert("output_model", "model.ckpt") - from huggingface_hub import HfApi, HfFolder, CommitOperationAdd - from huggingface_hub import create_repo - model_name_slug = slugify(model_name) - api = HfApi() - your_username = api.whoami(token=hf_token)["name"] - if(where_to_upload == "My personal profile"): - model_id = f"{your_username}/{model_name_slug}" - else: - model_id = f"sd-dreambooth-library/{model_name_slug}" - headers = {"Authorization" : f"Bearer: {hf_token}", "Content-Type": "application/json"} - response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers) - - images_upload = os.listdir("concept_images") - image_string = "" - instance_prompt_list = [] - previous_instance_prompt = '' - for i, image in enumerate(images_upload): - instance_prompt = image.split("_")[0] - if(instance_prompt != previous_instance_prompt): - title_instance_prompt_string = instance_prompt - instance_prompt_list.append(instance_prompt) - else: - title_instance_prompt_string = '' - previous_instance_prompt = instance_prompt - image_string = f'''{title_instance_prompt_string} {"(use that on your prompt)" if title_instance_prompt_string != "" else ""} -{image_string}![{instance_prompt} {i}](https://huggingface.co/{model_id}/resolve/main/concept_images/{urllib.parse.quote(image)})''' - readme_text = f'''--- -license: creativeml-openrail-m -tags: -- text-to-image ---- -### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the {which_model} base model - -You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! 
- -Sample pictures of: -{image_string} -''' - #Save the readme to a file - readme_file = open("model.README.md", "w") - readme_file.write(readme_text) - readme_file.close() - #Save the token identifier to a file - text_file = open("token_identifier.txt", "w") - text_file.write(', '.join(instance_prompt_list)) - text_file.close() - try: - create_repo(model_id,private=True, token=hf_token) - except: - import time - epoch_time = str(int(time.time())) - create_repo(f"{model_id}-{epoch_time}", private=True,token=hf_token) - operations = [ - CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"), - CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="model.README.md"), - CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt") - ] - api.create_commit( - repo_id=model_id, - operations=operations, - commit_message=f"Upload the model {model_name}", - token=hf_token - ) - api.upload_folder( - folder_path="output_model", - repo_id=model_id, - token=hf_token - ) - api.upload_folder( - folder_path="concept_images", - path_in_repo="concept_images", - repo_id=model_id, - token=hf_token - ) - if is_spaces: - if(not comes_from_automated): - extra_message = "Don't forget to remove the GPU attribution after you play with it." - else: - extra_message = "The GPU has been removed automatically as requested, and you can try the model via the model page" - api.create_discussion(repo_id=os.environ['SPACE_ID'], title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!", description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}",repo_type="space", token=hf_token) - - return [gr.update(visible=True, value=f"Successfully uploaded your model. Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])] - -def convert_to_ckpt(): - convert("output_model", "model.ckpt") - return gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"]) - -def check_status(top_description): - print('=='*20) - print(os.listdir("./")) - - if os.path.exists("hastrained.success"): - if is_spaces: - update_top_tag = gr.update(value=f''' -
      -

      Your model has finished training ✅

      -

Yay, congratulations on training your model. Scroll down to play with it and save it (either by downloading it or by uploading it to the Hugging Face Hub). Once you are done and your model is safe, if you don't want to train a new one, go to the settings page and downgrade your Space to a CPU Basic

      -
      - ''') - else: - update_top_tag = gr.update(value=f''' -
      -

      Your model has finished training ✅

      -

Yay, congratulations on training your model. Scroll down to play with it and save it (either by downloading it or by uploading it to the Hugging Face Hub).

      -
      - ''') - show_outputs = True - elif os.path.exists("intraining.lock"): - update_top_tag = gr.update(value=''' -
      -

      Don't worry, your model is still training! ⌛

      -

You closed the tab while your model was training, but it's all good! It is still training right now. You can click the "Open logs" button above to check the training status. Once training is done, reload this tab to interact with your model

      -
      - ''') - show_outputs = False - else: - update_top_tag = gr.update(value=top_description) - show_outputs = False - if os.path.exists("diffusers_model.tar"): - update_files_tag = gr.update(visible=show_outputs, value=["diffusers_model.tar"]) - else: - update_files_tag = gr.update(visible=show_outputs) - return [ - update_top_tag, #top_description - gr.update(visible=show_outputs), #try_your_model - gr.update(visible=show_outputs), #push_to_hub - update_files_tag, #result - gr.update(visible=show_outputs), #convert_button - ] - -def checkbox_swap(checkbox): - return [gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox)] - -with gr.Blocks(css=css) as demo: - with gr.Box(): - if is_shared_ui: - top_description = gr.HTML(f''' -
      -

      Attention - This Space doesn't work in this shared UI

      -

For it to work, you can either run it locally or duplicate the Space and run it on your own profile using the free CPU or a (paid) private T4 GPU for training. CPU training takes a long time, while a T4 costs US$0.60/h, so training most models with the default settings should cost under US$1!  Duplicate Space

      - - -
      - ''') - elif(is_spaces): - top_description = gr.HTML(f''' -
      -

      You have successfully duplicated the Textual Inversion Training Space 🎉

      -

If you want to use a CPU, the training below will take a long time to run. If you want to use a GPU, get it ready first: attribute a T4 GPU to this Space (via the Settings tab) and then run the training below. You will be billed by the minute from when you activate the GPU until it is turned off.

      -
      - ''') - else: - top_description = gr.HTML(f''' -
      -

You have successfully cloned the Textual Inversion Training Space locally 🎉

      -

Run pip install -r requirements-local.txt

      -
      - ''') - gr.Markdown("# Textual Inversion Training UI 💭") - gr.Markdown("Customize Stable Diffusion by training it on a new concept. This Space is based on [Intel® Neural Compressor](https://github.com/intel/neural-compressor/tree/master/examples/pytorch/diffusion_model/diffusers/textual_inversion) with [🧨 diffusers](https://github.com/huggingface/diffusers)") - - with gr.Row() as what_are_you_training: - type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True) - base_model_to_use = gr.Dropdown(label="Which base model would you like to use?", choices=["v1-4", "v1-5", "v2-512"], value="v1-4", interactive=True) - - #Very hacky approach to emulate dynamically created Gradio components - with gr.Row() as upload_your_concept: - with gr.Column(): - thing_description = gr.Markdown("You are going to train an `object`, please upload 1-5 images of the object to teach new concepts to Stable Diffusion, example") - thing_experimental = gr.Checkbox(label="Improve faces (prior preservation) - can take longer training but can improve faces", visible=False, value=False) - thing_image_example = gr.HTML('''''') - things_naming = gr.Markdown("You should name your concept with a unique made up word that never appears in the model vocab (e.g.: `dicoo*` here). **The meaning of the initial word** is to initialize the concept word embedding which will make training easy (e.g.: `toy` here). Images will be automatically cropped to 512x512.") - - with gr.Column(): - file_collection = [] - concept_collection = [] - init_collection = [] - buttons_collection = [] - delete_collection = [] - is_visible = [] - - row = [None] * maximum_concepts - for x in range(maximum_concepts): - ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4]) - if(x == 0): - visible = True - is_visible.append(gr.State(value=True)) - else: - visible = False - is_visible.append(gr.State(value=False)) - - file_collection.append(gr.File(label=f'''Upload the images for your {ordinal(x+1) if (x>0) else ""} concept''', file_count="multiple", interactive=True, visible=visible)) - with gr.Column(visible=visible) as row[x]: - concept_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} concept word - use a unique, made up word to avoid collisions''')) - init_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} initial word - to init the concept embedding''')) - with gr.Row(): - if(x < maximum_concepts-1): - buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible)) - if(x > 0): - delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept")) - - counter_add = 1 - for button in buttons_collection: - if(counter_add < len(buttons_collection)): - button.click(lambda: - [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None], - None, - [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False) - else: - button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False) - counter_add += 1 - - counter_delete = 1 - for delete_button in delete_collection: - if(counter_delete < len(delete_collection)+1): - 
delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, [file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False) - counter_delete += 1 - - with gr.Accordion("Custom Settings", open=False): - swap_auto_calculated = gr.Checkbox(label="Use custom settings") - gr.Markdown("The default steps is 1000. If your results aren't really what you wanted, it may be underfitting and you need more steps.") - steps = gr.Number(label="How many steps", value=1000) - # need to remove - perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30, visible=False) - # perc_txt_encoder = 30 - - with gr.Box(visible=False) as training_summary: - training_summary_text = gr.HTML("", visible=False, label="Training Summary") - is_advanced_visible = True if is_spaces else False - training_summary_checkbox = gr.Checkbox(label="Automatically remove paid GPU attribution and upload model to the Hugging Face Hub after training", value=False, visible=is_advanced_visible) - training_summary_model_name = gr.Textbox(label="Name of your model", visible=False) - training_summary_where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to", visible=False) - training_summary_token_message = gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.", visible=False) - training_summary_token = gr.Textbox(label="Hugging Face Write Token", type="password", visible=False) - - train_btn = gr.Button("Start Training") - - training_ongoing = gr.Markdown("## Training is ongoing ⌛... You can close this tab if you like or just wait. If you did not check the `Remove GPU After training`, you can come back here to try your model and upload it after training. Don't forget to remove the GPU attribution after you are done. ", visible=False) - - #Post-training UI - completed_training = gr.Markdown('''# ✅ Training completed. - ### Don't forget to remove the GPU attribution after you are done trying and uploading your model''', visible=False) - - with gr.Row(): - with gr.Box(visible=True) as try_your_model: - gr.Markdown("## Try your model") - prompt = gr.Textbox(label="Type your prompt") - result_image = gr.Image() - inference_steps = gr.Slider(minimum=1, maximum=150, value=50, step=1) - generate_button = gr.Button("Generate Image") - - with gr.Box(visible=False) as push_to_hub: - gr.Markdown("## Push to Hugging Face Hub") - model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style") - where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to") - gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. 
A regular read token won't work here.") - hf_token = gr.Textbox(label="Hugging Face Write Token", type="password") - - push_button = gr.Button("Push to the Hub") - - result = gr.File(label="Download the uploaded models in the diffusers format", visible=True) - success_message_upload = gr.Markdown(visible=False) - convert_button = gr.Button("Convert to CKPT", visible=False) - - #Swap the examples and the % of text encoder trained depending if it is an object, person or style - type_of_thing.change(fn=swap_text, inputs=[type_of_thing], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False) - - #Swap the base model - base_model_to_use.change(fn=swap_base_model, inputs=base_model_to_use, outputs=[]) - - #Update the summary box below the UI according to how many images are uploaded and whether users are using custom settings or not - for file in file_collection: - #file.change(fn=update_steps,inputs=file_collection, outputs=steps) - file.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - - steps.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - perc_txt_encoder.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - - #Give more options if the user wants to finish everything after training - if(is_spaces): - training_summary_checkbox.change(fn=checkbox_swap, inputs=training_summary_checkbox, outputs=[training_summary_token_message, training_summary_token, training_summary_model_name, training_summary_where_to_upload],queue=False, show_progress=False) - #Add a message for while it is in training - train_btn.click(lambda:gr.update(visible=True), inputs=None, outputs=training_ongoing) - - #The main train function - train_btn.click(fn=train, inputs=is_visible+concept_collection+init_collection+file_collection+[base_model_to_use]+[thing_experimental]+[training_summary_where_to_upload]+[training_summary_model_name]+[training_summary_checkbox]+[training_summary_token]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[result, try_your_model, push_to_hub, convert_button, training_ongoing, completed_training], queue=False) - - #Button to generate an image from your trained model after training - print('=='*20) - print(prompt) - print(inference_steps) - generate_button.click(fn=generate, inputs=[prompt, inference_steps], outputs=result_image, queue=False) - - #Button to push the model to the Hugging Face Hub - push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token, base_model_to_use], outputs=[success_message_upload, result], queue=False) - #Button to convert the model to ckpt format - convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result, queue=False) - - #Checks if the training is running - demo.load(fn=check_status, inputs=top_description, outputs=[top_description, try_your_model, push_to_hub, result, convert_button], queue=False, show_progress=False) - -demo.queue(default_enabled=False).launch(debug=True) diff --git a/spaces/ECCV2022/bytetrack/deploy/TensorRT/cpp/include/BYTETracker.h b/spaces/ECCV2022/bytetrack/deploy/TensorRT/cpp/include/BYTETracker.h deleted file mode 100644 index 
e3dda973fa27ccdb85a27841ec2a1cf8dcc1e9b0..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/deploy/TensorRT/cpp/include/BYTETracker.h +++ /dev/null @@ -1,49 +0,0 @@ -#pragma once - -#include "STrack.h" - -struct Object -{ - cv::Rect_<float> rect; - int label; - float prob; -}; - -class BYTETracker -{ -public: - BYTETracker(int frame_rate = 30, int track_buffer = 30); - ~BYTETracker(); - - vector<STrack> update(const vector<Object>& objects); - Scalar get_color(int idx); - -private: - vector<STrack*> joint_stracks(vector<STrack*> &tlista, vector<STrack> &tlistb); - vector<STrack> joint_stracks(vector<STrack> &tlista, vector<STrack> &tlistb); - - vector<STrack> sub_stracks(vector<STrack> &tlista, vector<STrack> &tlistb); - void remove_duplicate_stracks(vector<STrack> &resa, vector<STrack> &resb, vector<STrack> &stracksa, vector<STrack> &stracksb); - - void linear_assignment(vector<vector<float> > &cost_matrix, int cost_matrix_size, int cost_matrix_size_size, float thresh, - vector<vector<int> > &matches, vector<int> &unmatched_a, vector<int> &unmatched_b); - vector<vector<float> > iou_distance(vector<STrack*> &atracks, vector<STrack> &btracks, int &dist_size, int &dist_size_size); - vector<vector<float> > iou_distance(vector<STrack> &atracks, vector<STrack> &btracks); - vector<vector<float> > ious(vector<vector<float> > &atlbrs, vector<vector<float> > &btlbrs); - - double lapjv(const vector<vector<float> > &cost, vector<int> &rowsol, vector<int> &colsol, - bool extend_cost = false, float cost_limit = LONG_MAX, bool return_cost = true); - -private: - - float track_thresh; - float high_thresh; - float match_thresh; - int frame_id; - int max_time_lost; - - vector<STrack> tracked_stracks; - vector<STrack> lost_stracks; - vector<STrack> removed_stracks; - byte_kalman::KalmanFilter kalman_filter; -}; \ No newline at end of file diff --git a/spaces/EDGAhab/Aatrox-Talking/modules.py b/spaces/EDGAhab/Aatrox-Talking/modules.py deleted file mode 100644 index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000 --- a/spaces/EDGAhab/Aatrox-Talking/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 1." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dilated and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/EDGAhab/Aatrox-Talking/monotonic_align/core.c b/spaces/EDGAhab/Aatrox-Talking/monotonic_align/core.c deleted file mode 100644 index 5631d20a9a00db29e143a6e8e4e5c378d6bb850a..0000000000000000000000000000000000000000 --- a/spaces/EDGAhab/Aatrox-Talking/monotonic_align/core.c +++ /dev/null @@ -1,21299 +0,0 @@ -/* Generated by Cython 0.29.21 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "name": "monotonic_align.core", - "sources": [ - "core.pyx" - ] - }, - "module_name": "monotonic_align.core" -} -END: Cython Metadata */ - -#define PY_SSIZE_T_CLEAN -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.6+ or Python 3.3+. -#else -#define CYTHON_ABI "0_29_21" -#define CYTHON_HEX_VERSION 0x001D15F0 -#define CYTHON_FUTURE_DIVISION 0 -#include <stddef.h> -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #if PY_VERSION_HEX >= 0x02070000 - #define HAVE_LONG_LONG - #endif -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#ifdef PYPY_VERSION - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 
0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#elif defined(PYSTON_VERSION) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #elif !defined(CYTHON_USE_PYTYPE_LOOKUP) - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #elif !defined(CYTHON_USE_PYLONG_INTERNALS) - #define CYTHON_USE_PYLONG_INTERNALS 1 - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #ifndef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 1 - #endif - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000) - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1) - #endif - #ifndef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1) - #endif - #ifndef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3) - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if 
CYTHON_USE_PYLONG_INTERNALS - #include "longintrepr.h" - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR -# if defined(__cplusplus) - template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int32 uint32_t; - #endif - #endif -#else - #include <stdint.h> -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) && __cplusplus >= 201103L - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #elif __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__ ) && defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) - #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) - #define Py_OptimizeFlag 0 -#endif -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) - #define __Pyx_DefaultClassType PyClass_Type -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" -#if PY_VERSION_HEX >= 0x030800A4 && PY_VERSION_HEX < 0x030800B2 - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, 
n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif - #define __Pyx_DefaultClassType PyType_Type -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_FAST_PYCCALL -#define __Pyx_PyFastCFunction_Check(func)\ - ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))))) -#else -#define __Pyx_PyFastCFunction_Check(func) 0 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1 - #define PyMem_RawMalloc(n) PyMem_Malloc(n) - #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n) - #define PyMem_RawFree(p) PyMem_Free(p) -#endif -#if CYTHON_COMPILING_IN_PYSTON - #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} 
-static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -#else -#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name) -#endif -#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) - #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) - #if defined(PyUnicode_IS_READY) && defined(PyUnicode_GET_SIZE) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 
65535 : 1114111) - #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) -#else - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define 
PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define PyNumber_Int PyNumber_Long -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBoolObject PyLongObject -#endif -#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY - #ifndef PyUnicode_InternFromString - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) - #endif -#endif -#if PY_VERSION_HEX < 0x030200A4 - typedef long Py_hash_t; - #define __Pyx_PyInt_FromHash_t PyInt_FromLong - #define __Pyx_PyInt_AsHash_t PyInt_AsLong -#else - #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t - #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyMethod_New(func, self, klass) ((self) ? ((void)(klass), PyMethod_New(func, self)) : __Pyx_NewRef(func)) -#else - #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass) -#endif -#if CYTHON_USE_ASYNC_SLOTS - #if PY_VERSION_HEX >= 0x030500B1 - #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods - #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) - #else - #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) - #endif -#else - #define __Pyx_PyType_AsAsync(obj) NULL -#endif -#ifndef __Pyx_PyAsyncMethodsStruct - typedef struct { - unaryfunc am_await; - unaryfunc am_aiter; - unaryfunc am_anext; - } __Pyx_PyAsyncMethodsStruct; -#endif - -#if defined(WIN32) || defined(MS_WINDOWS) - #define _USE_MATH_DEFINES -#endif -#include <math.h> -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { - float value; - memset(&value, 0xFF, sizeof(value)); - return value; -} -#endif -#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#define __PYX_MARK_ERR_POS(f_index, lineno) \ - { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; } -#define __PYX_ERR(f_index, lineno, Ln_error) \ - { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifndef __PYX_EXTERN_C - #ifdef __cplusplus - #define __PYX_EXTERN_C extern "C" - #else - #define __PYX_EXTERN_C extern - #endif -#endif - -#define __PYX_HAVE__monotonic_align__core -#define __PYX_HAVE_API__monotonic_align__core -/* Early includes */ -#include "pythread.h" -#include <string.h> -#include <stdio.h> -#include <stdlib.h> -#include "pystate.h" -#ifdef _OPENMP -#include <omp.h> -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; - const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8) -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ - (sizeof(type) < sizeof(Py_ssize_t)) ||\ - (sizeof(type) > sizeof(Py_ssize_t) &&\ - likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX) &&\ - (!is_signed || likely(v 
> (type)PY_SSIZE_T_MIN ||\ - v == (type)PY_SSIZE_T_MIN))) ||\ - (sizeof(type) == sizeof(Py_ssize_t) &&\ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX))) ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L - #include <cstdlib> - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) - #define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ? -value : value) -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) { - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) 
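/* Editor's note: __Pyx_is_valid_index above folds the two checks 0 <= i && i < limit
   into one unsigned comparison: casting a negative Py_ssize_t to size_t wraps it far
   above any valid limit. A spelled-out equivalent for clarity (a sketch, not part of
   the generated module; the __pyx_demo_* name is hypothetical): */
static CYTHON_INLINE int __pyx_demo_is_valid_index(Py_ssize_t i, Py_ssize_t limit) {
    return (i >= 0) && (i < limit);  /* same truth table as the cast, two branches instead of one */
}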
-#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x)) -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* __PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - 
if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} -#endif -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ -static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } - -static PyObject *__pyx_m = NULL; -static PyObject *__pyx_d; -static PyObject *__pyx_b; -static PyObject *__pyx_cython_runtime = NULL; -static PyObject *__pyx_empty_tuple; -static PyObject *__pyx_empty_bytes; -static PyObject *__pyx_empty_unicode; -static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * __pyx_cfilenm= __FILE__; -static const char *__pyx_filename; - - -static const char *__pyx_f[] = { - "core.pyx", - "stringsource", -}; -/* NoFastGil.proto */ -#define __Pyx_PyGILState_Ensure PyGILState_Ensure -#define __Pyx_PyGILState_Release PyGILState_Release -#define __Pyx_FastGIL_Remember() -#define __Pyx_FastGIL_Forget() -#define __Pyx_FastGilFuncInit() - -/* MemviewSliceStruct.proto */ -struct __pyx_memoryview_obj; -typedef struct { - struct __pyx_memoryview_obj *memview; - char *data; - Py_ssize_t shape[8]; - Py_ssize_t strides[8]; - Py_ssize_t suboffsets[8]; -} __Pyx_memviewslice; -#define __Pyx_MemoryView_Len(m) (m.shape[0]) - -/* Atomics.proto */ -#include <pythread.h> -#ifndef CYTHON_ATOMICS - #define CYTHON_ATOMICS 1 -#endif -#define __pyx_atomic_int_type int -#if CYTHON_ATOMICS && __GNUC__ >= 4 && (__GNUC_MINOR__ > 1 ||\ - (__GNUC_MINOR__ == 1 && __GNUC_PATCHLEVEL >= 2)) &&\ - !defined(__i386__) - #define __pyx_atomic_incr_aligned(value, lock) __sync_fetch_and_add(value, 1) - #define __pyx_atomic_decr_aligned(value, lock) __sync_fetch_and_sub(value, 1) - #ifdef __PYX_DEBUG_ATOMICS - #warning "Using GNU atomics" - #endif -#elif CYTHON_ATOMICS && defined(_MSC_VER) && 0 - #include <intrin.h> - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type LONG - #define __pyx_atomic_incr_aligned(value, lock) InterlockedIncrement(value) - #define __pyx_atomic_decr_aligned(value, lock) InterlockedDecrement(value) - #ifdef __PYX_DEBUG_ATOMICS - #pragma message ("Using MSVC atomics") - #endif -#elif CYTHON_ATOMICS && (defined(__ICC) || defined(__INTEL_COMPILER)) && 0 - #define __pyx_atomic_incr_aligned(value, lock) _InterlockedIncrement(value) - #define __pyx_atomic_decr_aligned(value, lock) _InterlockedDecrement(value) - #ifdef __PYX_DEBUG_ATOMICS - #warning "Using Intel atomics" - #endif -#else - #undef CYTHON_ATOMICS - #define CYTHON_ATOMICS 0 - #ifdef __PYX_DEBUG_ATOMICS - #warning "Not using atomics" - #endif -#endif -typedef volatile __pyx_atomic_int_type __pyx_atomic_int; -#if CYTHON_ATOMICS - #define __pyx_add_acquisition_count(memview)\ - __pyx_atomic_incr_aligned(__pyx_get_slice_count_pointer(memview), memview->lock) - #define __pyx_sub_acquisition_count(memview)\ - __pyx_atomic_decr_aligned(__pyx_get_slice_count_pointer(memview), memview->lock) -#else - #define __pyx_add_acquisition_count(memview)\ - __pyx_add_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) - #define __pyx_sub_acquisition_count(memview)\ - 
__pyx_sub_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) -#endif - -/* ForceInitThreads.proto */ -#ifndef __PYX_FORCE_INIT_THREADS - #define __PYX_FORCE_INIT_THREADS 0 -#endif - -/* BufferFormatStructs.proto */ -#define IS_UNSIGNED(type) (((type) -1) > 0) -struct __Pyx_StructField_; -#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0) -typedef struct { - const char* name; - struct __Pyx_StructField_* fields; - size_t size; - size_t arraysize[8]; - int ndim; - char typegroup; - char is_unsigned; - int flags; -} __Pyx_TypeInfo; -typedef struct __Pyx_StructField_ { - __Pyx_TypeInfo* type; - const char* name; - size_t offset; -} __Pyx_StructField; -typedef struct { - __Pyx_StructField* field; - size_t parent_offset; -} __Pyx_BufFmt_StackElem; -typedef struct { - __Pyx_StructField root; - __Pyx_BufFmt_StackElem* head; - size_t fmt_offset; - size_t new_count, enc_count; - size_t struct_alignment; - int is_complex; - char enc_type; - char new_packmode; - char enc_packmode; - char is_valid_array; -} __Pyx_BufFmt_Context; - - -/*--- Type declarations ---*/ -struct __pyx_array_obj; -struct __pyx_MemviewEnum_obj; -struct __pyx_memoryview_obj; -struct __pyx_memoryviewslice_obj; -struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each; - -/* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ -struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each { - int __pyx_n; - float max_neg_val; -}; - -/* "View.MemoryView":105 - * - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ -struct __pyx_array_obj { - PyObject_HEAD - struct __pyx_vtabstruct_array *__pyx_vtab; - char *data; - Py_ssize_t len; - char *format; - int ndim; - Py_ssize_t *_shape; - Py_ssize_t *_strides; - Py_ssize_t itemsize; - PyObject *mode; - PyObject *_format; - void (*callback_free_data)(void *); - int free_data; - int dtype_is_object; -}; - - -/* "View.MemoryView":279 - * - * @cname('__pyx_MemviewEnum') - * cdef class Enum(object): # <<<<<<<<<<<<<< - * cdef object name - * def __init__(self, name): - */ -struct __pyx_MemviewEnum_obj { - PyObject_HEAD - PyObject *name; -}; - - -/* "View.MemoryView":330 - * - * @cname('__pyx_memoryview') - * cdef class memoryview(object): # <<<<<<<<<<<<<< - * - * cdef object obj - */ -struct __pyx_memoryview_obj { - PyObject_HEAD - struct __pyx_vtabstruct_memoryview *__pyx_vtab; - PyObject *obj; - PyObject *_size; - PyObject *_array_interface; - PyThread_type_lock lock; - __pyx_atomic_int acquisition_count[2]; - __pyx_atomic_int *acquisition_count_aligned_p; - Py_buffer view; - int flags; - int dtype_is_object; - __Pyx_TypeInfo *typeinfo; -}; - - -/* "View.MemoryView":965 - * - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ -struct __pyx_memoryviewslice_obj { - struct __pyx_memoryview_obj __pyx_base; - __Pyx_memviewslice from_slice; - PyObject *from_object; - PyObject *(*to_object_func)(char *); - int (*to_dtype_func)(char *, PyObject *); -}; - - - -/* "View.MemoryView":105 - * - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ - -struct __pyx_vtabstruct_array { - PyObject *(*get_memview)(struct __pyx_array_obj *); -}; -static struct __pyx_vtabstruct_array *__pyx_vtabptr_array; - 
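/* Editor's note: when no compiler atomics are available (CYTHON_ATOMICS == 0 above),
   the memoryview acquisition count falls back to __pyx_add/sub_acquisition_count_locked,
   declared further below, which serialize the counter with the slice's PyThread_type_lock.
   A sketch of that locked fallback pattern under those assumptions (illustrative only;
   the __pyx_demo_* name is hypothetical): */
static CYTHON_INLINE int __pyx_demo_incr_locked(__pyx_atomic_int *count, PyThread_type_lock lock) {
    int old;
    PyThread_acquire_lock(lock, 1);  /* blocking acquire (waitflag = 1) */
    old = (*count)++;                /* the "atomic" step, made safe by the lock */
    PyThread_release_lock(lock);
    return old;
}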
- -/* "View.MemoryView":330 - * - * @cname('__pyx_memoryview') - * cdef class memoryview(object): # <<<<<<<<<<<<<< - * - * cdef object obj - */ - -struct __pyx_vtabstruct_memoryview { - char *(*get_item_pointer)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*is_slice)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_slice_assignment)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject *(*setitem_slice_assign_scalar)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_indexed)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject *(*convert_item_to_object)(struct __pyx_memoryview_obj *, char *); - PyObject *(*assign_item_from_object)(struct __pyx_memoryview_obj *, char *, PyObject *); -}; -static struct __pyx_vtabstruct_memoryview *__pyx_vtabptr_memoryview; - - -/* "View.MemoryView":965 - * - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ - -struct __pyx_vtabstruct__memoryviewslice { - struct __pyx_vtabstruct_memoryview __pyx_base; -}; -static struct __pyx_vtabstruct__memoryviewslice *__pyx_vtabptr__memoryviewslice; - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, int); - void (*DECREF)(void*, PyObject*, int); - void (*GOTREF)(void*, PyObject*, int); - void (*GIVEREF)(void*, PyObject*, int); - void* (*SetupContext)(const char*, int, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) -#endif - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) 
Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* MemviewSliceInit.proto */ -#define __Pyx_BUF_MAX_NDIMS %(BUF_MAX_NDIMS)d -#define __Pyx_MEMVIEW_DIRECT 1 -#define __Pyx_MEMVIEW_PTR 2 -#define __Pyx_MEMVIEW_FULL 4 -#define __Pyx_MEMVIEW_CONTIG 8 -#define __Pyx_MEMVIEW_STRIDED 16 -#define __Pyx_MEMVIEW_FOLLOW 32 -#define __Pyx_IS_C_CONTIG 1 -#define __Pyx_IS_F_CONTIG 2 -static int __Pyx_init_memviewslice( - struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference); -static CYTHON_INLINE int __pyx_add_acquisition_count_locked( - __pyx_atomic_int *acquisition_count, PyThread_type_lock lock); -static CYTHON_INLINE int __pyx_sub_acquisition_count_locked( - __pyx_atomic_int *acquisition_count, PyThread_type_lock lock); -#define __pyx_get_slice_count_pointer(memview) (memview->acquisition_count_aligned_p) -#define __pyx_get_slice_count(memview) (*__pyx_get_slice_count_pointer(memview)) -#define __PYX_INC_MEMVIEW(slice, have_gil) __Pyx_INC_MEMVIEW(slice, have_gil, __LINE__) -#define __PYX_XDEC_MEMVIEW(slice, have_gil) __Pyx_XDEC_MEMVIEW(slice, have_gil, __LINE__) -static CYTHON_INLINE void __Pyx_INC_MEMVIEW(__Pyx_memviewslice *, int, int); -static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *, int, int); - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ - const char* function_name); - -/* None.proto */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname); - -/* ArgTypeTest.proto */ -#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\ - ((likely((Py_TYPE(obj) == type) | (none_allowed && (obj == Py_None)))) ? 
1 :\ - __Pyx__ArgTypeTest(obj, type, name, exact)) -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact); - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* PyCFunctionFastCall.proto */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs); -#else -#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL) -#endif - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#else -#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs) -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" - #define __Pxy_PyFrame_Initialize_Offsets()\ - 
((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif - -/* PyObjectCall2Args.proto */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2); - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* IncludeStringH.proto */ -#include <string.h> - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* StrEquals.proto */ -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals -#else -#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals -#endif - -/* None.proto */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t); - -/* UnaryNegOverflows.proto */ -#define UNARY_NEG_WOULD_OVERFLOW(x)\ - (((x) < 0) & ((unsigned long)(x) == 0-(unsigned long)(x))) - -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *); /*proto*/ -/* GetAttr.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *); - -/* GetItemInt.proto */ -#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ - (is_list ? 
(PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ - __Pyx_GetItemInt_Generic(o, to_py_func(i)))) -#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound, int boundscheck); - -/* ObjectGetItem.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key); -#else -#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key) -#endif - -/* decode_c_string_utf16.proto */ -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = 0; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = -1; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = 1; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} - -/* decode_c_string.proto */ -static CYTHON_INLINE PyObject* __Pyx_decode_c_string( - const char* cstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)); - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* GetAttr3.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *, PyObject *, PyObject *); - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static 
CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -#define __Pyx_GetModuleGlobalNameUncached(var, name) {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* RaiseNoneIterError.proto */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void); - -/* ExtTypeTest.proto */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) -#endif - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* SwapException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static CYTHON_INLINE void 
__Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -/* ListCompAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len)) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x) -#endif - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? 
PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2)) -#endif - -/* ListExtend.proto */ -static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) { -#if CYTHON_COMPILING_IN_CPYTHON - PyObject* none = _PyList_Extend((PyListObject*)L, v); - if (unlikely(!none)) - return -1; - Py_DECREF(none); - return 0; -#else - return PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v); -#endif -} - -/* ListAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_PyList_Append(L,x) PyList_Append(L,x) -#endif - -/* None.proto */ -static CYTHON_INLINE long __Pyx_div_long(long, long); - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* HasAttr.proto */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *, PyObject *); - -/* PyObject_GenericGetAttrNoDict.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr -#endif - -/* PyObject_GenericGetAttr.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr -#endif - -/* SetVTable.proto */ -static int __Pyx_SetVtable(PyObject *dict, void *vtable); - -/* PyObjectGetAttrStrNoError.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name); - -/* SetupReduce.proto */ -static int __Pyx_setup_reduce(PyObject* type_obj); - -/* CLineInTraceback.proto */ -#ifdef CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? 
c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -#if PY_MAJOR_VERSION < 3 - static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags); - static void __Pyx_ReleaseBuffer(Py_buffer *view); -#else - #define __Pyx_GetBuffer PyObject_GetBuffer - #define __Pyx_ReleaseBuffer PyBuffer_Release -#endif - - -/* BufferStructDeclare.proto */ -typedef struct { - Py_ssize_t shape, strides, suboffsets; -} __Pyx_Buf_DimInfo; -typedef struct { - size_t refcount; - Py_buffer pybuffer; -} __Pyx_Buffer; -typedef struct { - __Pyx_Buffer *rcbuffer; - char *data; - __Pyx_Buf_DimInfo diminfo[8]; -} __Pyx_LocalBuf_ND; - -/* MemviewSliceIsContig.proto */ -static int __pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim); - -/* OverlappingSlices.proto */ -static int __pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize); - -/* Capsule.proto */ -static CYTHON_INLINE PyObject *__pyx_capsule_create(void *p, const char *sig); - -/* IsLittleEndian.proto */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void); - -/* BufferFormatCheck.proto */ -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts); -static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type); - -/* TypeInfoCompare.proto */ -static int __pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b); - -/* MemviewSliceValidateAndInit.proto */ -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *, int writable_flag); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* MemviewSliceCopyTemplate.proto */ -static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object); - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject 
*); - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(void); - -/* InitStrings.proto */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); - -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self); /* proto*/ -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto*/ -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ - -/* Module declarations from 'cython.view' */ - -/* Module declarations from 'cython' */ - -/* Module declarations from 'monotonic_align.core' */ -static PyTypeObject *__pyx_array_type = 0; -static PyTypeObject *__pyx_MemviewEnum_type = 0; -static PyTypeObject *__pyx_memoryview_type = 0; -static PyTypeObject *__pyx_memoryviewslice_type = 0; -static PyObject *generic = 0; -static PyObject *strided = 0; -static PyObject *indirect = 0; -static PyObject *contiguous = 0; -static PyObject *indirect_contiguous = 0; -static int __pyx_memoryview_thread_locks_used; -static PyThread_type_lock __pyx_memoryview_thread_locks[8]; -static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice, __Pyx_memviewslice, int, int, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args); /*proto*/ -static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, int __pyx_skip_dispatch); /*proto*/ -static struct __pyx_array_obj *__pyx_array_new(PyObject *, Py_ssize_t, char *, char *, char *); /*proto*/ -static void *__pyx_align_pointer(void *, size_t); /*proto*/ -static PyObject *__pyx_memoryview_new(PyObject *, int, int, __Pyx_TypeInfo *); /*proto*/ -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *); /*proto*/ -static PyObject *_unellipsify(PyObject *, int); /*proto*/ -static PyObject *assert_direct_dimensions(Py_ssize_t *, int); /*proto*/ -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *, PyObject *); /*proto*/ -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int, int); /*proto*/ -static char *__pyx_pybuffer_index(Py_buffer *, char *, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memslice_transpose(__Pyx_memviewslice *); 
/*proto*/ -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice, int, PyObject *(*)(char *), int (*)(char *, PyObject *), int); /*proto*/ -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static Py_ssize_t abs_py_ssize_t(Py_ssize_t); /*proto*/ -static char __pyx_get_best_slice_order(__Pyx_memviewslice *, int); /*proto*/ -static void _copy_strided_to_strided(char *, Py_ssize_t *, char *, Py_ssize_t *, Py_ssize_t *, Py_ssize_t *, int, size_t); /*proto*/ -static void copy_strided_to_strided(__Pyx_memviewslice *, __Pyx_memviewslice *, int, size_t); /*proto*/ -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *, int); /*proto*/ -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *, Py_ssize_t *, Py_ssize_t, int, char); /*proto*/ -static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *, __Pyx_memviewslice *, char, int); /*proto*/ -static int __pyx_memoryview_err_extents(int, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memoryview_err_dim(PyObject *, char *, int); /*proto*/ -static int __pyx_memoryview_err(PyObject *, char *); /*proto*/ -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice, __Pyx_memviewslice, int, int, int); /*proto*/ -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *, int, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *, int, size_t, void *, int); /*proto*/ -static void __pyx_memoryview__slice_assign_scalar(char *, Py_ssize_t *, Py_ssize_t *, int, size_t, void *); /*proto*/ -static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *, PyObject *); /*proto*/ -static __Pyx_TypeInfo __Pyx_TypeInfo_int = { "int", NULL, sizeof(int), { 0 }, 0, IS_UNSIGNED(int) ? 
'U' : 'I', IS_UNSIGNED(int), 0 }; -static __Pyx_TypeInfo __Pyx_TypeInfo_float = { "float", NULL, sizeof(float), { 0 }, 0, 'R', 0, 0 }; -#define __Pyx_MODULE_NAME "monotonic_align.core" -extern int __pyx_module_is_main_monotonic_align__core; -int __pyx_module_is_main_monotonic_align__core = 0; - -/* Implementation of 'monotonic_align.core' */ -static PyObject *__pyx_builtin_range; -static PyObject *__pyx_builtin_ValueError; -static PyObject *__pyx_builtin_MemoryError; -static PyObject *__pyx_builtin_enumerate; -static PyObject *__pyx_builtin_TypeError; -static PyObject *__pyx_builtin_Ellipsis; -static PyObject *__pyx_builtin_id; -static PyObject *__pyx_builtin_IndexError; -static const char __pyx_k_O[] = "O"; -static const char __pyx_k_c[] = "c"; -static const char __pyx_k_id[] = "id"; -static const char __pyx_k_new[] = "__new__"; -static const char __pyx_k_obj[] = "obj"; -static const char __pyx_k_base[] = "base"; -static const char __pyx_k_dict[] = "__dict__"; -static const char __pyx_k_main[] = "__main__"; -static const char __pyx_k_mode[] = "mode"; -static const char __pyx_k_name[] = "name"; -static const char __pyx_k_ndim[] = "ndim"; -static const char __pyx_k_pack[] = "pack"; -static const char __pyx_k_size[] = "size"; -static const char __pyx_k_step[] = "step"; -static const char __pyx_k_stop[] = "stop"; -static const char __pyx_k_t_xs[] = "t_xs"; -static const char __pyx_k_t_ys[] = "t_ys"; -static const char __pyx_k_test[] = "__test__"; -static const char __pyx_k_ASCII[] = "ASCII"; -static const char __pyx_k_class[] = "__class__"; -static const char __pyx_k_error[] = "error"; -static const char __pyx_k_flags[] = "flags"; -static const char __pyx_k_paths[] = "paths"; -static const char __pyx_k_range[] = "range"; -static const char __pyx_k_shape[] = "shape"; -static const char __pyx_k_start[] = "start"; -static const char __pyx_k_encode[] = "encode"; -static const char __pyx_k_format[] = "format"; -static const char __pyx_k_import[] = "__import__"; -static const char __pyx_k_name_2[] = "__name__"; -static const char __pyx_k_pickle[] = "pickle"; -static const char __pyx_k_reduce[] = "__reduce__"; -static const char __pyx_k_struct[] = "struct"; -static const char __pyx_k_unpack[] = "unpack"; -static const char __pyx_k_update[] = "update"; -static const char __pyx_k_values[] = "values"; -static const char __pyx_k_fortran[] = "fortran"; -static const char __pyx_k_memview[] = "memview"; -static const char __pyx_k_Ellipsis[] = "Ellipsis"; -static const char __pyx_k_getstate[] = "__getstate__"; -static const char __pyx_k_itemsize[] = "itemsize"; -static const char __pyx_k_pyx_type[] = "__pyx_type"; -static const char __pyx_k_setstate[] = "__setstate__"; -static const char __pyx_k_TypeError[] = "TypeError"; -static const char __pyx_k_enumerate[] = "enumerate"; -static const char __pyx_k_pyx_state[] = "__pyx_state"; -static const char __pyx_k_reduce_ex[] = "__reduce_ex__"; -static const char __pyx_k_IndexError[] = "IndexError"; -static const char __pyx_k_ValueError[] = "ValueError"; -static const char __pyx_k_pyx_result[] = "__pyx_result"; -static const char __pyx_k_pyx_vtable[] = "__pyx_vtable__"; -static const char __pyx_k_MemoryError[] = "MemoryError"; -static const char __pyx_k_PickleError[] = "PickleError"; -static const char __pyx_k_pyx_checksum[] = "__pyx_checksum"; -static const char __pyx_k_stringsource[] = "stringsource"; -static const char __pyx_k_pyx_getbuffer[] = "__pyx_getbuffer"; -static const char __pyx_k_reduce_cython[] = "__reduce_cython__"; -static const char 
__pyx_k_View_MemoryView[] = "View.MemoryView"; -static const char __pyx_k_allocate_buffer[] = "allocate_buffer"; -static const char __pyx_k_dtype_is_object[] = "dtype_is_object"; -static const char __pyx_k_pyx_PickleError[] = "__pyx_PickleError"; -static const char __pyx_k_setstate_cython[] = "__setstate_cython__"; -static const char __pyx_k_pyx_unpickle_Enum[] = "__pyx_unpickle_Enum"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_strided_and_direct[] = "<strided and direct>"; -static const char __pyx_k_strided_and_indirect[] = "<strided and indirect>"; -static const char __pyx_k_contiguous_and_direct[] = "<contiguous and direct>"; -static const char __pyx_k_MemoryView_of_r_object[] = "<MemoryView of %r object>"; -static const char __pyx_k_MemoryView_of_r_at_0x_x[] = "<MemoryView of %r at 0x%x>"; -static const char __pyx_k_contiguous_and_indirect[] = "<contiguous and indirect>"; -static const char __pyx_k_Cannot_index_with_type_s[] = "Cannot index with type '%s'"; -static const char __pyx_k_Invalid_shape_in_axis_d_d[] = "Invalid shape in axis %d: %d."; -static const char __pyx_k_itemsize_0_for_cython_array[] = "itemsize <= 0 for cython.array"; -static const char __pyx_k_unable_to_allocate_array_data[] = "unable to allocate array data."; -static const char __pyx_k_strided_and_direct_or_indirect[] = "<strided and direct or indirect>"; -static const char __pyx_k_Buffer_view_does_not_expose_stri[] = "Buffer view does not expose strides"; -static const char __pyx_k_Can_only_create_a_buffer_that_is[] = "Can only create a buffer that is contiguous in memory."; -static const char __pyx_k_Cannot_assign_to_read_only_memor[] = "Cannot assign to read-only memoryview"; -static const char __pyx_k_Cannot_create_writable_memory_vi[] = "Cannot create writable memory view from read-only memoryview"; -static const char __pyx_k_Empty_shape_tuple_for_cython_arr[] = "Empty shape tuple for cython.array"; -static const char __pyx_k_Incompatible_checksums_s_vs_0xb0[] = "Incompatible checksums (%s vs 0xb068931 = (name))"; -static const char __pyx_k_Indirect_dimensions_not_supporte[] = "Indirect dimensions not supported"; -static const char __pyx_k_Invalid_mode_expected_c_or_fortr[] = "Invalid mode, expected 'c' or 'fortran', got %s"; -static const char __pyx_k_Out_of_bounds_on_buffer_access_a[] = "Out of bounds on buffer access (axis %d)"; -static const char __pyx_k_Unable_to_convert_item_to_object[] = "Unable to convert item to object"; -static const char __pyx_k_got_differing_extents_in_dimensi[] = "got differing extents in dimension %d (got %d and %d)"; -static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__"; -static const char __pyx_k_unable_to_allocate_shape_and_str[] = "unable to allocate shape and strides."; -static PyObject *__pyx_n_s_ASCII; -static PyObject *__pyx_kp_s_Buffer_view_does_not_expose_stri; -static PyObject *__pyx_kp_s_Can_only_create_a_buffer_that_is; -static PyObject *__pyx_kp_s_Cannot_assign_to_read_only_memor; -static PyObject *__pyx_kp_s_Cannot_create_writable_memory_vi; -static PyObject *__pyx_kp_s_Cannot_index_with_type_s; -static PyObject *__pyx_n_s_Ellipsis; -static PyObject *__pyx_kp_s_Empty_shape_tuple_for_cython_arr; -static PyObject *__pyx_kp_s_Incompatible_checksums_s_vs_0xb0; -static PyObject *__pyx_n_s_IndexError; -static PyObject *__pyx_kp_s_Indirect_dimensions_not_supporte; -static PyObject *__pyx_kp_s_Invalid_mode_expected_c_or_fortr; -static PyObject *__pyx_kp_s_Invalid_shape_in_axis_d_d; -static PyObject *__pyx_n_s_MemoryError; -static PyObject *__pyx_kp_s_MemoryView_of_r_at_0x_x; -static PyObject
*__pyx_kp_s_MemoryView_of_r_object; -static PyObject *__pyx_n_b_O; -static PyObject *__pyx_kp_s_Out_of_bounds_on_buffer_access_a; -static PyObject *__pyx_n_s_PickleError; -static PyObject *__pyx_n_s_TypeError; -static PyObject *__pyx_kp_s_Unable_to_convert_item_to_object; -static PyObject *__pyx_n_s_ValueError; -static PyObject *__pyx_n_s_View_MemoryView; -static PyObject *__pyx_n_s_allocate_buffer; -static PyObject *__pyx_n_s_base; -static PyObject *__pyx_n_s_c; -static PyObject *__pyx_n_u_c; -static PyObject *__pyx_n_s_class; -static PyObject *__pyx_n_s_cline_in_traceback; -static PyObject *__pyx_kp_s_contiguous_and_direct; -static PyObject *__pyx_kp_s_contiguous_and_indirect; -static PyObject *__pyx_n_s_dict; -static PyObject *__pyx_n_s_dtype_is_object; -static PyObject *__pyx_n_s_encode; -static PyObject *__pyx_n_s_enumerate; -static PyObject *__pyx_n_s_error; -static PyObject *__pyx_n_s_flags; -static PyObject *__pyx_n_s_format; -static PyObject *__pyx_n_s_fortran; -static PyObject *__pyx_n_u_fortran; -static PyObject *__pyx_n_s_getstate; -static PyObject *__pyx_kp_s_got_differing_extents_in_dimensi; -static PyObject *__pyx_n_s_id; -static PyObject *__pyx_n_s_import; -static PyObject *__pyx_n_s_itemsize; -static PyObject *__pyx_kp_s_itemsize_0_for_cython_array; -static PyObject *__pyx_n_s_main; -static PyObject *__pyx_n_s_memview; -static PyObject *__pyx_n_s_mode; -static PyObject *__pyx_n_s_name; -static PyObject *__pyx_n_s_name_2; -static PyObject *__pyx_n_s_ndim; -static PyObject *__pyx_n_s_new; -static PyObject *__pyx_kp_s_no_default___reduce___due_to_non; -static PyObject *__pyx_n_s_obj; -static PyObject *__pyx_n_s_pack; -static PyObject *__pyx_n_s_paths; -static PyObject *__pyx_n_s_pickle; -static PyObject *__pyx_n_s_pyx_PickleError; -static PyObject *__pyx_n_s_pyx_checksum; -static PyObject *__pyx_n_s_pyx_getbuffer; -static PyObject *__pyx_n_s_pyx_result; -static PyObject *__pyx_n_s_pyx_state; -static PyObject *__pyx_n_s_pyx_type; -static PyObject *__pyx_n_s_pyx_unpickle_Enum; -static PyObject *__pyx_n_s_pyx_vtable; -static PyObject *__pyx_n_s_range; -static PyObject *__pyx_n_s_reduce; -static PyObject *__pyx_n_s_reduce_cython; -static PyObject *__pyx_n_s_reduce_ex; -static PyObject *__pyx_n_s_setstate; -static PyObject *__pyx_n_s_setstate_cython; -static PyObject *__pyx_n_s_shape; -static PyObject *__pyx_n_s_size; -static PyObject *__pyx_n_s_start; -static PyObject *__pyx_n_s_step; -static PyObject *__pyx_n_s_stop; -static PyObject *__pyx_kp_s_strided_and_direct; -static PyObject *__pyx_kp_s_strided_and_direct_or_indirect; -static PyObject *__pyx_kp_s_strided_and_indirect; -static PyObject *__pyx_kp_s_stringsource; -static PyObject *__pyx_n_s_struct; -static PyObject *__pyx_n_s_t_xs; -static PyObject *__pyx_n_s_t_ys; -static PyObject *__pyx_n_s_test; -static PyObject *__pyx_kp_s_unable_to_allocate_array_data; -static PyObject *__pyx_kp_s_unable_to_allocate_shape_and_str; -static PyObject *__pyx_n_s_unpack; -static PyObject *__pyx_n_s_update; -static PyObject *__pyx_n_s_values; -static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer); /* proto */ 
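/* Editor's note (hedged): the generated routine
   __pyx_f_15monotonic_align_4core_maximum_path_each further below implements
   the monotonic-alignment dynamic program quoted from monotonic_align/core.pyx.
   The following is a minimal standalone C sketch of that same algorithm, with
   illustrative names and an explicit row stride; it is not part of the
   generated module. */
static void sketch_maximum_path(int *path, float *value, int t_y, int t_x,
                                int stride, float max_neg_val)
{
    int x, y, index;
    /* forward pass: value[y][x] += max(value[y-1][x-1], value[y-1][x]),
       restricted to the monotonic band max(0, t_x + y - t_y) <= x < min(t_x, y + 1) */
    for (y = 0; y < t_y; y++) {
        int x_lo = t_x + y - t_y; if (x_lo < 0) x_lo = 0;
        int x_hi = y + 1;         if (x_hi > t_x) x_hi = t_x;
        for (x = x_lo; x < x_hi; x++) {
            float v_cur  = (x == y) ? max_neg_val : value[(y - 1) * stride + x];
            float v_prev = (x == 0) ? ((y == 0) ? 0.f : max_neg_val)
                                    : value[(y - 1) * stride + (x - 1)];
            value[y * stride + x] += (v_prev > v_cur) ? v_prev : v_cur;
        }
    }
    /* backward pass: walk back from the last column, marking the chosen path */
    index = t_x - 1;
    for (y = t_y - 1; y >= 0; y--) {
        path[y * stride + index] = 1;
        if (index != 0 && (index == y ||
            value[(y - 1) * stride + index] < value[(y - 1) * stride + (index - 1)]))
            index--;
    }
}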
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /* proto */ -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name); /* proto */ -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object); /* proto */ -static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject 
*__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_int_0; -static PyObject *__pyx_int_1; -static PyObject *__pyx_int_184977713; -static PyObject *__pyx_int_neg_1; -static float __pyx_k_; -static PyObject *__pyx_tuple__2; -static PyObject *__pyx_tuple__3; -static PyObject *__pyx_tuple__4; -static PyObject *__pyx_tuple__5; -static PyObject *__pyx_tuple__6; -static PyObject *__pyx_tuple__7; -static PyObject *__pyx_tuple__8; -static PyObject *__pyx_tuple__9; -static PyObject *__pyx_slice__16; -static 
PyObject *__pyx_tuple__10; -static PyObject *__pyx_tuple__11; -static PyObject *__pyx_tuple__12; -static PyObject *__pyx_tuple__13; -static PyObject *__pyx_tuple__14; -static PyObject *__pyx_tuple__15; -static PyObject *__pyx_tuple__17; -static PyObject *__pyx_tuple__18; -static PyObject *__pyx_tuple__19; -static PyObject *__pyx_tuple__20; -static PyObject *__pyx_tuple__21; -static PyObject *__pyx_tuple__22; -static PyObject *__pyx_tuple__23; -static PyObject *__pyx_tuple__24; -static PyObject *__pyx_tuple__25; -static PyObject *__pyx_codeobj__26; -/* Late includes */ - -/* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - -static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice __pyx_v_path, __Pyx_memviewslice __pyx_v_value, int __pyx_v_t_y, int __pyx_v_t_x, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args) { - float __pyx_v_max_neg_val = __pyx_k_; - int __pyx_v_x; - int __pyx_v_y; - float __pyx_v_v_prev; - float __pyx_v_v_cur; - int __pyx_v_index; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - long __pyx_t_4; - int __pyx_t_5; - long __pyx_t_6; - long __pyx_t_7; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - Py_ssize_t __pyx_t_10; - float __pyx_t_11; - float __pyx_t_12; - float __pyx_t_13; - int __pyx_t_14; - Py_ssize_t __pyx_t_15; - Py_ssize_t __pyx_t_16; - if (__pyx_optional_args) { - if (__pyx_optional_args->__pyx_n > 0) { - __pyx_v_max_neg_val = __pyx_optional_args->max_neg_val; - } - } - - /* "monotonic_align/core.pyx":13 - * cdef float v_cur - * cdef float tmp - * cdef int index = t_x - 1 # <<<<<<<<<<<<<< - * - * for y in range(t_y): - */ - __pyx_v_index = (__pyx_v_t_x - 1); - - /* "monotonic_align/core.pyx":15 - * cdef int index = t_x - 1 - * - * for y in range(t_y): # <<<<<<<<<<<<<< - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - */ - __pyx_t_1 = __pyx_v_t_y; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_y = __pyx_t_3; - - /* "monotonic_align/core.pyx":16 - * - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): # <<<<<<<<<<<<<< - * if x == y: - * v_cur = max_neg_val - */ - __pyx_t_4 = (__pyx_v_y + 1); - __pyx_t_5 = __pyx_v_t_x; - if (((__pyx_t_4 < __pyx_t_5) != 0)) { - __pyx_t_6 = __pyx_t_4; - } else { - __pyx_t_6 = __pyx_t_5; - } - __pyx_t_4 = __pyx_t_6; - __pyx_t_5 = ((__pyx_v_t_x + __pyx_v_y) - __pyx_v_t_y); - __pyx_t_6 = 0; - if (((__pyx_t_5 > __pyx_t_6) != 0)) { - __pyx_t_7 = __pyx_t_5; - } else { - __pyx_t_7 = __pyx_t_6; - } - __pyx_t_6 = __pyx_t_4; - for (__pyx_t_5 = __pyx_t_7; __pyx_t_5 < __pyx_t_6; __pyx_t_5+=1) { - __pyx_v_x = __pyx_t_5; - - /* "monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # <<<<<<<<<<<<<< - * v_cur = max_neg_val - * else: - */ - __pyx_t_8 = ((__pyx_v_x == __pyx_v_y) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":18 - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - * v_cur = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_cur = value[y-1, x] - */ - __pyx_v_v_cur = __pyx_v_max_neg_val; - - /* "monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # <<<<<<<<<<<<<< - * v_cur = 
max_neg_val - * else: - */ - goto __pyx_L7; - } - - /* "monotonic_align/core.pyx":20 - * v_cur = max_neg_val - * else: - * v_cur = value[y-1, x] # <<<<<<<<<<<<<< - * if x == 0: - * if y == 0: - */ - /*else*/ { - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_x; - __pyx_v_v_cur = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))); - } - __pyx_L7:; - - /* "monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. - */ - __pyx_t_8 = ((__pyx_v_x == 0) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - __pyx_t_8 = ((__pyx_v_y == 0) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":23 - * if x == 0: - * if y == 0: - * v_prev = 0. # <<<<<<<<<<<<<< - * else: - * v_prev = max_neg_val - */ - __pyx_v_v_prev = 0.; - - /* "monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - goto __pyx_L9; - } - - /* "monotonic_align/core.pyx":25 - * v_prev = 0. - * else: - * v_prev = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_prev = value[y-1, x-1] - */ - /*else*/ { - __pyx_v_v_prev = __pyx_v_max_neg_val; - } - __pyx_L9:; - - /* "monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. - */ - goto __pyx_L8; - } - - /* "monotonic_align/core.pyx":27 - * v_prev = max_neg_val - * else: - * v_prev = value[y-1, x-1] # <<<<<<<<<<<<<< - * value[y, x] += max(v_prev, v_cur) - * - */ - /*else*/ { - __pyx_t_10 = (__pyx_v_y - 1); - __pyx_t_9 = (__pyx_v_x - 1); - __pyx_v_v_prev = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_10 * __pyx_v_value.strides[0]) )) + __pyx_t_9)) ))); - } - __pyx_L8:; - - /* "monotonic_align/core.pyx":28 - * else: - * v_prev = value[y-1, x-1] - * value[y, x] += max(v_prev, v_cur) # <<<<<<<<<<<<<< - * - * for y in range(t_y - 1, -1, -1): - */ - __pyx_t_11 = __pyx_v_v_cur; - __pyx_t_12 = __pyx_v_v_prev; - if (((__pyx_t_11 > __pyx_t_12) != 0)) { - __pyx_t_13 = __pyx_t_11; - } else { - __pyx_t_13 = __pyx_t_12; - } - __pyx_t_9 = __pyx_v_y; - __pyx_t_10 = __pyx_v_x; - *((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) )) += __pyx_t_13; - } - } - - /* "monotonic_align/core.pyx":30 - * value[y, x] += max(v_prev, v_cur) - * - * for y in range(t_y - 1, -1, -1): # <<<<<<<<<<<<<< - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - */ - for (__pyx_t_1 = (__pyx_v_t_y - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_y = __pyx_t_1; - - /* "monotonic_align/core.pyx":31 - * - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 # <<<<<<<<<<<<<< - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 - */ - __pyx_t_10 = __pyx_v_y; - __pyx_t_9 = __pyx_v_index; - *((int *) ( /* dim=1 */ ((char *) (((int *) ( /* dim=0 */ (__pyx_v_path.data + __pyx_t_10 * __pyx_v_path.strides[0]) )) + __pyx_t_9)) )) = 1; - - /* "monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - __pyx_t_14 = 
((__pyx_v_index != 0) != 0); - if (__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_14 = ((__pyx_v_index == __pyx_v_y) != 0); - if (!__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_index; - __pyx_t_15 = (__pyx_v_y - 1); - __pyx_t_16 = (__pyx_v_index - 1); - __pyx_t_14 = (((*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))) < (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_15 * __pyx_v_value.strides[0]) )) + __pyx_t_16)) )))) != 0); - __pyx_t_8 = __pyx_t_14; - __pyx_L13_bool_binop_done:; - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":33 - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_index = (__pyx_v_index - 1); - - /* "monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - } - } - - /* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - - /* function exit code */ -} - -/* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs, CYTHON_UNUSED int __pyx_skip_dispatch) { - CYTHON_UNUSED int __pyx_v_b; - int __pyx_v_i; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - __Pyx_memviewslice __pyx_t_4 = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_t_5 = { 0, 0, { 0 }, { 0 }, { 0 } }; - Py_ssize_t __pyx_t_6; - Py_ssize_t __pyx_t_7; - - /* "monotonic_align/core.pyx":39 - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: - * cdef int b = paths.shape[0] # <<<<<<<<<<<<<< - * cdef int i - * for i in prange(b, nogil=True): - */ - __pyx_v_b = (__pyx_v_paths.shape[0]); - - /* "monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - { - #ifdef WITH_THREAD - PyThreadState *_save; - Py_UNBLOCK_THREADS - __Pyx_FastGIL_Remember(); - #endif - /*try:*/ { - __pyx_t_1 = __pyx_v_b; - if ((1 == 0)) abort(); - { - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) (x) - #define unlikely(x) (x) - #endif - __pyx_t_3 = (__pyx_t_1 - 0 + 1 - 1/abs(1)) / 1; - if (__pyx_t_3 > 0) - { - #ifdef _OPENMP - #pragma omp parallel private(__pyx_t_6, 
__pyx_t_7) firstprivate(__pyx_t_4, __pyx_t_5) - #endif /* _OPENMP */ - { - #ifdef _OPENMP - #pragma omp for firstprivate(__pyx_v_i) lastprivate(__pyx_v_i) - #endif /* _OPENMP */ - for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_3; __pyx_t_2++){ - { - __pyx_v_i = (int)(0 + 1 * __pyx_t_2); - - /* "monotonic_align/core.pyx":42 - * cdef int i - * for i in prange(b, nogil=True): - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) # <<<<<<<<<<<<<< - */ - __pyx_t_4.data = __pyx_v_paths.data; - __pyx_t_4.memview = __pyx_v_paths.memview; - __PYX_INC_MEMVIEW(&__pyx_t_4, 0); - { - Py_ssize_t __pyx_tmp_idx = __pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_paths.strides[0]; - __pyx_t_4.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_4.shape[0] = __pyx_v_paths.shape[1]; -__pyx_t_4.strides[0] = __pyx_v_paths.strides[1]; - __pyx_t_4.suboffsets[0] = -1; - -__pyx_t_4.shape[1] = __pyx_v_paths.shape[2]; -__pyx_t_4.strides[1] = __pyx_v_paths.strides[2]; - __pyx_t_4.suboffsets[1] = -1; - -__pyx_t_5.data = __pyx_v_values.data; - __pyx_t_5.memview = __pyx_v_values.memview; - __PYX_INC_MEMVIEW(&__pyx_t_5, 0); - { - Py_ssize_t __pyx_tmp_idx = __pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_values.strides[0]; - __pyx_t_5.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_5.shape[0] = __pyx_v_values.shape[1]; -__pyx_t_5.strides[0] = __pyx_v_values.strides[1]; - __pyx_t_5.suboffsets[0] = -1; - -__pyx_t_5.shape[1] = __pyx_v_values.shape[2]; -__pyx_t_5.strides[1] = __pyx_v_values.strides[2]; - __pyx_t_5.suboffsets[1] = -1; - -__pyx_t_6 = __pyx_v_i; - __pyx_t_7 = __pyx_v_i; - __pyx_f_15monotonic_align_4core_maximum_path_each(__pyx_t_4, __pyx_t_5, (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_ys.data) + __pyx_t_6)) ))), (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_xs.data) + __pyx_t_7)) ))), NULL); - __PYX_XDEC_MEMVIEW(&__pyx_t_4, 0); - __pyx_t_4.memview = NULL; - __pyx_t_4.data = NULL; - __PYX_XDEC_MEMVIEW(&__pyx_t_5, 0); - __pyx_t_5.memview = NULL; - __pyx_t_5.data = NULL; - } - } - } - } - } - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) - #endif - } - - /* "monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - /*finally:*/ { - /*normal exit:*/{ - #ifdef WITH_THREAD - __Pyx_FastGIL_Forget(); - Py_BLOCK_THREADS - #endif - goto __pyx_L5; - } - __pyx_L5:; - } - } - - /* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - - /* function exit code */ -} - -/* Python wrapper */ -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - __Pyx_memviewslice __pyx_v_paths = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_values = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_ys = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_xs = { 0, 0, { 0 }, { 0 }, { 0 } }; - 
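/* Editor's note (hedged): the cpdef maximum_path_c above expands
   prange(b, nogil=True) into the OpenMP parallel-for visible in the generated
   code, running one independent DP per batch item. A batch-level C sketch
   under the assumption of padded, C-contiguous b x max_y x max_x arrays;
   sketch_maximum_path is the illustrative helper sketched earlier, and this
   requires <stddef.h> plus compilation with OpenMP enabled. */
static void sketch_maximum_path_batch(int *paths, float *values,
                                      const int *t_ys, const int *t_xs,
                                      int b, int max_y, int max_x)
{
    int i;
    #pragma omp parallel for
    for (i = 0; i < b; i++)   /* batch items touch disjoint slabs, so this is safe */
        sketch_maximum_path(paths  + (size_t)i * max_y * max_x,
                            values + (size_t)i * max_y * max_x,
                            t_ys[i], t_xs[i], max_x, -1e9f /* pyx default max_neg_val */);
}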
int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("maximum_path_c (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_paths,&__pyx_n_s_values,&__pyx_n_s_t_ys,&__pyx_n_s_t_xs,0}; - PyObject* values[4] = {0,0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_paths)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_values)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 1); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_ys)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 2); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_xs)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 3); __PYX_ERR(0, 38, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "maximum_path_c") < 0)) __PYX_ERR(0, 38, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 4) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - } - __pyx_v_paths = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(values[0], PyBUF_WRITABLE); if (unlikely(!__pyx_v_paths.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_values = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(values[1], PyBUF_WRITABLE); if (unlikely(!__pyx_v_values.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_ys = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[2], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_ys.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_xs = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[3], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_xs.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15monotonic_align_4core_maximum_path_c(__pyx_self, __pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("maximum_path_c", 0); - __Pyx_XDECREF(__pyx_r); - if (unlikely(!__pyx_v_paths.memview)) { __Pyx_RaiseUnboundLocalError("paths"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_values.memview)) { __Pyx_RaiseUnboundLocalError("values"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_ys.memview)) { __Pyx_RaiseUnboundLocalError("t_ys"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_xs.memview)) { __Pyx_RaiseUnboundLocalError("t_xs"); __PYX_ERR(0, 38, __pyx_L1_error) } - __pyx_t_1 = __Pyx_void_to_None(__pyx_f_15monotonic_align_4core_maximum_path_c(__pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs, 0)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __PYX_XDEC_MEMVIEW(&__pyx_v_paths, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_values, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_t_ys, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_t_xs, 1); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":122 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - -/* Python wrapper */ -static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_shape = 0; - Py_ssize_t __pyx_v_itemsize; - PyObject *__pyx_v_format = 0; - PyObject *__pyx_v_mode = 0; - int __pyx_v_allocate_buffer; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_shape,&__pyx_n_s_itemsize,&__pyx_n_s_format,&__pyx_n_s_mode,&__pyx_n_s_allocate_buffer,0}; - PyObject* values[5] = {0,0,0,0,0}; - values[3] = ((PyObject *)__pyx_n_s_c); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_shape)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = 
__Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_itemsize)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 1); __PYX_ERR(1, 122, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_format)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 2); __PYX_ERR(1, 122, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_mode); - if (value) { values[3] = value; kw_args--; } - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_allocate_buffer); - if (value) { values[4] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 122, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_shape = ((PyObject*)values[0]); - __pyx_v_itemsize = __Pyx_PyIndex_AsSsize_t(values[1]); if (unlikely((__pyx_v_itemsize == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 122, __pyx_L3_error) - __pyx_v_format = values[2]; - __pyx_v_mode = values[3]; - if (values[4]) { - __pyx_v_allocate_buffer = __Pyx_PyObject_IsTrue(values[4]); if (unlikely((__pyx_v_allocate_buffer == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 123, __pyx_L3_error) - } else { - - /* "View.MemoryView":123 - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, - * mode="c", bint allocate_buffer=True): # <<<<<<<<<<<<<< - * - * cdef int idx - */ - __pyx_v_allocate_buffer = ((int)1); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 122, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_shape), (&PyTuple_Type), 1, "shape", 1))) __PYX_ERR(1, 122, __pyx_L1_error) - if (unlikely(((PyObject *)__pyx_v_format) == Py_None)) { - PyErr_Format(PyExc_TypeError, "Argument '%.200s' must not be None", "format"); __PYX_ERR(1, 122, __pyx_L1_error) - } - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(((struct __pyx_array_obj *)__pyx_v_self), __pyx_v_shape, __pyx_v_itemsize, __pyx_v_format, __pyx_v_mode, __pyx_v_allocate_buffer); - - /* "View.MemoryView":122 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - goto __pyx_L0; - __pyx_L1_error:; - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject 
*__pyx_v_mode, int __pyx_v_allocate_buffer) { - int __pyx_v_idx; - Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_dim; - PyObject **__pyx_v_p; - char __pyx_v_order; - int __pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - char *__pyx_t_7; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - Py_ssize_t __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - __Pyx_INCREF(__pyx_v_format); - - /* "View.MemoryView":129 - * cdef PyObject **p - * - * self.ndim = len(shape) # <<<<<<<<<<<<<< - * self.itemsize = itemsize - * - */ - if (unlikely(__pyx_v_shape == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 129, __pyx_L1_error) - } - __pyx_t_1 = PyTuple_GET_SIZE(__pyx_v_shape); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(1, 129, __pyx_L1_error) - __pyx_v_self->ndim = ((int)__pyx_t_1); - - /* "View.MemoryView":130 - * - * self.ndim = len(shape) - * self.itemsize = itemsize # <<<<<<<<<<<<<< - * - * if not self.ndim: - */ - __pyx_v_self->itemsize = __pyx_v_itemsize; - - /* "View.MemoryView":132 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError("Empty shape tuple for cython.array") - * - */ - __pyx_t_2 = ((!(__pyx_v_self->ndim != 0)) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":133 - * - * if not self.ndim: - * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<< - * - * if itemsize <= 0: - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 133, __pyx_L1_error) - - /* "View.MemoryView":132 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError("Empty shape tuple for cython.array") - * - */ - } - - /* "View.MemoryView":135 - * raise ValueError("Empty shape tuple for cython.array") - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError("itemsize <= 0 for cython.array") - * - */ - __pyx_t_2 = ((__pyx_v_itemsize <= 0) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":136 - * - * if itemsize <= 0: - * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<< - * - * if not isinstance(format, bytes): - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 136, __pyx_L1_error) - - /* "View.MemoryView":135 - * raise ValueError("Empty shape tuple for cython.array") - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError("itemsize <= 0 for cython.array") - * - */ - } - - /* "View.MemoryView":138 - * raise ValueError("itemsize <= 0 for cython.array") - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - __pyx_t_2 = PyBytes_Check(__pyx_v_format); - __pyx_t_4 = ((!(__pyx_t_2 != 0)) != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":139 - * - * if not isinstance(format, bytes): - * format = format.encode('ASCII') # <<<<<<<<<<<<<< - 
* self._format = format # keep a reference to the byte string - * self.format = self._format - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_format, __pyx_n_s_encode); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 139, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - } - } - __pyx_t_3 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_6, __pyx_n_s_ASCII) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_n_s_ASCII); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 139, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF_SET(__pyx_v_format, __pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":138 - * raise ValueError("itemsize <= 0 for cython.array") - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - } - - /* "View.MemoryView":140 - * if not isinstance(format, bytes): - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string # <<<<<<<<<<<<<< - * self.format = self._format - * - */ - if (!(likely(PyBytes_CheckExact(__pyx_v_format))||((__pyx_v_format) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_v_format)->tp_name), 0))) __PYX_ERR(1, 140, __pyx_L1_error) - __pyx_t_3 = __pyx_v_format; - __Pyx_INCREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __Pyx_GOTREF(__pyx_v_self->_format); - __Pyx_DECREF(__pyx_v_self->_format); - __pyx_v_self->_format = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":141 - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - * self.format = self._format # <<<<<<<<<<<<<< - * - * - */ - if (unlikely(__pyx_v_self->_format == Py_None)) { - PyErr_SetString(PyExc_TypeError, "expected bytes, NoneType found"); - __PYX_ERR(1, 141, __pyx_L1_error) - } - __pyx_t_7 = __Pyx_PyBytes_AsWritableString(__pyx_v_self->_format); if (unlikely((!__pyx_t_7) && PyErr_Occurred())) __PYX_ERR(1, 141, __pyx_L1_error) - __pyx_v_self->format = __pyx_t_7; - - /* "View.MemoryView":144 - * - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) # <<<<<<<<<<<<<< - * self._strides = self._shape + self.ndim - * - */ - __pyx_v_self->_shape = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * __pyx_v_self->ndim) * 2))); - - /* "View.MemoryView":145 - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) - * self._strides = self._shape + self.ndim # <<<<<<<<<<<<<< - * - * if not self._shape: - */ - __pyx_v_self->_strides = (__pyx_v_self->_shape + __pyx_v_self->ndim); - - /* "View.MemoryView":147 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate shape and strides.") - * - */ - __pyx_t_4 = ((!(__pyx_v_self->_shape != 0)) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":148 - * - * if not self._shape: - * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 148, __pyx_L1_error) - 
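/* Editor's note (hedged): shortly below, __cinit__ computes
   self.len = fill_contig_strides_array(self._shape, self._strides, itemsize,
   self.ndim, order). A standalone sketch of the C-order ('C') case of that
   computation; the generated helper also handles Fortran order, and this
   version is illustrative only (Py_ssize_t comes from the Python.h already
   included by the generated module). */
static Py_ssize_t sketch_fill_c_strides(const Py_ssize_t *shape,
                                        Py_ssize_t *strides,
                                        Py_ssize_t itemsize, int ndim)
{
    Py_ssize_t acc = itemsize;         /* innermost stride equals the item size */
    int i;
    for (i = ndim - 1; i >= 0; i--) {  /* walk dimensions inside-out */
        strides[i] = acc;
        acc *= shape[i];
    }
    return acc;                        /* total buffer length in bytes */
}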
__Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 148, __pyx_L1_error) - - /* "View.MemoryView":147 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate shape and strides.") - * - */ - } - - /* "View.MemoryView":151 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - */ - __pyx_t_8 = 0; - __pyx_t_3 = __pyx_v_shape; __Pyx_INCREF(__pyx_t_3); __pyx_t_1 = 0; - for (;;) { - if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(1, 151, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 151, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_5); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 151, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_9; - __pyx_v_idx = __pyx_t_8; - __pyx_t_8 = (__pyx_t_8 + 1); - - /* "View.MemoryView":152 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim - */ - __pyx_t_4 = ((__pyx_v_dim <= 0) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":153 - * for idx, dim in enumerate(shape): - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) # <<<<<<<<<<<<<< - * self._shape[idx] = dim - * - */ - __pyx_t_5 = __Pyx_PyInt_From_int(__pyx_v_idx); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_10 = PyTuple_New(2); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_t_6); - __pyx_t_5 = 0; - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyString_Format(__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_t_10); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_6); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 153, __pyx_L1_error) - - /* "View.MemoryView":152 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim - */ - } - - /* "View.MemoryView":154 - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim # <<<<<<<<<<<<<< - * - * cdef char order - */ - (__pyx_v_self->_shape[__pyx_v_idx]) = __pyx_v_dim; - - /* "View.MemoryView":151 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." 
% (idx, dim)) - */ - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":157 - * - * cdef char order - * if mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_fortran, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 157, __pyx_L1_error) - if (__pyx_t_4) { - - /* "View.MemoryView":158 - * cdef char order - * if mode == 'fortran': - * order = b'F' # <<<<<<<<<<<<<< - * self.mode = u'fortran' - * elif mode == 'c': - */ - __pyx_v_order = 'F'; - - /* "View.MemoryView":159 - * if mode == 'fortran': - * order = b'F' - * self.mode = u'fortran' # <<<<<<<<<<<<<< - * elif mode == 'c': - * order = b'C' - */ - __Pyx_INCREF(__pyx_n_u_fortran); - __Pyx_GIVEREF(__pyx_n_u_fortran); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_fortran; - - /* "View.MemoryView":157 - * - * cdef char order - * if mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - goto __pyx_L10; - } - - /* "View.MemoryView":160 - * order = b'F' - * self.mode = u'fortran' - * elif mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_c, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 160, __pyx_L1_error) - if (likely(__pyx_t_4)) { - - /* "View.MemoryView":161 - * self.mode = u'fortran' - * elif mode == 'c': - * order = b'C' # <<<<<<<<<<<<<< - * self.mode = u'c' - * else: - */ - __pyx_v_order = 'C'; - - /* "View.MemoryView":162 - * elif mode == 'c': - * order = b'C' - * self.mode = u'c' # <<<<<<<<<<<<<< - * else: - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) - */ - __Pyx_INCREF(__pyx_n_u_c); - __Pyx_GIVEREF(__pyx_n_u_c); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_c; - - /* "View.MemoryView":160 - * order = b'F' - * self.mode = u'fortran' - * elif mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - goto __pyx_L10; - } - - /* "View.MemoryView":164 - * self.mode = u'c' - * else: - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) # <<<<<<<<<<<<<< - * - * self.len = fill_contig_strides_array(self._shape, self._strides, - */ - /*else*/ { - __pyx_t_3 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_v_mode); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 164, __pyx_L1_error) - } - __pyx_L10:; - - /* "View.MemoryView":166 - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) - * - * self.len = fill_contig_strides_array(self._shape, self._strides, # <<<<<<<<<<<<<< - * itemsize, self.ndim, order) - * - */ - __pyx_v_self->len = __pyx_fill_contig_strides_array(__pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_itemsize, __pyx_v_self->ndim, __pyx_v_order); - - /* "View.MemoryView":169 - * itemsize, self.ndim, order) - * - * self.free_data = allocate_buffer # <<<<<<<<<<<<<< - * self.dtype_is_object = format == b'O' - * if allocate_buffer: - */ - __pyx_v_self->free_data = __pyx_v_allocate_buffer; - - /* "View.MemoryView":170 - * - * 
self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' # <<<<<<<<<<<<<< - * if allocate_buffer: - * - */ - __pyx_t_10 = PyObject_RichCompare(__pyx_v_format, __pyx_n_b_O, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 170, __pyx_L1_error) - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 170, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_v_self->dtype_is_object = __pyx_t_4; - - /* "View.MemoryView":171 - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' - * if allocate_buffer: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_4 = (__pyx_v_allocate_buffer != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":174 - * - * - * self.data = malloc(self.len) # <<<<<<<<<<<<<< - * if not self.data: - * raise MemoryError("unable to allocate array data.") - */ - __pyx_v_self->data = ((char *)malloc(__pyx_v_self->len)); - - /* "View.MemoryView":175 - * - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate array data.") - * - */ - __pyx_t_4 = ((!(__pyx_v_self->data != 0)) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":176 - * self.data = malloc(self.len) - * if not self.data: - * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<< - * - * if self.dtype_is_object: - */ - __pyx_t_10 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 176, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 176, __pyx_L1_error) - - /* "View.MemoryView":175 - * - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate array data.") - * - */ - } - - /* "View.MemoryView":178 - * raise MemoryError("unable to allocate array data.") - * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len / itemsize): - */ - __pyx_t_4 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":179 - * - * if self.dtype_is_object: - * p = self.data # <<<<<<<<<<<<<< - * for i in range(self.len / itemsize): - * p[i] = Py_None - */ - __pyx_v_p = ((PyObject **)__pyx_v_self->data); - - /* "View.MemoryView":180 - * if self.dtype_is_object: - * p = self.data - * for i in range(self.len / itemsize): # <<<<<<<<<<<<<< - * p[i] = Py_None - * Py_INCREF(Py_None) - */ - if (unlikely(__pyx_v_itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 180, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_self->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 180, __pyx_L1_error) - } - __pyx_t_1 = __Pyx_div_Py_ssize_t(__pyx_v_self->len, __pyx_v_itemsize); - __pyx_t_9 = __pyx_t_1; - for (__pyx_t_11 = 0; __pyx_t_11 < __pyx_t_9; __pyx_t_11+=1) { - __pyx_v_i = __pyx_t_11; - - /* "View.MemoryView":181 - * p = self.data - * for i in range(self.len / itemsize): - * p[i] = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - (__pyx_v_p[__pyx_v_i]) = Py_None; - - /* "View.MemoryView":182 - * for i in range(self.len / itemsize): - * p[i] = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - 
Py_INCREF(Py_None); - } - - /* "View.MemoryView":178 - * raise MemoryError("unable to allocate array data.") - * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len / itemsize): - */ - } - - /* "View.MemoryView":171 - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' - * if allocate_buffer: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":122 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_format); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":185 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * cdef int bufmode = -1 - * if self.mode == u"c": - */ - -/* Python wrapper */ -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(((struct __pyx_array_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_v_bufmode; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - char *__pyx_t_4; - Py_ssize_t __pyx_t_5; - int __pyx_t_6; - Py_ssize_t *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (__pyx_v_info == NULL) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":186 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 # <<<<<<<<<<<<<< - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_v_bufmode = -1; - - /* "View.MemoryView":187 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - __pyx_t_1 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_c, Py_EQ)); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 187, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":188 - * cdef int bufmode = -1 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_v_bufmode = 
(PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":187 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - goto __pyx_L3; - } - - /* "View.MemoryView":189 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - __pyx_t_2 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_fortran, Py_EQ)); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 189, __pyx_L1_error) - __pyx_t_1 = (__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":190 - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") - */ - __pyx_v_bufmode = (PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":189 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - } - __pyx_L3:; - - /* "View.MemoryView":191 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - */ - __pyx_t_1 = ((!((__pyx_v_flags & __pyx_v_bufmode) != 0)) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":192 - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<< - * info.buf = self.data - * info.len = self.len - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 192, __pyx_L1_error) - - /* "View.MemoryView":191 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - */ - } - - /* "View.MemoryView":193 - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data # <<<<<<<<<<<<<< - * info.len = self.len - * info.ndim = self.ndim - */ - __pyx_t_4 = __pyx_v_self->data; - __pyx_v_info->buf = __pyx_t_4; - - /* "View.MemoryView":194 - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - * info.len = self.len # <<<<<<<<<<<<<< - * info.ndim = self.ndim - * info.shape = self._shape - */ - __pyx_t_5 = __pyx_v_self->len; - __pyx_v_info->len = __pyx_t_5; - - /* "View.MemoryView":195 - * info.buf = self.data - * info.len = self.len - * info.ndim = self.ndim # <<<<<<<<<<<<<< - * info.shape = self._shape - * info.strides = self._strides - */ - __pyx_t_6 = __pyx_v_self->ndim; - __pyx_v_info->ndim = __pyx_t_6; - - /* "View.MemoryView":196 - * info.len = self.len - * info.ndim = self.ndim - * 
info.shape = self._shape # <<<<<<<<<<<<<< - * info.strides = self._strides - * info.suboffsets = NULL - */ - __pyx_t_7 = __pyx_v_self->_shape; - __pyx_v_info->shape = __pyx_t_7; - - /* "View.MemoryView":197 - * info.ndim = self.ndim - * info.shape = self._shape - * info.strides = self._strides # <<<<<<<<<<<<<< - * info.suboffsets = NULL - * info.itemsize = self.itemsize - */ - __pyx_t_7 = __pyx_v_self->_strides; - __pyx_v_info->strides = __pyx_t_7; - - /* "View.MemoryView":198 - * info.shape = self._shape - * info.strides = self._strides - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * info.itemsize = self.itemsize - * info.readonly = 0 - */ - __pyx_v_info->suboffsets = NULL; - - /* "View.MemoryView":199 - * info.strides = self._strides - * info.suboffsets = NULL - * info.itemsize = self.itemsize # <<<<<<<<<<<<<< - * info.readonly = 0 - * - */ - __pyx_t_5 = __pyx_v_self->itemsize; - __pyx_v_info->itemsize = __pyx_t_5; - - /* "View.MemoryView":200 - * info.suboffsets = NULL - * info.itemsize = self.itemsize - * info.readonly = 0 # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - __pyx_v_info->readonly = 0; - - /* "View.MemoryView":202 - * info.readonly = 0 - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.format - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":203 - * - * if flags & PyBUF_FORMAT: - * info.format = self.format # <<<<<<<<<<<<<< - * else: - * info.format = NULL - */ - __pyx_t_4 = __pyx_v_self->format; - __pyx_v_info->format = __pyx_t_4; - - /* "View.MemoryView":202 - * info.readonly = 0 - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.format - * else: - */ - goto __pyx_L5; - } - - /* "View.MemoryView":205 - * info.format = self.format - * else: - * info.format = NULL # <<<<<<<<<<<<<< - * - * info.obj = self - */ - /*else*/ { - __pyx_v_info->format = NULL; - } - __pyx_L5:; - - /* "View.MemoryView":207 - * info.format = NULL - * - * info.obj = self # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":185 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * cdef int bufmode = -1 - * if self.mode == u"c": - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.array.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":211 - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - -/* Python wrapper */ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - 
__Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":212 - * - * def __dealloc__(array self): - * if self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data: - */ - __pyx_t_1 = ((__pyx_v_self->callback_free_data != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":213 - * def __dealloc__(array self): - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) # <<<<<<<<<<<<<< - * elif self.free_data: - * if self.dtype_is_object: - */ - __pyx_v_self->callback_free_data(__pyx_v_self->data); - - /* "View.MemoryView":212 - * - * def __dealloc__(array self): - * if self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":214 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, - */ - __pyx_t_1 = (__pyx_v_self->free_data != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":215 - * self.callback_free_data(self.data) - * elif self.free_data: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - */ - __pyx_t_1 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":216 - * elif self.free_data: - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, # <<<<<<<<<<<<<< - * self._strides, self.ndim, False) - * free(self.data) - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_self->data, __pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_self->ndim, 0); - - /* "View.MemoryView":215 - * self.callback_free_data(self.data) - * elif self.free_data: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - */ - } - - /* "View.MemoryView":218 - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - * free(self.data) # <<<<<<<<<<<<<< - * PyObject_Free(self._shape) - * - */ - free(__pyx_v_self->data); - - /* "View.MemoryView":214 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, - */ - } - __pyx_L3:; - - /* "View.MemoryView":219 - * self._strides, self.ndim, False) - * free(self.data) - * PyObject_Free(self._shape) # <<<<<<<<<<<<<< - * - * @property - */ - PyObject_Free(__pyx_v_self->_shape); - - /* "View.MemoryView":211 - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":222 - * - * @property - * def memview(self): # <<<<<<<<<<<<<< - * return self.get_memview() - * - */ - -/* Python wrapper */ -static 
PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_5array_7memview___get__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":223 - * @property - * def memview(self): - * return self.get_memview() # <<<<<<<<<<<<<< - * - * @cname('get_memview') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = ((struct __pyx_vtabstruct_array *)__pyx_v_self->__pyx_vtab)->get_memview(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 223, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":222 - * - * @property - * def memview(self): # <<<<<<<<<<<<<< - * return self.get_memview() - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.memview.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":226 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self) { - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_memview", 0); - - /* "View.MemoryView":227 - * @cname('get_memview') - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE # <<<<<<<<<<<<<< - * return memoryview(self, flags, self.dtype_is_object) - * - */ - __pyx_v_flags = ((PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT) | PyBUF_WRITABLE); - - /* "View.MemoryView":228 - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - 
__pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":226 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.array.get_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":230 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - -/* Python wrapper */ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":231 - * - * def __len__(self): - * return self._shape[0] # <<<<<<<<<<<<<< - * - * def __getattr__(self, attr): - */ - __pyx_r = (__pyx_v_self->_shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":230 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":233 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr); /*proto*/ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getattr__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_attr)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getattr__", 0); - - /* "View.MemoryView":234 - * - * def __getattr__(self, attr): - * return getattr(self.memview, attr) # <<<<<<<<<<<<<< - * - * def __getitem__(self, item): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), 
__pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 234, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetAttr(__pyx_t_1, __pyx_v_attr); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 234, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":233 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getattr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":236 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item); /*proto*/ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":237 - * - * def __getitem__(self, item): - * return self.memview[item] # <<<<<<<<<<<<<< - * - * def __setitem__(self, item, value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 237, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_item); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 237, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":236 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":239 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - -/* Python wrapper */ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - 
__pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - - /* "View.MemoryView":240 - * - * def __setitem__(self, item, value): - * self.memview[item] = value # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (unlikely(PyObject_SetItem(__pyx_t_1, __pyx_v_item, __pyx_v_value) < 0)) __PYX_ERR(1, 240, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":239 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_array___reduce_cython__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__reduce_cython__", 
__pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_array_2__setstate_cython__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":244 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<< - * char *mode, char *buf): - * cdef array result - */ - -static struct __pyx_array_obj *__pyx_array_new(PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, char *__pyx_v_format, char *__pyx_v_mode, char *__pyx_v_buf) { - struct __pyx_array_obj *__pyx_v_result = 0; - struct __pyx_array_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("array_cwrapper", 0); - - /* "View.MemoryView":248 - * cdef array result - * - * if buf == NULL: # <<<<<<<<<<<<<< - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - */ - __pyx_t_1 = ((__pyx_v_buf == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":249 - * - * if buf == 
NULL: - * result = array(shape, itemsize, format, mode.decode('ASCII')) # <<<<<<<<<<<<<< - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), - */ - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4); - __pyx_t_2 = 0; - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":248 - * cdef array result - * - * if buf == NULL: # <<<<<<<<<<<<<< - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":251 - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<< - * allocate_buffer=False) - * result.data = buf - */ - /*else*/ { - __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyTuple_New(4); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_t_3); - __pyx_t_4 = 0; - __pyx_t_5 = 0; - __pyx_t_3 = 0; - - /* "View.MemoryView":252 - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), - * allocate_buffer=False) # <<<<<<<<<<<<<< - * result.data = buf - * - */ - __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_allocate_buffer, Py_False) < 0) __PYX_ERR(1, 252, __pyx_L1_error) - - /* "View.MemoryView":251 - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<< - * allocate_buffer=False) - * result.data = buf - */ - __pyx_t_5 = 
__Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_5); - __pyx_t_5 = 0; - - /* "View.MemoryView":253 - * result = array(shape, itemsize, format, mode.decode('ASCII'), - * allocate_buffer=False) - * result.data = buf # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->data = __pyx_v_buf; - } - __pyx_L3:; - - /* "View.MemoryView":255 - * result.data = buf - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(((PyObject *)__pyx_r)); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":244 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<< - * char *mode, char *buf): - * cdef array result - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.array_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":281 - * cdef class Enum(object): - * cdef object name - * def __init__(self, name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - -/* Python wrapper */ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_name = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_name,0}; - PyObject* values[1] = {0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_name)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(1, 281, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 1) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - } - __pyx_v_name = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__init__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 281, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.Enum.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), __pyx_v_name); - - /* function 
exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__", 0); - - /* "View.MemoryView":282 - * cdef object name - * def __init__(self, name): - * self.name = name # <<<<<<<<<<<<<< - * def __repr__(self): - * return self.name - */ - __Pyx_INCREF(__pyx_v_name); - __Pyx_GIVEREF(__pyx_v_name); - __Pyx_GOTREF(__pyx_v_self->name); - __Pyx_DECREF(__pyx_v_self->name); - __pyx_v_self->name = __pyx_v_name; - - /* "View.MemoryView":281 - * cdef class Enum(object): - * cdef object name - * def __init__(self, name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - - /* function exit code */ - __pyx_r = 0; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":283 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - -/* Python wrapper */ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":284 - * self.name = name - * def __repr__(self): - * return self.name # <<<<<<<<<<<<<< - * - * cdef generic = Enum("") - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->name); - __pyx_r = __pyx_v_self->name; - goto __pyx_L0; - - /* "View.MemoryView":283 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_MemviewEnum___reduce_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_v_state = 0; - PyObject *__pyx_v__dict = 0; - int __pyx_v_use_setstate; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 
0); - - /* "(tree fragment)":5 - * cdef object _dict - * cdef bint use_setstate - * state = (self.name,) # <<<<<<<<<<<<<< - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_self->name); - __Pyx_GIVEREF(__pyx_v_self->name); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_self->name); - __pyx_v_state = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "(tree fragment)":6 - * cdef bint use_setstate - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<< - * if _dict is not None: - * state += (_dict,) - */ - __pyx_t_1 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v__dict = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - __pyx_t_2 = (__pyx_v__dict != Py_None); - __pyx_t_3 = (__pyx_t_2 != 0); - if (__pyx_t_3) { - - /* "(tree fragment)":8 - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - * state += (_dict,) # <<<<<<<<<<<<<< - * use_setstate = True - * else: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v__dict); - __Pyx_GIVEREF(__pyx_v__dict); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v__dict); - __pyx_t_4 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_4)); - __pyx_t_4 = 0; - - /* "(tree fragment)":9 - * if _dict is not None: - * state += (_dict,) - * use_setstate = True # <<<<<<<<<<<<<< - * else: - * use_setstate = self.name is not None - */ - __pyx_v_use_setstate = 1; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - goto __pyx_L3; - } - - /* "(tree fragment)":11 - * use_setstate = True - * else: - * use_setstate = self.name is not None # <<<<<<<<<<<<<< - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - */ - /*else*/ { - __pyx_t_3 = (__pyx_v_self->name != Py_None); - __pyx_v_use_setstate = __pyx_t_3; - } - __pyx_L3:; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - */ - __pyx_t_3 = (__pyx_v_use_setstate != 0); - if (__pyx_t_3) { - - /* "(tree fragment)":13 - * use_setstate = self.name is not None - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state # <<<<<<<<<<<<<< - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - 
PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_184977713); - __Pyx_GIVEREF(__pyx_int_184977713); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_1, 2, Py_None); - __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_1); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_v_state); - __pyx_t_4 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - */ - } - - /* "(tree fragment)":15 - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_184977713); - __Pyx_GIVEREF(__pyx_int_184977713); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1); - __pyx_t_5 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - } - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.Enum.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_state); - __Pyx_XDECREF(__pyx_v__dict); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ 
(wrapper)", 0); - __pyx_r = __pyx_pf___pyx_MemviewEnum_2__setstate_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":17 - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) # <<<<<<<<<<<<<< - */ - if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 17, __pyx_L1_error) - __pyx_t_1 = __pyx_unpickle_Enum__set_state(__pyx_v_self, ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.Enum.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":298 - * - * @cname('__pyx_align_pointer') - * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<< - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory - */ - -static void *__pyx_align_pointer(void *__pyx_v_memory, size_t __pyx_v_alignment) { - Py_intptr_t __pyx_v_aligned_p; - size_t __pyx_v_offset; - void *__pyx_r; - int __pyx_t_1; - - /* "View.MemoryView":300 - * cdef void *align_pointer(void *memory, size_t alignment) nogil: - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory # <<<<<<<<<<<<<< - * cdef size_t offset - * - */ - __pyx_v_aligned_p = ((Py_intptr_t)__pyx_v_memory); - - /* "View.MemoryView":304 - * - * with cython.cdivision(True): - * offset = aligned_p % alignment # <<<<<<<<<<<<<< - * - * if offset > 0: - */ - __pyx_v_offset = (__pyx_v_aligned_p % __pyx_v_alignment); - - /* "View.MemoryView":306 - * offset = aligned_p % alignment - * - * if offset > 0: # <<<<<<<<<<<<<< - * aligned_p += alignment - offset - * - */ - __pyx_t_1 = ((__pyx_v_offset > 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":307 - * - * if offset > 0: - * aligned_p += alignment - offset # <<<<<<<<<<<<<< - * - * return aligned_p - */ - __pyx_v_aligned_p = (__pyx_v_aligned_p + (__pyx_v_alignment - __pyx_v_offset)); - - /* "View.MemoryView":306 - * offset = aligned_p % alignment - * - * if offset > 0: # <<<<<<<<<<<<<< - * aligned_p += alignment - offset - * - */ - } - - /* "View.MemoryView":309 - * aligned_p += alignment - offset - * - * return aligned_p # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = ((void *)__pyx_v_aligned_p); - goto __pyx_L0; - 
- /* "View.MemoryView":298 - * - * @cname('__pyx_align_pointer') - * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<< - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":345 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - -/* Python wrapper */ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_obj = 0; - int __pyx_v_flags; - int __pyx_v_dtype_is_object; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_obj,&__pyx_n_s_flags,&__pyx_n_s_dtype_is_object,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_obj)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_flags)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, 1); __PYX_ERR(1, 345, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dtype_is_object); - if (value) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 345, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_obj = values[0]; - __pyx_v_flags = __Pyx_PyInt_As_int(values[1]); if (unlikely((__pyx_v_flags == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 345, __pyx_L3_error) - if (values[2]) { - __pyx_v_dtype_is_object = __Pyx_PyObject_IsTrue(values[2]); if (unlikely((__pyx_v_dtype_is_object == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 345, __pyx_L3_error) - } else { - __pyx_v_dtype_is_object = ((int)0); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 345, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = 
__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_obj, __pyx_v_flags, __pyx_v_dtype_is_object); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - - /* "View.MemoryView":346 - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj # <<<<<<<<<<<<<< - * self.flags = flags - * if type(self) is memoryview or obj is not None: - */ - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - __Pyx_GOTREF(__pyx_v_self->obj); - __Pyx_DECREF(__pyx_v_self->obj); - __pyx_v_self->obj = __pyx_v_obj; - - /* "View.MemoryView":347 - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj - * self.flags = flags # <<<<<<<<<<<<<< - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - */ - __pyx_v_self->flags = __pyx_v_flags; - - /* "View.MemoryView":348 - * self.obj = obj - * self.flags = flags - * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - __pyx_t_2 = (((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))) == ((PyObject *)__pyx_memoryview_type)); - __pyx_t_3 = (__pyx_t_2 != 0); - if (!__pyx_t_3) { - } else { - __pyx_t_1 = __pyx_t_3; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_3 = (__pyx_v_obj != Py_None); - __pyx_t_2 = (__pyx_t_3 != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (__pyx_t_1) { - - /* "View.MemoryView":349 - * self.flags = flags - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) # <<<<<<<<<<<<<< - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None - */ - __pyx_t_4 = __Pyx_GetBuffer(__pyx_v_obj, (&__pyx_v_self->view), __pyx_v_flags); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 349, __pyx_L1_error) - - /* "View.MemoryView":350 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_t_1 = ((((PyObject *)__pyx_v_self->view.obj) == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":351 - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = Py_None; - - /* "View.MemoryView":352 - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * global __pyx_memoryview_thread_locks_used - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":350 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - } - - /* "View.MemoryView":348 - * self.obj = obj - * self.flags = flags - * 
if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - } - - /* "View.MemoryView":355 - * - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - __pyx_t_1 = ((__pyx_memoryview_thread_locks_used < 8) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":356 - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - */ - __pyx_v_self->lock = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - - /* "View.MemoryView":357 - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 # <<<<<<<<<<<<<< - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used + 1); - - /* "View.MemoryView":355 - * - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - } - - /* "View.MemoryView":358 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":359 - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() # <<<<<<<<<<<<<< - * if self.lock is NULL: - * raise MemoryError - */ - __pyx_v_self->lock = PyThread_allocate_lock(); - - /* "View.MemoryView":360 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":361 - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - PyErr_NoMemory(); __PYX_ERR(1, 361, __pyx_L1_error) - - /* "View.MemoryView":360 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - } - - /* "View.MemoryView":358 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - } - - /* "View.MemoryView":363 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":364 - * - * if flags & PyBUF_FORMAT: - * self.dtype_is_object = (self.view.format[0] == 
b'O' and self.view.format[1] == b'\0') # <<<<<<<<<<<<<< - * else: - * self.dtype_is_object = dtype_is_object - */ - __pyx_t_2 = (((__pyx_v_self->view.format[0]) == 'O') != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_2 = (((__pyx_v_self->view.format[1]) == '\x00') != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L11_bool_binop_done:; - __pyx_v_self->dtype_is_object = __pyx_t_1; - - /* "View.MemoryView":363 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - goto __pyx_L10; - } - - /* "View.MemoryView":366 - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - * self.dtype_is_object = dtype_is_object # <<<<<<<<<<<<<< - * - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( - */ - /*else*/ { - __pyx_v_self->dtype_is_object = __pyx_v_dtype_is_object; - } - __pyx_L10:; - - /* "View.MemoryView":368 - * self.dtype_is_object = dtype_is_object - * - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( # <<<<<<<<<<<<<< - * &self.acquisition_count[0], sizeof(__pyx_atomic_int)) - * self.typeinfo = NULL - */ - __pyx_v_self->acquisition_count_aligned_p = ((__pyx_atomic_int *)__pyx_align_pointer(((void *)(&(__pyx_v_self->acquisition_count[0]))), (sizeof(__pyx_atomic_int)))); - - /* "View.MemoryView":370 - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( - * &self.acquisition_count[0], sizeof(__pyx_atomic_int)) - * self.typeinfo = NULL # <<<<<<<<<<<<<< - * - * def __dealloc__(memoryview self): - */ - __pyx_v_self->typeinfo = NULL; - - /* "View.MemoryView":345 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":372 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - -/* Python wrapper */ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self) { - int __pyx_v_i; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - PyThread_type_lock __pyx_t_6; - PyThread_type_lock __pyx_t_7; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":373 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - __pyx_t_1 = (__pyx_v_self->obj != Py_None); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* 
"View.MemoryView":374 - * def __dealloc__(memoryview self): - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) # <<<<<<<<<<<<<< - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - */ - __Pyx_ReleaseBuffer((&__pyx_v_self->view)); - - /* "View.MemoryView":373 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":375 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - __pyx_t_2 = ((((Py_buffer *)(&__pyx_v_self->view))->obj == Py_None) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":377 - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - * (<__pyx_buffer *> &self.view).obj = NULL # <<<<<<<<<<<<<< - * Py_DECREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = NULL; - - /* "View.MemoryView":378 - * - * (<__pyx_buffer *> &self.view).obj = NULL - * Py_DECREF(Py_None) # <<<<<<<<<<<<<< - * - * cdef int i - */ - Py_DECREF(Py_None); - - /* "View.MemoryView":375 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - } - __pyx_L3:; - - /* "View.MemoryView":382 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - __pyx_t_2 = ((__pyx_v_self->lock != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":383 - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): # <<<<<<<<<<<<<< - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - */ - __pyx_t_3 = __pyx_memoryview_thread_locks_used; - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { - __pyx_v_i = __pyx_t_5; - - /* "View.MemoryView":384 - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - __pyx_t_2 = (((__pyx_memoryview_thread_locks[__pyx_v_i]) == __pyx_v_self->lock) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":385 - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 # <<<<<<<<<<<<<< - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used - 1); - - /* "View.MemoryView":386 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - __pyx_t_2 = ((__pyx_v_i != __pyx_memoryview_thread_locks_used) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":388 - 
* if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) # <<<<<<<<<<<<<< - * break - * else: - */ - __pyx_t_6 = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - __pyx_t_7 = (__pyx_memoryview_thread_locks[__pyx_v_i]); - - /* "View.MemoryView":387 - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break - */ - (__pyx_memoryview_thread_locks[__pyx_v_i]) = __pyx_t_6; - (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]) = __pyx_t_7; - - /* "View.MemoryView":386 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - } - - /* "View.MemoryView":389 - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break # <<<<<<<<<<<<<< - * else: - * PyThread_free_lock(self.lock) - */ - goto __pyx_L6_break; - - /* "View.MemoryView":384 - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - } - } - /*else*/ { - - /* "View.MemoryView":391 - * break - * else: - * PyThread_free_lock(self.lock) # <<<<<<<<<<<<<< - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - */ - PyThread_free_lock(__pyx_v_self->lock); - } - __pyx_L6_break:; - - /* "View.MemoryView":382 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - } - - /* "View.MemoryView":372 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":393 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - Py_ssize_t __pyx_v_dim; - char *__pyx_v_itemp; - PyObject *__pyx_v_idx = NULL; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - Py_ssize_t __pyx_t_6; - char *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_item_pointer", 0); 
- - /* "View.MemoryView":395 - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf # <<<<<<<<<<<<<< - * - * for dim, idx in enumerate(index): - */ - __pyx_v_itemp = ((char *)__pyx_v_self->view.buf); - - /* "View.MemoryView":397 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - __pyx_t_1 = 0; - if (likely(PyList_CheckExact(__pyx_v_index)) || PyTuple_CheckExact(__pyx_v_index)) { - __pyx_t_2 = __pyx_v_index; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 397, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 397, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 397, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 397, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 397, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 397, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } - } else { - __pyx_t_5 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_5)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 397, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_5); - } - __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_5); - __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_1; - __pyx_t_1 = (__pyx_t_1 + 1); - - /* "View.MemoryView":398 - * - * for dim, idx in enumerate(index): - * itemp = pybuffer_index(&self.view, itemp, idx, dim) # <<<<<<<<<<<<<< - * - * return itemp - */ - __pyx_t_6 = __Pyx_PyIndex_AsSsize_t(__pyx_v_idx); if (unlikely((__pyx_t_6 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 398, __pyx_L1_error) - __pyx_t_7 = __pyx_pybuffer_index((&__pyx_v_self->view), __pyx_v_itemp, __pyx_t_6, __pyx_v_dim); if (unlikely(__pyx_t_7 == ((char *)NULL))) __PYX_ERR(1, 398, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_7; - - /* "View.MemoryView":397 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":400 - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - * return itemp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_itemp; - goto __pyx_L0; - - /* "View.MemoryView":393 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - - /* function exit code */ - 
__pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.get_item_pointer", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_idx); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":403 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index); /*proto*/ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_indices = NULL; - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - char *__pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":404 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - __pyx_t_1 = (__pyx_v_index == __pyx_builtin_Ellipsis); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":405 - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: - * return self # <<<<<<<<<<<<<< - * - * have_slices, indices = _unellipsify(index, self.view.ndim) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __pyx_r = ((PyObject *)__pyx_v_self); - goto __pyx_L0; - - /* "View.MemoryView":404 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - } - - /* "View.MemoryView":407 - * return self - * - * have_slices, indices = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * cdef char *itemp - */ - __pyx_t_3 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 407, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (likely(__pyx_t_3 != Py_None)) { - PyObject* sequence = __pyx_t_3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 407, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - #else - __pyx_t_4 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 407, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 407, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } else { - 
__Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 407, __pyx_L1_error) - } - __pyx_v_have_slices = __pyx_t_4; - __pyx_t_4 = 0; - __pyx_v_indices = __pyx_t_5; - __pyx_t_5 = 0; - - /* "View.MemoryView":410 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 410, __pyx_L1_error) - if (__pyx_t_2) { - - /* "View.MemoryView":411 - * cdef char *itemp - * if have_slices: - * return memview_slice(self, indices) # <<<<<<<<<<<<<< - * else: - * itemp = self.get_item_pointer(indices) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = ((PyObject *)__pyx_memview_slice(__pyx_v_self, __pyx_v_indices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 411, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":410 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - } - - /* "View.MemoryView":413 - * return memview_slice(self, indices) - * else: - * itemp = self.get_item_pointer(indices) # <<<<<<<<<<<<<< - * return self.convert_item_to_object(itemp) - * - */ - /*else*/ { - __pyx_t_6 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_indices); if (unlikely(__pyx_t_6 == ((char *)NULL))) __PYX_ERR(1, 413, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_6; - - /* "View.MemoryView":414 - * else: - * itemp = self.get_item_pointer(indices) - * return self.convert_item_to_object(itemp) # <<<<<<<<<<<<<< - * - * def __setitem__(memoryview self, object index, object value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->convert_item_to_object(__pyx_v_self, __pyx_v_itemp); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 414, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":403 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_indices); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":416 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") - */ - -/* Python wrapper */ -static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int 
__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_obj = NULL; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - __Pyx_INCREF(__pyx_v_index); - - /* "View.MemoryView":417 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError("Cannot assign to read-only memoryview") - * - */ - __pyx_t_1 = (__pyx_v_self->view.readonly != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":418 - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<< - * - * have_slices, index = _unellipsify(index, self.view.ndim) - */ - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 418, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 418, __pyx_L1_error) - - /* "View.MemoryView":417 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError("Cannot assign to read-only memoryview") - * - */ - } - - /* "View.MemoryView":420 - * raise TypeError("Cannot assign to read-only memoryview") - * - * have_slices, index = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * if have_slices: - */ - __pyx_t_2 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (likely(__pyx_t_2 != Py_None)) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 420, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 420, __pyx_L1_error) - } - __pyx_v_have_slices = __pyx_t_3; - __pyx_t_3 = 0; - __Pyx_DECREF_SET(__pyx_v_index, __pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":422 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 422, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":423 - * - * if have_slices: - * obj = self.is_slice(value) # <<<<<<<<<<<<<< - * if obj: - * self.setitem_slice_assignment(self[index], obj) - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview 
*)__pyx_v_self->__pyx_vtab)->is_slice(__pyx_v_self, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 423, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_obj = __pyx_t_2; - __pyx_t_2 = 0; - - /* "View.MemoryView":424 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_obj); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 424, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":425 - * obj = self.is_slice(value) - * if obj: - * self.setitem_slice_assignment(self[index], obj) # <<<<<<<<<<<<<< - * else: - * self.setitem_slice_assign_scalar(self[index], value) - */ - __pyx_t_2 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 425, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assignment(__pyx_v_self, __pyx_t_2, __pyx_v_obj); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 425, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "View.MemoryView":424 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - goto __pyx_L5; - } - - /* "View.MemoryView":427 - * self.setitem_slice_assignment(self[index], obj) - * else: - * self.setitem_slice_assign_scalar(self[index], value) # <<<<<<<<<<<<<< - * else: - * self.setitem_indexed(index, value) - */ - /*else*/ { - __pyx_t_4 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_memoryview_type))))) __PYX_ERR(1, 427, __pyx_L1_error) - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assign_scalar(__pyx_v_self, ((struct __pyx_memoryview_obj *)__pyx_t_4), __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_L5:; - - /* "View.MemoryView":422 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":429 - * self.setitem_slice_assign_scalar(self[index], value) - * else: - * self.setitem_indexed(index, value) # <<<<<<<<<<<<<< - * - * cdef is_slice(self, obj): - */ - /*else*/ { - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_indexed(__pyx_v_self, __pyx_v_index, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_L4:; - - /* "View.MemoryView":416 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.memoryview.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; 
- __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_obj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":431 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_slice", 0); - __Pyx_INCREF(__pyx_v_obj); - - /* "View.MemoryView":432 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_obj, __pyx_memoryview_type); - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":433 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_4, &__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_5); - /*try:*/ { - - /* "View.MemoryView":434 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_6 = __Pyx_PyInt_From_int(((__pyx_v_self->flags & (~PyBUF_WRITABLE)) | PyBUF_ANY_CONTIGUOUS)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 434, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_6); - - /* "View.MemoryView":435 - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) # <<<<<<<<<<<<<< - * except TypeError: - * return None - */ - __pyx_t_7 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 435, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - - /* "View.MemoryView":434 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_8 = PyTuple_New(3); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 434, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_obj); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_t_6); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_t_7); - __pyx_t_6 = 0; - __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_8, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 434, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF_SET(__pyx_v_obj, __pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":433 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | 
PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - goto __pyx_L9_try_end; - __pyx_L4_error:; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "View.MemoryView":436 - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - * except TypeError: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_9 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_TypeError); - if (__pyx_t_9) { - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_6) < 0) __PYX_ERR(1, 436, __pyx_L6_except_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GOTREF(__pyx_t_8); - __Pyx_GOTREF(__pyx_t_6); - - /* "View.MemoryView":437 - * self.dtype_is_object) - * except TypeError: - * return None # <<<<<<<<<<<<<< - * - * return obj - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L7_except_return; - } - goto __pyx_L6_except_error; - __pyx_L6_except_error:; - - /* "View.MemoryView":433 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L1_error; - __pyx_L7_except_return:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L0; - __pyx_L9_try_end:; - } - - /* "View.MemoryView":432 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - } - - /* "View.MemoryView":439 - * return None - * - * return obj # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assignment(self, dst, src): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_obj); - __pyx_r = __pyx_v_obj; - goto __pyx_L0; - - /* "View.MemoryView":431 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_obj); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":441 - * return obj - * - * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src) { - __Pyx_memviewslice __pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_src_slice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - __Pyx_memviewslice *__pyx_t_2; - PyObject *__pyx_t_3 
= NULL; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assignment", 0); - - /* "View.MemoryView":445 - * cdef __Pyx_memviewslice src_slice - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<< - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) - */ - if (!(likely(((__pyx_v_src) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_src, __pyx_memoryview_type))))) __PYX_ERR(1, 445, __pyx_L1_error) - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_src), (&__pyx_v_src_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 445, __pyx_L1_error) - - /* "View.MemoryView":446 - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], - * get_slice_from_memview(dst, &dst_slice)[0], # <<<<<<<<<<<<<< - * src.ndim, dst.ndim, self.dtype_is_object) - * - */ - if (!(likely(((__pyx_v_dst) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_dst, __pyx_memoryview_type))))) __PYX_ERR(1, 446, __pyx_L1_error) - __pyx_t_2 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_dst), (&__pyx_v_dst_slice)); if (unlikely(__pyx_t_2 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 446, __pyx_L1_error) - - /* "View.MemoryView":447 - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_src, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_dst, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":445 - * cdef __Pyx_memviewslice src_slice - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<< - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) - */ - __pyx_t_6 = __pyx_memoryview_copy_contents((__pyx_t_1[0]), (__pyx_t_2[0]), __pyx_t_4, __pyx_t_5, __pyx_v_self->dtype_is_object); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 445, __pyx_L1_error) - - /* "View.MemoryView":441 - * return obj - * - * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assignment", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":449 - * src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # 
<<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value) { - int __pyx_v_array[0x80]; - void *__pyx_v_tmp; - void *__pyx_v_item; - __Pyx_memviewslice *__pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_tmp_slice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - char const *__pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assign_scalar", 0); - - /* "View.MemoryView":451 - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - * cdef int array[128] - * cdef void *tmp = NULL # <<<<<<<<<<<<<< - * cdef void *item - * - */ - __pyx_v_tmp = NULL; - - /* "View.MemoryView":456 - * cdef __Pyx_memviewslice *dst_slice - * cdef __Pyx_memviewslice tmp_slice - * dst_slice = get_slice_from_memview(dst, &tmp_slice) # <<<<<<<<<<<<<< - * - * if <size_t>self.view.itemsize > sizeof(array): - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_dst, (&__pyx_v_tmp_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 456, __pyx_L1_error) - __pyx_v_dst_slice = __pyx_t_1; - - /* "View.MemoryView":458 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if <size_t>self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - __pyx_t_2 = ((((size_t)__pyx_v_self->view.itemsize) > (sizeof(__pyx_v_array))) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":459 - * - * if <size_t>self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) # <<<<<<<<<<<<<< - * if tmp == NULL: - * raise MemoryError - */ - __pyx_v_tmp = PyMem_Malloc(__pyx_v_self->view.itemsize); - - /* "View.MemoryView":460 - * if <size_t>self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - __pyx_t_2 = ((__pyx_v_tmp == NULL) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":461 - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * item = tmp - * else: - */ - PyErr_NoMemory(); __PYX_ERR(1, 461, __pyx_L1_error) - - /* "View.MemoryView":460 - * if <size_t>self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - } - - /* "View.MemoryView":462 - * if tmp == NULL: - * raise MemoryError - * item = tmp # <<<<<<<<<<<<<< - * else: - * item = <void *> array - */ - __pyx_v_item = __pyx_v_tmp; - - /* "View.MemoryView":458 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if <size_t>self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":464 - * item = tmp - * else: - * item = <void *> array # <<<<<<<<<<<<<< - * - * try: - */ - /*else*/ { - __pyx_v_item = ((void *)__pyx_v_array); - } - __pyx_L3:; - - /* "View.MemoryView":466 - * item = <void *> array - * - * try: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * (<PyObject **> item)[0] = <PyObject *> value - */ - 
/*try:*/ { - - /* "View.MemoryView":467 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * (<PyObject **> item)[0] = <PyObject *> value - * else: - */ - __pyx_t_2 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":468 - * try: - * if self.dtype_is_object: - * (<PyObject **> item)[0] = <PyObject *> value # <<<<<<<<<<<<<< - * else: - * self.assign_item_from_object(<char *> item, value) - */ - (((PyObject **)__pyx_v_item)[0]) = ((PyObject *)__pyx_v_value); - - /* "View.MemoryView":467 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * (<PyObject **> item)[0] = <PyObject *> value - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":470 - * (<PyObject **> item)[0] = <PyObject *> value - * else: - * self.assign_item_from_object(<char *> item, value) # <<<<<<<<<<<<<< - * - * - */ - /*else*/ { - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, ((char *)__pyx_v_item), __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 470, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L8:; - - /* "View.MemoryView":474 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - __pyx_t_2 = ((__pyx_v_self->view.suboffsets != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":475 - * - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) # <<<<<<<<<<<<<< - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - * item, self.dtype_is_object) - */ - __pyx_t_3 = assert_direct_dimensions(__pyx_v_self->view.suboffsets, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 475, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":474 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - } - - /* "View.MemoryView":476 - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, # <<<<<<<<<<<<<< - * item, self.dtype_is_object) - * finally: - */ - __pyx_memoryview_slice_assign_scalar(__pyx_v_dst_slice, __pyx_v_dst->view.ndim, __pyx_v_self->view.itemsize, __pyx_v_item, __pyx_v_self->dtype_is_object); - } - - /* "View.MemoryView":479 - * item, self.dtype_is_object) - * finally: - * PyMem_Free(tmp) # <<<<<<<<<<<<<< - * - * cdef setitem_indexed(self, index, value): - */ - /*finally:*/ { - /*normal exit:*/{ - PyMem_Free(__pyx_v_tmp); - goto __pyx_L7; - } - __pyx_L6_error:; - /*exception exit:*/{ - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12); - if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0)) __Pyx_ErrFetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_7); - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_10); - __Pyx_XGOTREF(__pyx_t_11); - __Pyx_XGOTREF(__pyx_t_12); - __pyx_t_4 = __pyx_lineno; __pyx_t_5 = __pyx_clineno; __pyx_t_6 = __pyx_filename; - { - PyMem_Free(__pyx_v_tmp); - } - if 
(PY_MAJOR_VERSION >= 3) { - __Pyx_XGIVEREF(__pyx_t_10); - __Pyx_XGIVEREF(__pyx_t_11); - __Pyx_XGIVEREF(__pyx_t_12); - __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12); - } - __Pyx_XGIVEREF(__pyx_t_7); - __Pyx_XGIVEREF(__pyx_t_8); - __Pyx_XGIVEREF(__pyx_t_9); - __Pyx_ErrRestore(__pyx_t_7, __pyx_t_8, __pyx_t_9); - __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __pyx_lineno = __pyx_t_4; __pyx_clineno = __pyx_t_5; __pyx_filename = __pyx_t_6; - goto __pyx_L1_error; - } - __pyx_L7:; - } - - /* "View.MemoryView":449 - * src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assign_scalar", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":481 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) - */ - -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - char *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_indexed", 0); - - /* "View.MemoryView":482 - * - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) # <<<<<<<<<<<<<< - * self.assign_item_from_object(itemp, value) - * - */ - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_index); if (unlikely(__pyx_t_1 == ((char *)NULL))) __PYX_ERR(1, 482, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_1; - - /* "View.MemoryView":483 - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 483, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":481 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_indexed", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":485 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to 
convert the type""" - */ - -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_v_struct = NULL; - PyObject *__pyx_v_bytesitem = 0; - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - PyObject *__pyx_t_9 = NULL; - size_t __pyx_t_10; - int __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":488 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef bytes bytesitem - * - */ - __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 488, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":491 - * cdef bytes bytesitem - * - * bytesitem = itemp[:self.view.itemsize] # <<<<<<<<<<<<<< - * try: - * result = struct.unpack(self.view.format, bytesitem) - */ - __pyx_t_1 = __Pyx_PyBytes_FromStringAndSize(__pyx_v_itemp + 0, __pyx_v_self->view.itemsize - 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 491, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_bytesitem = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":492 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_2, &__pyx_t_3, &__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - /*try:*/ { - - /* "View.MemoryView":493 - * bytesitem = itemp[:self.view.itemsize] - * try: - * result = struct.unpack(self.view.format, bytesitem) # <<<<<<<<<<<<<< - * except struct.error: - * raise ValueError("Unable to convert item to object") - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_unpack); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_8 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_5)) { - PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem}; - __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) { - PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem}; - __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else - #endif - { - __pyx_t_9 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_9); - if (__pyx_t_7) { - __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_7); __pyx_t_7 = NULL; - } - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_9, 0+__pyx_t_8, __pyx_t_6); - __Pyx_INCREF(__pyx_v_bytesitem); - __Pyx_GIVEREF(__pyx_v_bytesitem); - PyTuple_SET_ITEM(__pyx_t_9, 1+__pyx_t_8, __pyx_v_bytesitem); - __pyx_t_6 = 0; - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_9, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_result = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":492 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - } - - /* "View.MemoryView":497 - * raise ValueError("Unable to convert item to object") - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - /*else:*/ { - __pyx_t_10 = strlen(__pyx_v_self->view.format); - __pyx_t_11 = ((__pyx_t_10 == 1) != 0); - if (__pyx_t_11) { - - /* "View.MemoryView":498 - * else: - * if len(self.view.format) == 1: - * return result[0] # <<<<<<<<<<<<<< - * return result - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_result, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 498, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L6_except_return; - - /* "View.MemoryView":497 - * raise ValueError("Unable to convert item to object") - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - } - - /* "View.MemoryView":499 - * if len(self.view.format) == 1: - * return result[0] - * return result # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_result); - __pyx_r = __pyx_v_result; - goto __pyx_L6_except_return; - } - __pyx_L3_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "View.MemoryView":494 - * try: - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: # <<<<<<<<<<<<<< - * raise ValueError("Unable to convert item to object") - * else: - */ - __Pyx_ErrFetch(&__pyx_t_1, &__pyx_t_5, &__pyx_t_9); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_error); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 494, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = __Pyx_PyErr_GivenExceptionMatches(__pyx_t_1, __pyx_t_6); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_ErrRestore(__pyx_t_1, __pyx_t_5, __pyx_t_9); - __pyx_t_1 = 0; __pyx_t_5 = 0; __pyx_t_9 = 0; - if (__pyx_t_8) { - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_9, &__pyx_t_5, &__pyx_t_1) < 0) __PYX_ERR(1, 494, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_9); - 
__Pyx_GOTREF(__pyx_t_5); - __Pyx_GOTREF(__pyx_t_1); - - /* "View.MemoryView":495 - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - * raise ValueError("Unable to convert item to object") # <<<<<<<<<<<<<< - * else: - * if len(self.view.format) == 1: - */ - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 495, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_Raise(__pyx_t_6, 0, 0, 0); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __PYX_ERR(1, 495, __pyx_L5_except_error) - } - goto __pyx_L5_except_error; - __pyx_L5_except_error:; - - /* "View.MemoryView":492 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L1_error; - __pyx_L6_except_return:; - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L0; - } - - /* "View.MemoryView":485 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesitem); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":501 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_v_struct = NULL; - char __pyx_v_c; - PyObject *__pyx_v_bytesvalue = 0; - Py_ssize_t __pyx_v_i; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - Py_ssize_t __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - char *__pyx_t_11; - char *__pyx_t_12; - char *__pyx_t_13; - char *__pyx_t_14; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":504 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef char c - * cdef bytes bytesvalue - */ - __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 504, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":509 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * 
bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - __pyx_t_2 = PyTuple_Check(__pyx_v_value); - __pyx_t_3 = (__pyx_t_2 != 0); - if (__pyx_t_3) { - - /* "View.MemoryView":510 - * - * if isinstance(value, tuple): - * bytesvalue = struct.pack(self.view.format, *value) # <<<<<<<<<<<<<< - * else: - * bytesvalue = struct.pack(self.view.format, value) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PySequence_Tuple(__pyx_v_value); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = PyNumber_Add(__pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_6, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 510, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":509 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":512 - * bytesvalue = struct.pack(self.view.format, *value) - * else: - * bytesvalue = struct.pack(self.view.format, value) # <<<<<<<<<<<<<< - * - * for i, c in enumerate(bytesvalue): - */ - /*else*/ { - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_7 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_6)) { - PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value}; - __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_6)) { - PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value}; - __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - 
__Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else - #endif - { - __pyx_t_8 = PyTuple_New(2+__pyx_t_7); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - if (__pyx_t_5) { - __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_5); __pyx_t_5 = NULL; - } - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_8, 0+__pyx_t_7, __pyx_t_1); - __Pyx_INCREF(__pyx_v_value); - __Pyx_GIVEREF(__pyx_v_value); - PyTuple_SET_ITEM(__pyx_t_8, 1+__pyx_t_7, __pyx_v_value); - __pyx_t_1 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_8, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 512, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":514 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_9 = 0; - if (unlikely(__pyx_v_bytesvalue == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' is not iterable"); - __PYX_ERR(1, 514, __pyx_L1_error) - } - __Pyx_INCREF(__pyx_v_bytesvalue); - __pyx_t_10 = __pyx_v_bytesvalue; - __pyx_t_12 = PyBytes_AS_STRING(__pyx_t_10); - __pyx_t_13 = (__pyx_t_12 + PyBytes_GET_SIZE(__pyx_t_10)); - for (__pyx_t_14 = __pyx_t_12; __pyx_t_14 < __pyx_t_13; __pyx_t_14++) { - __pyx_t_11 = __pyx_t_14; - __pyx_v_c = (__pyx_t_11[0]); - - /* "View.MemoryView":515 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - __pyx_v_i = __pyx_t_9; - - /* "View.MemoryView":514 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_9 = (__pyx_t_9 + 1); - - /* "View.MemoryView":515 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - (__pyx_v_itemp[__pyx_v_i]) = __pyx_v_c; - } - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "View.MemoryView":501 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("View.MemoryView.memoryview.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesvalue); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":518 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") - */ - -/* Python wrapper */ -static CYTHON_UNUSED int 
__pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - Py_ssize_t *__pyx_t_4; - char *__pyx_t_5; - void *__pyx_t_6; - int __pyx_t_7; - Py_ssize_t __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (__pyx_v_info == NULL) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":519 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - */ - __pyx_t_2 = ((__pyx_v_flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_2 = (__pyx_v_self->view.readonly != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":520 - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<< - * - * if flags & PyBUF_ND: - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__11, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 520, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 520, __pyx_L1_error) - - /* "View.MemoryView":519 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - */ - } - - /* "View.MemoryView":522 - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = self.view.shape - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_ND) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":523 - * - * if flags & PyBUF_ND: - * info.shape = self.view.shape # <<<<<<<<<<<<<< - * else: - * info.shape = NULL - */ - __pyx_t_4 = __pyx_v_self->view.shape; - __pyx_v_info->shape = __pyx_t_4; - - /* "View.MemoryView":522 - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = self.view.shape - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":525 - * info.shape = self.view.shape - * else: - * 
info.shape = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_STRIDES: - */ - /*else*/ { - __pyx_v_info->shape = NULL; - } - __pyx_L6:; - - /* "View.MemoryView":527 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_STRIDES) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":528 - * - * if flags & PyBUF_STRIDES: - * info.strides = self.view.strides # <<<<<<<<<<<<<< - * else: - * info.strides = NULL - */ - __pyx_t_4 = __pyx_v_self->view.strides; - __pyx_v_info->strides = __pyx_t_4; - - /* "View.MemoryView":527 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - goto __pyx_L7; - } - - /* "View.MemoryView":530 - * info.strides = self.view.strides - * else: - * info.strides = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_INDIRECT: - */ - /*else*/ { - __pyx_v_info->strides = NULL; - } - __pyx_L7:; - - /* "View.MemoryView":532 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_INDIRECT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":533 - * - * if flags & PyBUF_INDIRECT: - * info.suboffsets = self.view.suboffsets # <<<<<<<<<<<<<< - * else: - * info.suboffsets = NULL - */ - __pyx_t_4 = __pyx_v_self->view.suboffsets; - __pyx_v_info->suboffsets = __pyx_t_4; - - /* "View.MemoryView":532 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":535 - * info.suboffsets = self.view.suboffsets - * else: - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - /*else*/ { - __pyx_v_info->suboffsets = NULL; - } - __pyx_L8:; - - /* "View.MemoryView":537 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":538 - * - * if flags & PyBUF_FORMAT: - * info.format = self.view.format # <<<<<<<<<<<<<< - * else: - * info.format = NULL - */ - __pyx_t_5 = __pyx_v_self->view.format; - __pyx_v_info->format = __pyx_t_5; - - /* "View.MemoryView":537 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":540 - * info.format = self.view.format - * else: - * info.format = NULL # <<<<<<<<<<<<<< - * - * info.buf = self.view.buf - */ - /*else*/ { - __pyx_v_info->format = NULL; - } - __pyx_L9:; - - /* "View.MemoryView":542 - * info.format = NULL - * - * info.buf = self.view.buf # <<<<<<<<<<<<<< - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - */ - __pyx_t_6 = __pyx_v_self->view.buf; - __pyx_v_info->buf = __pyx_t_6; - - /* "View.MemoryView":543 - * - * info.buf = self.view.buf - * info.ndim = self.view.ndim # <<<<<<<<<<<<<< - * info.itemsize = self.view.itemsize - * info.len = self.view.len - */ - __pyx_t_7 = __pyx_v_self->view.ndim; - __pyx_v_info->ndim = __pyx_t_7; - - /* "View.MemoryView":544 - * info.buf = self.view.buf - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize # <<<<<<<<<<<<<< - * info.len = self.view.len - * info.readonly = self.view.readonly - */ - __pyx_t_8 = __pyx_v_self->view.itemsize; - __pyx_v_info->itemsize = __pyx_t_8; - - /* 
"View.MemoryView":545 - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - * info.len = self.view.len # <<<<<<<<<<<<<< - * info.readonly = self.view.readonly - * info.obj = self - */ - __pyx_t_8 = __pyx_v_self->view.len; - __pyx_v_info->len = __pyx_t_8; - - /* "View.MemoryView":546 - * info.itemsize = self.view.itemsize - * info.len = self.view.len - * info.readonly = self.view.readonly # <<<<<<<<<<<<<< - * info.obj = self - * - */ - __pyx_t_1 = __pyx_v_self->view.readonly; - __pyx_v_info->readonly = __pyx_t_1; - - /* "View.MemoryView":547 - * info.len = self.view.len - * info.readonly = self.view.readonly - * info.obj = self # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":518 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":553 - * - * @property - * def T(self): # <<<<<<<<<<<<<< - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":554 - * @property - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) # <<<<<<<<<<<<<< - * transpose_memslice(&result.from_slice) - * return result - */ - __pyx_t_1 = __pyx_memoryview_copy_object(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 554, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_memoryviewslice_type))))) __PYX_ERR(1, 554, __pyx_L1_error) - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_1); - __pyx_t_1 
= 0; - - /* "View.MemoryView":555 - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_t_2 = __pyx_memslice_transpose((&__pyx_v_result->from_slice)); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 555, __pyx_L1_error) - - /* "View.MemoryView":556 - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - * return result # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":553 - * - * @property - * def T(self): # <<<<<<<<<<<<<< - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.T.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":559 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":560 - * @property - * def base(self): - * return self.obj # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->obj); - __pyx_r = __pyx_v_self->obj; - goto __pyx_L0; - - /* "View.MemoryView":559 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":563 - * - * @property - * def shape(self): # <<<<<<<<<<<<<< - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_length; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t 
*__pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":564 - * @property - * def shape(self): - * return tuple([length for length in self.view.shape[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_4 = __pyx_v_self->view.shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_v_length = (__pyx_t_2[0]); - __pyx_t_5 = PyInt_FromSsize_t(__pyx_v_length); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_t_5 = PyList_AsTuple(((PyObject*)__pyx_t_1)); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "View.MemoryView":563 - * - * @property - * def shape(self): # <<<<<<<<<<<<<< - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.shape.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":567 - * - * @property - * def strides(self): # <<<<<<<<<<<<<< - * if self.view.strides == NULL: - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_stride; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":568 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError("Buffer view does not expose strides") - */ - __pyx_t_1 = ((__pyx_v_self->view.strides == NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":570 - * if self.view.strides == NULL: - * - * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<< - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - */ - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 570, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_2); - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 570, __pyx_L1_error) - - /* "View.MemoryView":568 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError("Buffer view does not expose strides") - */ - } - - /* "View.MemoryView":572 - * raise ValueError("Buffer view does not expose strides") - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = (__pyx_v_self->view.strides + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.strides; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_v_stride = (__pyx_t_3[0]); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_stride); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = PyList_AsTuple(((PyObject*)__pyx_t_2)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "View.MemoryView":567 - * - * @property - * def strides(self): # <<<<<<<<<<<<<< - * if self.view.strides == NULL: - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.strides.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":575 - * - * @property - * def suboffsets(self): # <<<<<<<<<<<<<< - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - Py_ssize_t *__pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":576 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - __pyx_t_1 = ((__pyx_v_self->view.suboffsets == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":577 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in 
self.view.suboffsets[:self.view.ndim]]) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_tuple__13, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":576 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - } - - /* "View.MemoryView":579 - * return (-1,) * self.view.ndim - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = (__pyx_v_self->view.suboffsets + __pyx_v_self->view.ndim); - for (__pyx_t_6 = __pyx_v_self->view.suboffsets; __pyx_t_6 < __pyx_t_5; __pyx_t_6++) { - __pyx_t_4 = __pyx_t_6; - __pyx_v_suboffset = (__pyx_t_4[0]); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_suboffset); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_3, (PyObject*)__pyx_t_2))) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = PyList_AsTuple(((PyObject*)__pyx_t_3)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":575 - * - * @property - * def suboffsets(self): # <<<<<<<<<<<<<< - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.suboffsets.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":582 - * - * @property - * def ndim(self): # <<<<<<<<<<<<<< - * return self.view.ndim - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":583 - * @property - * def ndim(self): - * return self.view.ndim # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 583, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; 
- __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":582 - * - * @property - * def ndim(self): # <<<<<<<<<<<<<< - * return self.view.ndim - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.ndim.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":586 - * - * @property - * def itemsize(self): # <<<<<<<<<<<<<< - * return self.view.itemsize - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":587 - * @property - * def itemsize(self): - * return self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 587, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":586 - * - * @property - * def itemsize(self): # <<<<<<<<<<<<<< - * return self.view.itemsize - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.itemsize.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":590 - * - * @property - * def nbytes(self): # <<<<<<<<<<<<<< - * return self.size * self.view.itemsize - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":591 - * @property - * def nbytes(self): - * return self.size * self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - 
__pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 591, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 591, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 591, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":590 - * - * @property - * def nbytes(self): # <<<<<<<<<<<<<< - * return self.size * self.view.itemsize - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.nbytes.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":594 - * - * @property - * def size(self): # <<<<<<<<<<<<<< - * if self._size is None: - * result = 1 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":595 - * @property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - __pyx_t_1 = (__pyx_v_self->_size == Py_None); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":596 - * def size(self): - * if self._size is None: - * result = 1 # <<<<<<<<<<<<<< - * - * for length in self.view.shape[:self.view.ndim]: - */ - __Pyx_INCREF(__pyx_int_1); - __pyx_v_result = __pyx_int_1; - - /* "View.MemoryView":598 - * result = 1 - * - * for length in self.view.shape[:self.view.ndim]: # <<<<<<<<<<<<<< - * result *= length - * - */ - __pyx_t_4 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.shape; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_t_6 = PyInt_FromSsize_t((__pyx_t_3[0])); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 598, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_6); - __pyx_t_6 = 0; - - /* "View.MemoryView":599 - * - * for length in self.view.shape[:self.view.ndim]: - * result *= length # <<<<<<<<<<<<<< - * - * self._size = result - */ - __pyx_t_6 = PyNumber_InPlaceMultiply(__pyx_v_result, __pyx_v_length); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 599, 
__pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_DECREF_SET(__pyx_v_result, __pyx_t_6);
- __pyx_t_6 = 0;
- }
-
- /* "View.MemoryView":601
- * result *= length
- *
- * self._size = result # <<<<<<<<<<<<<<
- *
- * return self._size
- */
- __Pyx_INCREF(__pyx_v_result);
- __Pyx_GIVEREF(__pyx_v_result);
- __Pyx_GOTREF(__pyx_v_self->_size);
- __Pyx_DECREF(__pyx_v_self->_size);
- __pyx_v_self->_size = __pyx_v_result;
-
- /* "View.MemoryView":595
- * @property
- * def size(self):
- * if self._size is None: # <<<<<<<<<<<<<<
- * result = 1
- *
- */
- }
-
- /* "View.MemoryView":603
- * self._size = result
- *
- * return self._size # <<<<<<<<<<<<<<
- *
- * def __len__(self):
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v_self->_size);
- __pyx_r = __pyx_v_self->_size;
- goto __pyx_L0;
-
- /* "View.MemoryView":594
- *
- * @property
- * def size(self): # <<<<<<<<<<<<<<
- * if self._size is None:
- * result = 1
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_AddTraceback("View.MemoryView.memoryview.size.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_result);
- __Pyx_XDECREF(__pyx_v_length);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":605
- * return self._size
- *
- * def __len__(self): # <<<<<<<<<<<<<<
- * if self.view.ndim >= 1:
- * return self.view.shape[0]
- */
-
-/* Python wrapper */
-static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self); /*proto*/
-static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self) {
- Py_ssize_t __pyx_r;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__len__ (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self) {
- Py_ssize_t __pyx_r;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- __Pyx_RefNannySetupContext("__len__", 0);
-
- /* "View.MemoryView":606
- *
- * def __len__(self):
- * if self.view.ndim >= 1: # <<<<<<<<<<<<<<
- * return self.view.shape[0]
- *
- */
- __pyx_t_1 = ((__pyx_v_self->view.ndim >= 1) != 0);
- if (__pyx_t_1) {
-
- /* "View.MemoryView":607
- * def __len__(self):
- * if self.view.ndim >= 1:
- * return self.view.shape[0] # <<<<<<<<<<<<<<
- *
- * return 0
- */
- __pyx_r = (__pyx_v_self->view.shape[0]);
- goto __pyx_L0;
-
- /* "View.MemoryView":606
- *
- * def __len__(self):
- * if self.view.ndim >= 1: # <<<<<<<<<<<<<<
- * return self.view.shape[0]
- *
- */
- }
-
- /* "View.MemoryView":609
- * return self.view.shape[0]
- *
- * return 0 # <<<<<<<<<<<<<<
- *
- * def __repr__(self):
- */
- __pyx_r = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":605
- * return self._size
- *
- * def __len__(self): # <<<<<<<<<<<<<<
- * if self.view.ndim >= 1:
- * return self.view.shape[0]
- */
-
- /* function exit code */
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":611
- * return 0
- *
- * def __repr__(self): # <<<<<<<<<<<<<<
- * return "<MemoryView of %r at 0x%x>" % (self.base.__class__.__name__,
- * id(self))
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self); /*proto*/
-static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannyDeclarations
__Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":612 - * - * def __repr__(self): - * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":613 - * def __repr__(self): - * return "" % (self.base.__class__.__name__, - * id(self)) # <<<<<<<<<<<<<< - * - * def __str__(self): - */ - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_id, ((PyObject *)__pyx_v_self)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 613, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "View.MemoryView":612 - * - * def __repr__(self): - * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":611 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__, - * id(self)) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__repr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":615 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__,) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__str__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ 
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__str__", 0);
-
- /* "View.MemoryView":616
- *
- * def __str__(self):
- * return "<MemoryView of %r object>" % (self.base.__class__.__name__,) # <<<<<<<<<<<<<<
- *
- *
- */
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1);
- __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_object, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_r = __pyx_t_1;
- __pyx_t_1 = 0;
- goto __pyx_L0;
-
- /* "View.MemoryView":615
- * id(self))
- *
- * def __str__(self): # <<<<<<<<<<<<<<
- * return "<MemoryView of %r object>" % (self.base.__class__.__name__,)
- *
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_AddTraceback("View.MemoryView.memoryview.__str__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "View.MemoryView":619
- *
- *
- * def is_c_contig(self): # <<<<<<<<<<<<<<
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- */
-
-/* Python wrapper */
-static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
-static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("is_c_contig (wrapper)", 0);
- __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(((struct __pyx_memoryview_obj *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self) {
- __Pyx_memviewslice *__pyx_v_mslice;
- __Pyx_memviewslice __pyx_v_tmp;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- __Pyx_memviewslice *__pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("is_c_contig", 0);
-
- /* "View.MemoryView":622
- * cdef __Pyx_memviewslice *mslice
- * cdef __Pyx_memviewslice tmp
- * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<<
- * return slice_is_contig(mslice[0], 'C', self.view.ndim)
- *
- */
- __pyx_t_1 =
__pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 622, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":623 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'C', self.view.ndim) # <<<<<<<<<<<<<< - * - * def is_f_contig(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'C', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 623, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":619 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_c_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":625 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_f_contig (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice *__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_f_contig", 0); - - /* "View.MemoryView":628 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 628, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":629 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'F', self.view.ndim) # <<<<<<<<<<<<<< - * - * def copy(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'F', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 629, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":625 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - 
- /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_f_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":631 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_mslice; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy", 0); - - /* "View.MemoryView":633 - * def copy(self): - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &mslice) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_F_CONTIGUOUS)); - - /* "View.MemoryView":635 - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - * - * slice_copy(self, &mslice) # <<<<<<<<<<<<<< - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_mslice)); - - /* "View.MemoryView":636 - * - * slice_copy(self, &mslice) - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_C_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_mslice), ((char *)"c"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_C_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 636, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":641 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &mslice) # <<<<<<<<<<<<<< - * - * def copy_fortran(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_mslice)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 641, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":631 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":643 - * return 
memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy_fortran (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy_fortran", 0); - - /* "View.MemoryView":645 - * def copy_fortran(self): - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &src) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_C_CONTIGUOUS)); - - /* "View.MemoryView":647 - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - * - * slice_copy(self, &src) # <<<<<<<<<<<<<< - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_src)); - - /* "View.MemoryView":648 - * - * slice_copy(self, &src) - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_F_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_src), ((char *)"fortran"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_F_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 648, __pyx_L1_error) - __pyx_v_dst = __pyx_t_1; - - /* "View.MemoryView":653 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &dst) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_dst)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 653, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":643 - * return memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy_fortran", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject 
*unused); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryview___reduce_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryview_2__setstate_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":657 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - -static PyObject *__pyx_memoryview_new(PyObject *__pyx_v_o, int __pyx_v_flags, int __pyx_v_dtype_is_object, __Pyx_TypeInfo *__pyx_v_typeinfo) { - struct __pyx_memoryview_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_cwrapper", 0); - - /* "View.MemoryView":658 - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) # <<<<<<<<<<<<<< - * result.typeinfo = typeinfo - * return result - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_o); - __Pyx_GIVEREF(__pyx_v_o); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_o); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryview_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":659 - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_v_result->typeinfo = __pyx_v_typeinfo; - - /* "View.MemoryView":660 - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_check') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":657 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint 
dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":663 - * - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *__pyx_v_o) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("memoryview_check", 0); - - /* "View.MemoryView":664 - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): - * return isinstance(o, memoryview) # <<<<<<<<<<<<<< - * - * cdef tuple _unellipsify(object index, int ndim): - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_o, __pyx_memoryview_type); - __pyx_r = __pyx_t_1; - goto __pyx_L0; - - /* "View.MemoryView":663 - * - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":666 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - -static PyObject *_unellipsify(PyObject *__pyx_v_index, int __pyx_v_ndim) { - PyObject *__pyx_v_tup = NULL; - PyObject *__pyx_v_result = NULL; - int __pyx_v_have_slices; - int __pyx_v_seen_ellipsis; - CYTHON_UNUSED PyObject *__pyx_v_idx = NULL; - PyObject *__pyx_v_item = NULL; - Py_ssize_t __pyx_v_nslices; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - Py_ssize_t __pyx_t_5; - PyObject *(*__pyx_t_6)(PyObject *); - PyObject *__pyx_t_7 = NULL; - Py_ssize_t __pyx_t_8; - int __pyx_t_9; - int __pyx_t_10; - PyObject *__pyx_t_11 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_unellipsify", 0); - - /* "View.MemoryView":671 - * full slices. - * """ - * if not isinstance(index, tuple): # <<<<<<<<<<<<<< - * tup = (index,) - * else: - */ - __pyx_t_1 = PyTuple_Check(__pyx_v_index); - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":672 - * """ - * if not isinstance(index, tuple): - * tup = (index,) # <<<<<<<<<<<<<< - * else: - * tup = index - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 672, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_index); - __Pyx_GIVEREF(__pyx_v_index); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_index); - __pyx_v_tup = __pyx_t_3; - __pyx_t_3 = 0; - - /* "View.MemoryView":671 - * full slices. 
- * """ - * if not isinstance(index, tuple): # <<<<<<<<<<<<<< - * tup = (index,) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":674 - * tup = (index,) - * else: - * tup = index # <<<<<<<<<<<<<< - * - * result = [] - */ - /*else*/ { - __Pyx_INCREF(__pyx_v_index); - __pyx_v_tup = __pyx_v_index; - } - __pyx_L3:; - - /* "View.MemoryView":676 - * tup = index - * - * result = [] # <<<<<<<<<<<<<< - * have_slices = False - * seen_ellipsis = False - */ - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 676, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_result = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":677 - * - * result = [] - * have_slices = False # <<<<<<<<<<<<<< - * seen_ellipsis = False - * for idx, item in enumerate(tup): - */ - __pyx_v_have_slices = 0; - - /* "View.MemoryView":678 - * result = [] - * have_slices = False - * seen_ellipsis = False # <<<<<<<<<<<<<< - * for idx, item in enumerate(tup): - * if item is Ellipsis: - */ - __pyx_v_seen_ellipsis = 0; - - /* "View.MemoryView":679 - * have_slices = False - * seen_ellipsis = False - * for idx, item in enumerate(tup): # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - __Pyx_INCREF(__pyx_int_0); - __pyx_t_3 = __pyx_int_0; - if (likely(PyList_CheckExact(__pyx_v_tup)) || PyTuple_CheckExact(__pyx_v_tup)) { - __pyx_t_4 = __pyx_v_tup; __Pyx_INCREF(__pyx_t_4); __pyx_t_5 = 0; - __pyx_t_6 = NULL; - } else { - __pyx_t_5 = -1; __pyx_t_4 = PyObject_GetIter(__pyx_v_tup); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = Py_TYPE(__pyx_t_4)->tp_iternext; if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 679, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_6)) { - if (likely(PyList_CheckExact(__pyx_t_4))) { - if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_7 = PyList_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 679, __pyx_L1_error) - #else - __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - } else { - if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_7 = PyTuple_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 679, __pyx_L1_error) - #else - __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - } - } else { - __pyx_t_7 = __pyx_t_6(__pyx_t_4); - if (unlikely(!__pyx_t_7)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 679, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_7); - } - __Pyx_XDECREF_SET(__pyx_v_item, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_INCREF(__pyx_t_3); - __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_3); - __pyx_t_7 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); - __pyx_t_3 = __pyx_t_7; - __pyx_t_7 = 0; - - /* "View.MemoryView":680 - * seen_ellipsis = False - * for idx, item in enumerate(tup): - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * 
result.extend([slice(None)] * (ndim - len(tup) + 1)) - */ - __pyx_t_2 = (__pyx_v_item == __pyx_builtin_Ellipsis); - __pyx_t_1 = (__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":681 - * for idx, item in enumerate(tup): - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True - */ - __pyx_t_1 = ((!(__pyx_v_seen_ellipsis != 0)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":682 - * if item is Ellipsis: - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<< - * seen_ellipsis = True - * else: - */ - __pyx_t_8 = PyObject_Length(__pyx_v_tup); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(1, 682, __pyx_L1_error) - __pyx_t_7 = PyList_New(1 * ((((__pyx_v_ndim - __pyx_t_8) + 1)<0) ? 0:((__pyx_v_ndim - __pyx_t_8) + 1))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 682, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < ((__pyx_v_ndim - __pyx_t_8) + 1); __pyx_temp++) { - __Pyx_INCREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - PyList_SET_ITEM(__pyx_t_7, __pyx_temp, __pyx_slice__16); - } - } - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_7); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 682, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "View.MemoryView":683 - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True # <<<<<<<<<<<<<< - * else: - * result.append(slice(None)) - */ - __pyx_v_seen_ellipsis = 1; - - /* "View.MemoryView":681 - * for idx, item in enumerate(tup): - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True - */ - goto __pyx_L7; - } - - /* "View.MemoryView":685 - * seen_ellipsis = True - * else: - * result.append(slice(None)) # <<<<<<<<<<<<<< - * have_slices = True - * else: - */ - /*else*/ { - __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_slice__16); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 685, __pyx_L1_error) - } - __pyx_L7:; - - /* "View.MemoryView":686 - * else: - * result.append(slice(None)) - * have_slices = True # <<<<<<<<<<<<<< - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): - */ - __pyx_v_have_slices = 1; - - /* "View.MemoryView":680 - * seen_ellipsis = False - * for idx, item in enumerate(tup): - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - */ - goto __pyx_L6; - } - - /* "View.MemoryView":688 - * have_slices = True - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - */ - /*else*/ { - __pyx_t_2 = PySlice_Check(__pyx_v_item); - __pyx_t_10 = ((!(__pyx_t_2 != 0)) != 0); - if (__pyx_t_10) { - } else { - __pyx_t_1 = __pyx_t_10; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_10 = ((!(PyIndex_Check(__pyx_v_item) != 0)) != 0); - __pyx_t_1 = __pyx_t_10; - __pyx_L9_bool_binop_done:; - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":689 - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): - * raise TypeError("Cannot index with type '%s'" % type(item)) # <<<<<<<<<<<<<< - * - * have_slices = have_slices or isinstance(item, slice) - */ - __pyx_t_7 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Cannot_index_with_type_s, ((PyObject 
*)Py_TYPE(__pyx_v_item))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 689, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_builtin_TypeError, __pyx_t_7); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 689, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_Raise(__pyx_t_11, 0, 0, 0); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __PYX_ERR(1, 689, __pyx_L1_error) - - /* "View.MemoryView":688 - * have_slices = True - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - */ - } - - /* "View.MemoryView":691 - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - * have_slices = have_slices or isinstance(item, slice) # <<<<<<<<<<<<<< - * result.append(item) - * - */ - __pyx_t_10 = (__pyx_v_have_slices != 0); - if (!__pyx_t_10) { - } else { - __pyx_t_1 = __pyx_t_10; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_10 = PySlice_Check(__pyx_v_item); - __pyx_t_2 = (__pyx_t_10 != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L11_bool_binop_done:; - __pyx_v_have_slices = __pyx_t_1; - - /* "View.MemoryView":692 - * - * have_slices = have_slices or isinstance(item, slice) - * result.append(item) # <<<<<<<<<<<<<< - * - * nslices = ndim - len(result) - */ - __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_v_item); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 692, __pyx_L1_error) - } - __pyx_L6:; - - /* "View.MemoryView":679 - * have_slices = False - * seen_ellipsis = False - * for idx, item in enumerate(tup): # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":694 - * result.append(item) - * - * nslices = ndim - len(result) # <<<<<<<<<<<<<< - * if nslices: - * result.extend([slice(None)] * nslices) - */ - __pyx_t_5 = PyList_GET_SIZE(__pyx_v_result); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(1, 694, __pyx_L1_error) - __pyx_v_nslices = (__pyx_v_ndim - __pyx_t_5); - - /* "View.MemoryView":695 - * - * nslices = ndim - len(result) - * if nslices: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * nslices) - * - */ - __pyx_t_1 = (__pyx_v_nslices != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":696 - * nslices = ndim - len(result) - * if nslices: - * result.extend([slice(None)] * nslices) # <<<<<<<<<<<<<< - * - * return have_slices or nslices, tuple(result) - */ - __pyx_t_3 = PyList_New(1 * ((__pyx_v_nslices<0) ? 
0:__pyx_v_nslices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 696, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < __pyx_v_nslices; __pyx_temp++) { - __Pyx_INCREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - PyList_SET_ITEM(__pyx_t_3, __pyx_temp, __pyx_slice__16); - } - } - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_3); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 696, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":695 - * - * nslices = ndim - len(result) - * if nslices: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * nslices) - * - */ - } - - /* "View.MemoryView":698 - * result.extend([slice(None)] * nslices) - * - * return have_slices or nslices, tuple(result) # <<<<<<<<<<<<<< - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - */ - __Pyx_XDECREF(__pyx_r); - if (!__pyx_v_have_slices) { - } else { - __pyx_t_4 = __Pyx_PyBool_FromLong(__pyx_v_have_slices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L14_bool_binop_done; - } - __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_nslices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __pyx_t_4; - __pyx_t_4 = 0; - __pyx_L14_bool_binop_done:; - __pyx_t_4 = PyList_AsTuple(__pyx_v_result); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_t_4); - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_r = ((PyObject*)__pyx_t_11); - __pyx_t_11 = 0; - goto __pyx_L0; - - /* "View.MemoryView":666 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("View.MemoryView._unellipsify", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_tup); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_idx); - __Pyx_XDECREF(__pyx_v_item); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":700 - * return have_slices or nslices, tuple(result) - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - -static PyObject *assert_direct_dimensions(Py_ssize_t *__pyx_v_suboffsets, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assert_direct_dimensions", 0); - - /* "View.MemoryView":701 - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") - */ - 
__pyx_t_2 = (__pyx_v_suboffsets + __pyx_v_ndim); - for (__pyx_t_3 = __pyx_v_suboffsets; __pyx_t_3 < __pyx_t_2; __pyx_t_3++) { - __pyx_t_1 = __pyx_t_3; - __pyx_v_suboffset = (__pyx_t_1[0]); - - /* "View.MemoryView":702 - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError("Indirect dimensions not supported") - * - */ - __pyx_t_4 = ((__pyx_v_suboffset >= 0) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":703 - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__17, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 703, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_Raise(__pyx_t_5, 0, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __PYX_ERR(1, 703, __pyx_L1_error) - - /* "View.MemoryView":702 - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError("Indirect dimensions not supported") - * - */ - } - } - - /* "View.MemoryView":700 - * return have_slices or nslices, tuple(result) - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.assert_direct_dimensions", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":710 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *__pyx_v_memview, PyObject *__pyx_v_indices) { - int __pyx_v_new_ndim; - int __pyx_v_suboffset_dim; - int __pyx_v_dim; - __Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - __Pyx_memviewslice *__pyx_v_p_src; - struct __pyx_memoryviewslice_obj *__pyx_v_memviewsliceobj = 0; - __Pyx_memviewslice *__pyx_v_p_dst; - int *__pyx_v_p_suboffset_dim; - Py_ssize_t __pyx_v_start; - Py_ssize_t __pyx_v_stop; - Py_ssize_t __pyx_v_step; - int __pyx_v_have_start; - int __pyx_v_have_stop; - int __pyx_v_have_step; - PyObject *__pyx_v_index = NULL; - struct __pyx_memoryview_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - struct __pyx_memoryview_obj *__pyx_t_4; - char *__pyx_t_5; - int __pyx_t_6; - Py_ssize_t __pyx_t_7; - PyObject *(*__pyx_t_8)(PyObject *); - PyObject *__pyx_t_9 = NULL; - Py_ssize_t __pyx_t_10; - int __pyx_t_11; - Py_ssize_t __pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memview_slice", 0); - - /* "View.MemoryView":711 - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): - * cdef int new_ndim = 0, suboffset_dim = -1, dim # <<<<<<<<<<<<<< - * cdef bint negative_step - * cdef __Pyx_memviewslice src, dst - */ - __pyx_v_new_ndim = 0; - __pyx_v_suboffset_dim = -1; - - /* 
"View.MemoryView":718 - * - * - * memset(&dst, 0, sizeof(dst)) # <<<<<<<<<<<<<< - * - * cdef _memoryviewslice memviewsliceobj - */ - (void)(memset((&__pyx_v_dst), 0, (sizeof(__pyx_v_dst)))); - - /* "View.MemoryView":722 - * cdef _memoryviewslice memviewsliceobj - * - * assert memview.view.ndim > 0 # <<<<<<<<<<<<<< - * - * if isinstance(memview, _memoryviewslice): - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(!Py_OptimizeFlag)) { - if (unlikely(!((__pyx_v_memview->view.ndim > 0) != 0))) { - PyErr_SetNone(PyExc_AssertionError); - __PYX_ERR(1, 722, __pyx_L1_error) - } - } - #endif - - /* "View.MemoryView":724 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":725 - * - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview # <<<<<<<<<<<<<< - * p_src = &memviewsliceobj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 725, __pyx_L1_error) - __pyx_t_3 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_3); - __pyx_v_memviewsliceobj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":726 - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, &src) - */ - __pyx_v_p_src = (&__pyx_v_memviewsliceobj->from_slice); - - /* "View.MemoryView":724 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - goto __pyx_L3; - } - - /* "View.MemoryView":728 - * p_src = &memviewsliceobj.from_slice - * else: - * slice_copy(memview, &src) # <<<<<<<<<<<<<< - * p_src = &src - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_src)); - - /* "View.MemoryView":729 - * else: - * slice_copy(memview, &src) - * p_src = &src # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_p_src = (&__pyx_v_src); - } - __pyx_L3:; - - /* "View.MemoryView":735 - * - * - * dst.memview = p_src.memview # <<<<<<<<<<<<<< - * dst.data = p_src.data - * - */ - __pyx_t_4 = __pyx_v_p_src->memview; - __pyx_v_dst.memview = __pyx_t_4; - - /* "View.MemoryView":736 - * - * dst.memview = p_src.memview - * dst.data = p_src.data # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __pyx_v_p_src->data; - __pyx_v_dst.data = __pyx_t_5; - - /* "View.MemoryView":741 - * - * - * cdef __Pyx_memviewslice *p_dst = &dst # <<<<<<<<<<<<<< - * cdef int *p_suboffset_dim = &suboffset_dim - * cdef Py_ssize_t start, stop, step - */ - __pyx_v_p_dst = (&__pyx_v_dst); - - /* "View.MemoryView":742 - * - * cdef __Pyx_memviewslice *p_dst = &dst - * cdef int *p_suboffset_dim = &suboffset_dim # <<<<<<<<<<<<<< - * cdef Py_ssize_t start, stop, step - * cdef bint have_start, have_stop, have_step - */ - __pyx_v_p_suboffset_dim = (&__pyx_v_suboffset_dim); - - /* "View.MemoryView":746 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * slice_memviewslice( - */ - __pyx_t_6 = 0; - if (likely(PyList_CheckExact(__pyx_v_indices)) || PyTuple_CheckExact(__pyx_v_indices)) { - 
__pyx_t_3 = __pyx_v_indices; __Pyx_INCREF(__pyx_t_3); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - } else { - __pyx_t_7 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_v_indices); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 746, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 746, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_8)) { - if (likely(PyList_CheckExact(__pyx_t_3))) { - if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_9 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 746, __pyx_L1_error) - #else - __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 746, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - } else { - if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_9 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 746, __pyx_L1_error) - #else - __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 746, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - } - } else { - __pyx_t_9 = __pyx_t_8(__pyx_t_3); - if (unlikely(!__pyx_t_9)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 746, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_9); - } - __Pyx_XDECREF_SET(__pyx_v_index, __pyx_t_9); - __pyx_t_9 = 0; - __pyx_v_dim = __pyx_t_6; - __pyx_t_6 = (__pyx_t_6 + 1); - - /* "View.MemoryView":747 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * slice_memviewslice( - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - */ - __pyx_t_2 = (PyIndex_Check(__pyx_v_index) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":751 - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - * index, 0, 0, # start, stop, step # <<<<<<<<<<<<<< - * 0, 0, 0, # have_{start,stop,step} - * False) - */ - __pyx_t_10 = __Pyx_PyIndex_AsSsize_t(__pyx_v_index); if (unlikely((__pyx_t_10 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 751, __pyx_L1_error) - - /* "View.MemoryView":748 - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_t_10, 0, 0, 0, 0, 0, 0); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 748, __pyx_L1_error) - - /* "View.MemoryView":747 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * slice_memviewslice( - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - */ - goto __pyx_L6; - } - - /* "View.MemoryView":754 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - __pyx_t_2 = (__pyx_v_index == Py_None); - __pyx_t_1 = 
(__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":755 - * False) - * elif index is None: - * p_dst.shape[new_ndim] = 1 # <<<<<<<<<<<<<< - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - */ - (__pyx_v_p_dst->shape[__pyx_v_new_ndim]) = 1; - - /* "View.MemoryView":756 - * elif index is None: - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 # <<<<<<<<<<<<<< - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 - */ - (__pyx_v_p_dst->strides[__pyx_v_new_ndim]) = 0; - - /* "View.MemoryView":757 - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 # <<<<<<<<<<<<<< - * new_ndim += 1 - * else: - */ - (__pyx_v_p_dst->suboffsets[__pyx_v_new_ndim]) = -1L; - - /* "View.MemoryView":758 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 # <<<<<<<<<<<<<< - * else: - * start = index.start or 0 - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - - /* "View.MemoryView":754 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - goto __pyx_L6; - } - - /* "View.MemoryView":760 - * new_ndim += 1 - * else: - * start = index.start or 0 # <<<<<<<<<<<<<< - * stop = index.stop or 0 - * step = index.step or 0 - */ - /*else*/ { - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 760, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 760, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 760, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L7_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L7_bool_binop_done:; - __pyx_v_start = __pyx_t_10; - - /* "View.MemoryView":761 - * else: - * start = index.start or 0 - * stop = index.stop or 0 # <<<<<<<<<<<<<< - * step = index.step or 0 - * - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 761, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 761, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 761, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L9_bool_binop_done:; - __pyx_v_stop = __pyx_t_10; - - /* "View.MemoryView":762 - * start = index.start or 0 - * stop = index.stop or 0 - * step = index.step or 0 # <<<<<<<<<<<<<< - * - * have_start = index.start is not None - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 762, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 762, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 762, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - 
__Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L11_bool_binop_done:; - __pyx_v_step = __pyx_t_10; - - /* "View.MemoryView":764 - * step = index.step or 0 - * - * have_start = index.start is not None # <<<<<<<<<<<<<< - * have_stop = index.stop is not None - * have_step = index.step is not None - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 764, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_start = __pyx_t_1; - - /* "View.MemoryView":765 - * - * have_start = index.start is not None - * have_stop = index.stop is not None # <<<<<<<<<<<<<< - * have_step = index.step is not None - * - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 765, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_stop = __pyx_t_1; - - /* "View.MemoryView":766 - * have_start = index.start is not None - * have_stop = index.stop is not None - * have_step = index.step is not None # <<<<<<<<<<<<<< - * - * slice_memviewslice( - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 766, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_step = __pyx_t_1; - - /* "View.MemoryView":768 - * have_step = index.step is not None - * - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_v_start, __pyx_v_stop, __pyx_v_step, __pyx_v_have_start, __pyx_v_have_stop, __pyx_v_have_step, 1); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 768, __pyx_L1_error) - - /* "View.MemoryView":774 - * have_start, have_stop, have_step, - * True) - * new_ndim += 1 # <<<<<<<<<<<<<< - * - * if isinstance(memview, _memoryviewslice): - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - } - __pyx_L6:; - - /* "View.MemoryView":746 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * slice_memviewslice( - */ - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":776 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":777 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __Pyx_XDECREF(((PyObject *)__pyx_r)); - - /* "View.MemoryView":778 - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, # <<<<<<<<<<<<<< - * memviewsliceobj.to_dtype_func, - * memview.dtype_is_object) - */ - if (unlikely(!__pyx_v_memviewsliceobj)) 
{ __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 778, __pyx_L1_error) } - - /* "View.MemoryView":779 - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * else: - */ - if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 779, __pyx_L1_error) } - - /* "View.MemoryView":777 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, __pyx_v_memviewsliceobj->to_object_func, __pyx_v_memviewsliceobj->to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 777, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 777, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3); - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":776 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - } - - /* "View.MemoryView":782 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - /*else*/ { - __Pyx_XDECREF(((PyObject *)__pyx_r)); - - /* "View.MemoryView":783 - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, NULL, NULL, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 782, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "View.MemoryView":782 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 782, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3); - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":710 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("View.MemoryView.memview_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_memviewsliceobj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":807 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *__pyx_v_dst, Py_ssize_t __pyx_v_shape, Py_ssize_t __pyx_v_stride, Py_ssize_t __pyx_v_suboffset, int __pyx_v_dim, int __pyx_v_new_ndim, int *__pyx_v_suboffset_dim, Py_ssize_t __pyx_v_start, Py_ssize_t __pyx_v_stop, Py_ssize_t __pyx_v_step, int 
__pyx_v_have_start, int __pyx_v_have_stop, int __pyx_v_have_step, int __pyx_v_is_slice) { - Py_ssize_t __pyx_v_new_shape; - int __pyx_v_negative_step; - int __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":827 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - __pyx_t_1 = ((!(__pyx_v_is_slice != 0)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":829 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - __pyx_t_1 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":830 - * - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if not 0 <= start < shape: - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":829 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - } - - /* "View.MemoryView":831 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - __pyx_t_1 = (0 <= __pyx_v_start); - if (__pyx_t_1) { - __pyx_t_1 = (__pyx_v_start < __pyx_v_shape); - } - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":832 - * start += shape - * if not 0 <= start < shape: - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) # <<<<<<<<<<<<<< - * else: - * - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"Index out of bounds (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 832, __pyx_L1_error) - - /* "View.MemoryView":831 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - } - - /* "View.MemoryView":827 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":835 - * else: - * - * negative_step = have_step != 0 and step < 0 # <<<<<<<<<<<<<< - * - * if have_step and step == 0: - */ - /*else*/ { - __pyx_t_1 = ((__pyx_v_have_step != 0) != 0); - if (__pyx_t_1) { - } else { - __pyx_t_2 = __pyx_t_1; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_1 = ((__pyx_v_step < 0) != 0); - __pyx_t_2 = __pyx_t_1; - __pyx_L6_bool_binop_done:; - __pyx_v_negative_step = __pyx_t_2; - - /* "View.MemoryView":837 - * negative_step = have_step != 0 and step < 0 - * - * if have_step and step == 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) - * - */ - __pyx_t_1 = (__pyx_v_have_step != 0); - if (__pyx_t_1) { - } else { - __pyx_t_2 = __pyx_t_1; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_1 = ((__pyx_v_step == 0) != 0); - __pyx_t_2 = __pyx_t_1; - __pyx_L9_bool_binop_done:; - if (__pyx_t_2) { - - /* "View.MemoryView":838 - * - * if have_step and step == 0: - * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Step may not be zero (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 838, __pyx_L1_error) - - /* "View.MemoryView":837 - * negative_step = have_step != 0 and step < 0 - * - * if have_step and step == 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, 
"Step may not be zero (axis %d)", dim) - * - */ - } - - /* "View.MemoryView":841 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - __pyx_t_2 = (__pyx_v_have_start != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":842 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - __pyx_t_2 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":843 - * if have_start: - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if start < 0: - * start = 0 - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":844 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - __pyx_t_2 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":845 - * start += shape - * if start < 0: - * start = 0 # <<<<<<<<<<<<<< - * elif start >= shape: - * if negative_step: - */ - __pyx_v_start = 0; - - /* "View.MemoryView":844 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - } - - /* "View.MemoryView":842 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - goto __pyx_L12; - } - - /* "View.MemoryView":846 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - __pyx_t_2 = ((__pyx_v_start >= __pyx_v_shape) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":847 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":848 - * elif start >= shape: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = shape - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":847 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - goto __pyx_L14; - } - - /* "View.MemoryView":850 - * start = shape - 1 - * else: - * start = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - /*else*/ { - __pyx_v_start = __pyx_v_shape; - } - __pyx_L14:; - - /* "View.MemoryView":846 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - } - __pyx_L12:; - - /* "View.MemoryView":841 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - goto __pyx_L11; - } - - /* "View.MemoryView":852 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - /*else*/ { - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":853 - * else: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = 0 - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":852 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - goto __pyx_L15; - } - - /* "View.MemoryView":855 - * start = shape - 1 - * else: - * start = 0 # <<<<<<<<<<<<<< - * - * if have_stop: - */ - /*else*/ { - __pyx_v_start = 0; - } - __pyx_L15:; - } - __pyx_L11:; - - /* "View.MemoryView":857 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - __pyx_t_2 = (__pyx_v_have_stop != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":858 - * - * if have_stop: - * if stop < 0: # 
<<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - __pyx_t_2 = ((__pyx_v_stop < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":859 - * if have_stop: - * if stop < 0: - * stop += shape # <<<<<<<<<<<<<< - * if stop < 0: - * stop = 0 - */ - __pyx_v_stop = (__pyx_v_stop + __pyx_v_shape); - - /* "View.MemoryView":860 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - __pyx_t_2 = ((__pyx_v_stop < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":861 - * stop += shape - * if stop < 0: - * stop = 0 # <<<<<<<<<<<<<< - * elif stop > shape: - * stop = shape - */ - __pyx_v_stop = 0; - - /* "View.MemoryView":860 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - } - - /* "View.MemoryView":858 - * - * if have_stop: - * if stop < 0: # <<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - goto __pyx_L17; - } - - /* "View.MemoryView":862 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - __pyx_t_2 = ((__pyx_v_stop > __pyx_v_shape) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":863 - * stop = 0 - * elif stop > shape: - * stop = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - __pyx_v_stop = __pyx_v_shape; - - /* "View.MemoryView":862 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - } - __pyx_L17:; - - /* "View.MemoryView":857 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - goto __pyx_L16; - } - - /* "View.MemoryView":865 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - /*else*/ { - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":866 - * else: - * if negative_step: - * stop = -1 # <<<<<<<<<<<<<< - * else: - * stop = shape - */ - __pyx_v_stop = -1L; - - /* "View.MemoryView":865 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - goto __pyx_L19; - } - - /* "View.MemoryView":868 - * stop = -1 - * else: - * stop = shape # <<<<<<<<<<<<<< - * - * if not have_step: - */ - /*else*/ { - __pyx_v_stop = __pyx_v_shape; - } - __pyx_L19:; - } - __pyx_L16:; - - /* "View.MemoryView":870 - * stop = shape - * - * if not have_step: # <<<<<<<<<<<<<< - * step = 1 - * - */ - __pyx_t_2 = ((!(__pyx_v_have_step != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":871 - * - * if not have_step: - * step = 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_step = 1; - - /* "View.MemoryView":870 - * stop = shape - * - * if not have_step: # <<<<<<<<<<<<<< - * step = 1 - * - */ - } - - /* "View.MemoryView":875 - * - * with cython.cdivision(True): - * new_shape = (stop - start) // step # <<<<<<<<<<<<<< - * - * if (stop - start) - step * new_shape: - */ - __pyx_v_new_shape = ((__pyx_v_stop - __pyx_v_start) / __pyx_v_step); - - /* "View.MemoryView":877 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * new_shape += 1 - * - */ - __pyx_t_2 = (((__pyx_v_stop - __pyx_v_start) - (__pyx_v_step * __pyx_v_new_shape)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":878 - * - * if (stop - start) - step * new_shape: - * new_shape += 1 # <<<<<<<<<<<<<< - * - * if new_shape < 0: - */ - __pyx_v_new_shape = (__pyx_v_new_shape + 1); - - /* "View.MemoryView":877 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * 
new_shape += 1 - * - */ - } - - /* "View.MemoryView":880 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - __pyx_t_2 = ((__pyx_v_new_shape < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":881 - * - * if new_shape < 0: - * new_shape = 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_new_shape = 0; - - /* "View.MemoryView":880 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - } - - /* "View.MemoryView":884 - * - * - * dst.strides[new_ndim] = stride * step # <<<<<<<<<<<<<< - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset - */ - (__pyx_v_dst->strides[__pyx_v_new_ndim]) = (__pyx_v_stride * __pyx_v_step); - - /* "View.MemoryView":885 - * - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape # <<<<<<<<<<<<<< - * dst.suboffsets[new_ndim] = suboffset - * - */ - (__pyx_v_dst->shape[__pyx_v_new_ndim]) = __pyx_v_new_shape; - - /* "View.MemoryView":886 - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_dst->suboffsets[__pyx_v_new_ndim]) = __pyx_v_suboffset; - } - __pyx_L3:; - - /* "View.MemoryView":889 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - __pyx_t_2 = (((__pyx_v_suboffset_dim[0]) < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":890 - * - * if suboffset_dim[0] < 0: - * dst.data += start * stride # <<<<<<<<<<<<<< - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride - */ - __pyx_v_dst->data = (__pyx_v_dst->data + (__pyx_v_start * __pyx_v_stride)); - - /* "View.MemoryView":889 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - goto __pyx_L23; - } - - /* "View.MemoryView":892 - * dst.data += start * stride - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride # <<<<<<<<<<<<<< - * - * if suboffset >= 0: - */ - /*else*/ { - __pyx_t_3 = (__pyx_v_suboffset_dim[0]); - (__pyx_v_dst->suboffsets[__pyx_t_3]) = ((__pyx_v_dst->suboffsets[__pyx_t_3]) + (__pyx_v_start * __pyx_v_stride)); - } - __pyx_L23:; - - /* "View.MemoryView":894 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":895 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset - */ - __pyx_t_2 = ((!(__pyx_v_is_slice != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":896 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = ( dst.data)[0] + suboffset - * else: - */ - __pyx_t_2 = ((__pyx_v_new_ndim == 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":897 - * if not is_slice: - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset # <<<<<<<<<<<<<< - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " - */ - __pyx_v_dst->data = ((((char **)__pyx_v_dst->data)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":896 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = ( dst.data)[0] + suboffset - * else: - */ - goto __pyx_L26; - } - - /* "View.MemoryView":899 - * dst.data = ( dst.data)[0] + suboffset - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " # <<<<<<<<<<<<<< - * 
"must be indexed and not sliced", dim) - * else: - */ - /*else*/ { - - /* "View.MemoryView":900 - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " - * "must be indexed and not sliced", dim) # <<<<<<<<<<<<<< - * else: - * suboffset_dim[0] = new_ndim - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"All dimensions preceding dimension %d must be indexed and not sliced"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 899, __pyx_L1_error) - } - __pyx_L26:; - - /* "View.MemoryView":895 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset - */ - goto __pyx_L25; - } - - /* "View.MemoryView":902 - * "must be indexed and not sliced", dim) - * else: - * suboffset_dim[0] = new_ndim # <<<<<<<<<<<<<< - * - * return 0 - */ - /*else*/ { - (__pyx_v_suboffset_dim[0]) = __pyx_v_new_ndim; - } - __pyx_L25:; - - /* "View.MemoryView":894 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - } - - /* "View.MemoryView":904 - * suboffset_dim[0] = new_ndim - * - * return 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":807 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.slice_memviewslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = -1; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":910 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - -static char *__pyx_pybuffer_index(Py_buffer *__pyx_v_view, char *__pyx_v_bufp, Py_ssize_t __pyx_v_index, Py_ssize_t __pyx_v_dim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_suboffset; - Py_ssize_t __pyx_v_itemsize; - char *__pyx_v_resultp; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("pybuffer_index", 0); - - /* "View.MemoryView":912 - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 # <<<<<<<<<<<<<< - * cdef Py_ssize_t itemsize = view.itemsize - * cdef char *resultp - */ - __pyx_v_suboffset = -1L; - - /* "View.MemoryView":913 - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - * cdef Py_ssize_t itemsize = view.itemsize # <<<<<<<<<<<<<< - * cdef char *resultp - * - */ - __pyx_t_1 = __pyx_v_view->itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":916 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len / itemsize - * stride = itemsize - */ - __pyx_t_2 = ((__pyx_v_view->ndim == 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":917 - * - * if view.ndim == 
0: - * shape = view.len / itemsize # <<<<<<<<<<<<<< - * stride = itemsize - * else: - */ - if (unlikely(__pyx_v_itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 917, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_view->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 917, __pyx_L1_error) - } - __pyx_v_shape = __Pyx_div_Py_ssize_t(__pyx_v_view->len, __pyx_v_itemsize); - - /* "View.MemoryView":918 - * if view.ndim == 0: - * shape = view.len / itemsize - * stride = itemsize # <<<<<<<<<<<<<< - * else: - * shape = view.shape[dim] - */ - __pyx_v_stride = __pyx_v_itemsize; - - /* "View.MemoryView":916 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len / itemsize - * stride = itemsize - */ - goto __pyx_L3; - } - - /* "View.MemoryView":920 - * stride = itemsize - * else: - * shape = view.shape[dim] # <<<<<<<<<<<<<< - * stride = view.strides[dim] - * if view.suboffsets != NULL: - */ - /*else*/ { - __pyx_v_shape = (__pyx_v_view->shape[__pyx_v_dim]); - - /* "View.MemoryView":921 - * else: - * shape = view.shape[dim] - * stride = view.strides[dim] # <<<<<<<<<<<<<< - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] - */ - __pyx_v_stride = (__pyx_v_view->strides[__pyx_v_dim]); - - /* "View.MemoryView":922 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - __pyx_t_2 = ((__pyx_v_view->suboffsets != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":923 - * stride = view.strides[dim] - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] # <<<<<<<<<<<<<< - * - * if index < 0: - */ - __pyx_v_suboffset = (__pyx_v_view->suboffsets[__pyx_v_dim]); - - /* "View.MemoryView":922 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - } - } - __pyx_L3:; - - /* "View.MemoryView":925 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - __pyx_t_2 = ((__pyx_v_index < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":926 - * - * if index < 0: - * index += view.shape[dim] # <<<<<<<<<<<<<< - * if index < 0: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - */ - __pyx_v_index = (__pyx_v_index + (__pyx_v_view->shape[__pyx_v_dim])); - - /* "View.MemoryView":927 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - __pyx_t_2 = ((__pyx_v_index < 0) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":928 - * index += view.shape[dim] - * if index < 0: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<< - * - * if index >= shape: - */ - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 928, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 928, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = 
__Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 928, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 928, __pyx_L1_error) - - /* "View.MemoryView":927 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - } - - /* "View.MemoryView":925 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - } - - /* "View.MemoryView":930 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - __pyx_t_2 = ((__pyx_v_index >= __pyx_v_shape) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":931 - * - * if index >= shape: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<< - * - * resultp = bufp + index * stride - */ - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 931, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 931, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 931, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 931, __pyx_L1_error) - - /* "View.MemoryView":930 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - } - - /* "View.MemoryView":933 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * resultp = bufp + index * stride # <<<<<<<<<<<<<< - * if suboffset >= 0: - * resultp = ( resultp)[0] + suboffset - */ - __pyx_v_resultp = (__pyx_v_bufp + (__pyx_v_index * __pyx_v_stride)); - - /* "View.MemoryView":934 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = ( resultp)[0] + suboffset - * - */ - __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":935 - * resultp = bufp + index * stride - * if suboffset >= 0: - * resultp = ( resultp)[0] + suboffset # <<<<<<<<<<<<<< - * - * return resultp - */ - __pyx_v_resultp = ((((char **)__pyx_v_resultp)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":934 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = ( resultp)[0] + suboffset - * - */ - } - - /* "View.MemoryView":937 - * resultp = ( resultp)[0] + suboffset - * - * return resultp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_resultp; - goto __pyx_L0; - - /* "View.MemoryView":910 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - 
__Pyx_AddTraceback("View.MemoryView.pybuffer_index", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":943 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - -static int __pyx_memslice_transpose(__Pyx_memviewslice *__pyx_v_memslice) { - int __pyx_v_ndim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - int __pyx_v_i; - int __pyx_v_j; - int __pyx_r; - int __pyx_t_1; - Py_ssize_t *__pyx_t_2; - long __pyx_t_3; - long __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_ssize_t __pyx_t_6; - int __pyx_t_7; - int __pyx_t_8; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":944 - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: - * cdef int ndim = memslice.memview.view.ndim # <<<<<<<<<<<<<< - * - * cdef Py_ssize_t *shape = memslice.shape - */ - __pyx_t_1 = __pyx_v_memslice->memview->view.ndim; - __pyx_v_ndim = __pyx_t_1; - - /* "View.MemoryView":946 - * cdef int ndim = memslice.memview.view.ndim - * - * cdef Py_ssize_t *shape = memslice.shape # <<<<<<<<<<<<<< - * cdef Py_ssize_t *strides = memslice.strides - * - */ - __pyx_t_2 = __pyx_v_memslice->shape; - __pyx_v_shape = __pyx_t_2; - - /* "View.MemoryView":947 - * - * cdef Py_ssize_t *shape = memslice.shape - * cdef Py_ssize_t *strides = memslice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = __pyx_v_memslice->strides; - __pyx_v_strides = __pyx_t_2; - - /* "View.MemoryView":951 - * - * cdef int i, j - * for i in range(ndim / 2): # <<<<<<<<<<<<<< - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - */ - __pyx_t_3 = __Pyx_div_long(__pyx_v_ndim, 2); - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_4; __pyx_t_1+=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":952 - * cdef int i, j - * for i in range(ndim / 2): - * j = ndim - 1 - i # <<<<<<<<<<<<<< - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] - */ - __pyx_v_j = ((__pyx_v_ndim - 1) - __pyx_v_i); - - /* "View.MemoryView":953 - * for i in range(ndim / 2): - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] # <<<<<<<<<<<<<< - * shape[i], shape[j] = shape[j], shape[i] - * - */ - __pyx_t_5 = (__pyx_v_strides[__pyx_v_j]); - __pyx_t_6 = (__pyx_v_strides[__pyx_v_i]); - (__pyx_v_strides[__pyx_v_i]) = __pyx_t_5; - (__pyx_v_strides[__pyx_v_j]) = __pyx_t_6; - - /* "View.MemoryView":954 - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] # <<<<<<<<<<<<<< - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - */ - __pyx_t_6 = (__pyx_v_shape[__pyx_v_j]); - __pyx_t_5 = (__pyx_v_shape[__pyx_v_i]); - (__pyx_v_shape[__pyx_v_i]) = __pyx_t_6; - (__pyx_v_shape[__pyx_v_j]) = __pyx_t_5; - - /* "View.MemoryView":956 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - __pyx_t_8 = (((__pyx_v_memslice->suboffsets[__pyx_v_i]) >= 0) != 0); - if (!__pyx_t_8) { - } else { - __pyx_t_7 = __pyx_t_8; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_8 = 
(((__pyx_v_memslice->suboffsets[__pyx_v_j]) >= 0) != 0); - __pyx_t_7 = __pyx_t_8; - __pyx_L6_bool_binop_done:; - if (__pyx_t_7) { - - /* "View.MemoryView":957 - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") # <<<<<<<<<<<<<< - * - * return 1 - */ - __pyx_t_9 = __pyx_memoryview_err(__pyx_builtin_ValueError, ((char *)"Cannot transpose memoryview with indirect dimensions")); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 957, __pyx_L1_error) - - /* "View.MemoryView":956 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - } - } - - /* "View.MemoryView":959 - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - * return 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 1; - goto __pyx_L0; - - /* "View.MemoryView":943 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.transpose_memslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = 0; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":976 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - */ - -/* Python wrapper */ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":977 - * - * def __dealloc__(self): - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __PYX_XDEC_MEMVIEW((&__pyx_v_self->from_slice), 1); - - /* "View.MemoryView":976 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":979 - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) - */ - -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - 
__Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":980 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - __pyx_t_1 = ((__pyx_v_self->to_object_func != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":981 - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) # <<<<<<<<<<<<<< - * else: - * return memoryview.convert_item_to_object(self, itemp) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_v_self->to_object_func(__pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 981, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":980 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - } - - /* "View.MemoryView":983 - * return self.to_object_func(itemp) - * else: - * return memoryview.convert_item_to_object(self, itemp) # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_convert_item_to_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 983, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":979 - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":985 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":986 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - __pyx_t_1 = ((__pyx_v_self->to_dtype_func != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":987 - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) # <<<<<<<<<<<<<< - * else: - * memoryview.assign_item_from_object(self, itemp, value) - */ - __pyx_t_2 = __pyx_v_self->to_dtype_func(__pyx_v_itemp, __pyx_v_value); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 987, __pyx_L1_error) - - /* "View.MemoryView":986 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if 
self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":989 - * self.to_dtype_func(itemp, value) - * else: - * memoryview.assign_item_from_object(self, itemp, value) # <<<<<<<<<<<<<< - * - * @property - */ - /*else*/ { - __pyx_t_3 = __pyx_memoryview_assign_item_from_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 989, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":985 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":992 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":993 - * @property - * def base(self): - * return self.from_object # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->from_object); - __pyx_r = __pyx_v_self->from_object; - goto __pyx_L0; - - /* "View.MemoryView":992 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryviewslice___reduce_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryviewslice_2__setstate_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__19, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code 
*/ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":999 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice __pyx_v_memviewslice, int __pyx_v_ndim, PyObject *(*__pyx_v_to_object_func)(char *), int (*__pyx_v_to_dtype_func)(char *, PyObject *), int __pyx_v_dtype_is_object) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - __Pyx_TypeInfo *__pyx_t_4; - Py_buffer __pyx_t_5; - Py_ssize_t *__pyx_t_6; - Py_ssize_t *__pyx_t_7; - Py_ssize_t *__pyx_t_8; - Py_ssize_t __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_fromslice", 0); - - /* "View.MemoryView":1007 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_1 = ((((PyObject *)__pyx_v_memviewslice.memview) == Py_None) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1008 - * - * if memviewslice.memview == Py_None: - * return None # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - - /* "View.MemoryView":1007 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - } - - /* "View.MemoryView":1013 - * - * - * result = _memoryviewslice(None, 0, dtype_is_object) # <<<<<<<<<<<<<< - * - * result.from_slice = memviewslice - */ - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_3, 0, Py_None); - __Pyx_INCREF(__pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryviewslice_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1015 - * result = _memoryviewslice(None, 0, dtype_is_object) - * - * result.from_slice = memviewslice # <<<<<<<<<<<<<< - * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - */ - __pyx_v_result->from_slice = __pyx_v_memviewslice; - - /* "View.MemoryView":1016 - * - * result.from_slice = memviewslice - * __PYX_INC_MEMVIEW(&memviewslice, 1) # <<<<<<<<<<<<<< - * - * result.from_object = ( memviewslice.memview).base - */ - __PYX_INC_MEMVIEW((&__pyx_v_memviewslice), 1); - - /* "View.MemoryView":1018 - * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - * result.from_object = ( memviewslice.memview).base # <<<<<<<<<<<<<< - * result.typeinfo = 
memviewslice.memview.typeinfo - * - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_memviewslice.memview), __pyx_n_s_base); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1018, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __Pyx_GOTREF(__pyx_v_result->from_object); - __Pyx_DECREF(__pyx_v_result->from_object); - __pyx_v_result->from_object = __pyx_t_2; - __pyx_t_2 = 0; - - /* "View.MemoryView":1019 - * - * result.from_object = ( memviewslice.memview).base - * result.typeinfo = memviewslice.memview.typeinfo # <<<<<<<<<<<<<< - * - * result.view = memviewslice.memview.view - */ - __pyx_t_4 = __pyx_v_memviewslice.memview->typeinfo; - __pyx_v_result->__pyx_base.typeinfo = __pyx_t_4; - - /* "View.MemoryView":1021 - * result.typeinfo = memviewslice.memview.typeinfo - * - * result.view = memviewslice.memview.view # <<<<<<<<<<<<<< - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - */ - __pyx_t_5 = __pyx_v_memviewslice.memview->view; - __pyx_v_result->__pyx_base.view = __pyx_t_5; - - /* "View.MemoryView":1022 - * - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data # <<<<<<<<<<<<<< - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - */ - __pyx_v_result->__pyx_base.view.buf = ((void *)__pyx_v_memviewslice.data); - - /* "View.MemoryView":1023 - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data - * result.view.ndim = ndim # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_v_result->__pyx_base.view.ndim = __pyx_v_ndim; - - /* "View.MemoryView":1024 - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_result->__pyx_base.view))->obj = Py_None; - - /* "View.MemoryView":1025 - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":1027 - * Py_INCREF(Py_None) - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - __pyx_t_1 = ((((struct __pyx_memoryview_obj *)__pyx_v_memviewslice.memview)->flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1028 - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: - * result.flags = PyBUF_RECORDS # <<<<<<<<<<<<<< - * else: - * result.flags = PyBUF_RECORDS_RO - */ - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS; - - /* "View.MemoryView":1027 - * Py_INCREF(Py_None) - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1030 - * result.flags = PyBUF_RECORDS - * else: - * result.flags = PyBUF_RECORDS_RO # <<<<<<<<<<<<<< - * - * result.view.shape = result.from_slice.shape - */ - /*else*/ { - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS_RO; - } - __pyx_L4:; - - /* "View.MemoryView":1032 - * result.flags = PyBUF_RECORDS_RO - * - * result.view.shape = result.from_slice.shape # <<<<<<<<<<<<<< - * result.view.strides = result.from_slice.strides - * - */ - __pyx_v_result->__pyx_base.view.shape = ((Py_ssize_t *)__pyx_v_result->from_slice.shape); - - /* "View.MemoryView":1033 - * - * result.view.shape = result.from_slice.shape - * result.view.strides = 
result.from_slice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_result->__pyx_base.view.strides = ((Py_ssize_t *)__pyx_v_result->from_slice.strides); - - /* "View.MemoryView":1036 - * - * - * result.view.suboffsets = NULL # <<<<<<<<<<<<<< - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - */ - __pyx_v_result->__pyx_base.view.suboffsets = NULL; - - /* "View.MemoryView":1037 - * - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - */ - __pyx_t_7 = (__pyx_v_result->from_slice.suboffsets + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->from_slice.suboffsets; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_v_suboffset = (__pyx_t_6[0]); - - /* "View.MemoryView":1038 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - __pyx_t_1 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1039 - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_result->__pyx_base.view.suboffsets = ((Py_ssize_t *)__pyx_v_result->from_slice.suboffsets); - - /* "View.MemoryView":1040 - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - * break # <<<<<<<<<<<<<< - * - * result.view.len = result.view.itemsize - */ - goto __pyx_L6_break; - - /* "View.MemoryView":1038 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - } - } - __pyx_L6_break:; - - /* "View.MemoryView":1042 - * break - * - * result.view.len = result.view.itemsize # <<<<<<<<<<<<<< - * for length in result.view.shape[:ndim]: - * result.view.len *= length - */ - __pyx_t_9 = __pyx_v_result->__pyx_base.view.itemsize; - __pyx_v_result->__pyx_base.view.len = __pyx_t_9; - - /* "View.MemoryView":1043 - * - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: # <<<<<<<<<<<<<< - * result.view.len *= length - * - */ - __pyx_t_7 = (__pyx_v_result->__pyx_base.view.shape + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->__pyx_base.view.shape; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_t_2 = PyInt_FromSsize_t((__pyx_t_6[0])); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1044 - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: - * result.view.len *= length # <<<<<<<<<<<<<< - * - * result.to_object_func = to_object_func - */ - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_result->__pyx_base.view.len); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_InPlaceMultiply(__pyx_t_2, __pyx_v_length); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_3); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - 
__pyx_v_result->__pyx_base.view.len = __pyx_t_9; - } - - /* "View.MemoryView":1046 - * result.view.len *= length - * - * result.to_object_func = to_object_func # <<<<<<<<<<<<<< - * result.to_dtype_func = to_dtype_func - * - */ - __pyx_v_result->to_object_func = __pyx_v_to_object_func; - - /* "View.MemoryView":1047 - * - * result.to_object_func = to_object_func - * result.to_dtype_func = to_dtype_func # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->to_dtype_func = __pyx_v_to_dtype_func; - - /* "View.MemoryView":1049 - * result.to_dtype_func = to_dtype_func - * - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":999 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_fromslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1052 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_mslice) { - struct __pyx_memoryviewslice_obj *__pyx_v_obj = 0; - __Pyx_memviewslice *__pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_slice_from_memview", 0); - - /* "View.MemoryView":1055 - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1056 - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): - * obj = memview # <<<<<<<<<<<<<< - * return &obj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 1056, __pyx_L1_error) - __pyx_t_3 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_3); - __pyx_v_obj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":1057 - * if isinstance(memview, _memoryviewslice): - * obj = memview - * return &obj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, mslice) - */ - __pyx_r = (&__pyx_v_obj->from_slice); - goto __pyx_L0; - - /* "View.MemoryView":1055 - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - } - - /* "View.MemoryView":1059 - * return 
&obj.from_slice - * else: - * slice_copy(memview, mslice) # <<<<<<<<<<<<<< - * return mslice - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, __pyx_v_mslice); - - /* "View.MemoryView":1060 - * else: - * slice_copy(memview, mslice) - * return mslice # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_slice_copy') - */ - __pyx_r = __pyx_v_mslice; - goto __pyx_L0; - } - - /* "View.MemoryView":1052 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.get_slice_from_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_obj); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1063 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_dst) { - int __pyx_v_dim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - Py_ssize_t *__pyx_v_suboffsets; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - Py_ssize_t __pyx_t_5; - __Pyx_RefNannySetupContext("slice_copy", 0); - - /* "View.MemoryView":1067 - * cdef (Py_ssize_t*) shape, strides, suboffsets - * - * shape = memview.view.shape # <<<<<<<<<<<<<< - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets - */ - __pyx_t_1 = __pyx_v_memview->view.shape; - __pyx_v_shape = __pyx_t_1; - - /* "View.MemoryView":1068 - * - * shape = memview.view.shape - * strides = memview.view.strides # <<<<<<<<<<<<<< - * suboffsets = memview.view.suboffsets - * - */ - __pyx_t_1 = __pyx_v_memview->view.strides; - __pyx_v_strides = __pyx_t_1; - - /* "View.MemoryView":1069 - * shape = memview.view.shape - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets # <<<<<<<<<<<<<< - * - * dst.memview = <__pyx_memoryview *> memview - */ - __pyx_t_1 = __pyx_v_memview->view.suboffsets; - __pyx_v_suboffsets = __pyx_t_1; - - /* "View.MemoryView":1071 - * suboffsets = memview.view.suboffsets - * - * dst.memview = <__pyx_memoryview *> memview # <<<<<<<<<<<<<< - * dst.data = memview.view.buf - * - */ - __pyx_v_dst->memview = ((struct __pyx_memoryview_obj *)__pyx_v_memview); - - /* "View.MemoryView":1072 - * - * dst.memview = <__pyx_memoryview *> memview - * dst.data = memview.view.buf # <<<<<<<<<<<<<< - * - * for dim in range(memview.view.ndim): - */ - __pyx_v_dst->data = ((char *)__pyx_v_memview->view.buf); - - /* "View.MemoryView":1074 - * dst.data = memview.view.buf - * - * for dim in range(memview.view.ndim): # <<<<<<<<<<<<<< - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - */ - __pyx_t_2 = __pyx_v_memview->view.ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_dim = __pyx_t_4; - - /* "View.MemoryView":1075 - * - * for dim in range(memview.view.ndim): - * dst.shape[dim] = shape[dim] # <<<<<<<<<<<<<< - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - */ - (__pyx_v_dst->shape[__pyx_v_dim]) = 
(__pyx_v_shape[__pyx_v_dim]); - - /* "View.MemoryView":1076 - * for dim in range(memview.view.ndim): - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] # <<<<<<<<<<<<<< - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - * - */ - (__pyx_v_dst->strides[__pyx_v_dim]) = (__pyx_v_strides[__pyx_v_dim]); - - /* "View.MemoryView":1077 - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object') - */ - if ((__pyx_v_suboffsets != 0)) { - __pyx_t_5 = (__pyx_v_suboffsets[__pyx_v_dim]); - } else { - __pyx_t_5 = -1L; - } - (__pyx_v_dst->suboffsets[__pyx_v_dim]) = __pyx_t_5; - } - - /* "View.MemoryView":1063 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1080 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *__pyx_v_memview) { - __Pyx_memviewslice __pyx_v_memviewslice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy", 0); - - /* "View.MemoryView":1083 - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) # <<<<<<<<<<<<<< - * return memoryview_copy_from_slice(memview, &memviewslice) - * - */ - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_memviewslice)); - - /* "View.MemoryView":1084 - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) - * return memoryview_copy_from_slice(memview, &memviewslice) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object_from_slice') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_memoryview_copy_object_from_slice(__pyx_v_memview, (&__pyx_v_memviewslice)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1084, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1080 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1087 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview object and slice. 
- */ - -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_memviewslice) { - PyObject *(*__pyx_v_to_object_func)(char *); - int (*__pyx_v_to_dtype_func)(char *, PyObject *); - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *(*__pyx_t_3)(char *); - int (*__pyx_t_4)(char *, PyObject *); - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy_from_slice", 0); - - /* "View.MemoryView":1094 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1095 - * - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func # <<<<<<<<<<<<<< - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - */ - __pyx_t_3 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_object_func; - __pyx_v_to_object_func = __pyx_t_3; - - /* "View.MemoryView":1096 - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func # <<<<<<<<<<<<<< - * else: - * to_object_func = NULL - */ - __pyx_t_4 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_dtype_func; - __pyx_v_to_dtype_func = __pyx_t_4; - - /* "View.MemoryView":1094 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1098 - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - * to_object_func = NULL # <<<<<<<<<<<<<< - * to_dtype_func = NULL - * - */ - /*else*/ { - __pyx_v_to_object_func = NULL; - - /* "View.MemoryView":1099 - * else: - * to_object_func = NULL - * to_dtype_func = NULL # <<<<<<<<<<<<<< - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - */ - __pyx_v_to_dtype_func = NULL; - } - __pyx_L3:; - - /* "View.MemoryView":1101 - * to_dtype_func = NULL - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, # <<<<<<<<<<<<<< - * to_object_func, to_dtype_func, - * memview.dtype_is_object) - */ - __Pyx_XDECREF(__pyx_r); - - /* "View.MemoryView":1103 - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - * to_object_func, to_dtype_func, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __pyx_memoryview_fromslice((__pyx_v_memviewslice[0]), __pyx_v_memview->view.ndim, __pyx_v_to_object_func, __pyx_v_to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 1101, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1087 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview 
object and slice. - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_from_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1109 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<< - * if arg < 0: - * return -arg - */ - -static Py_ssize_t abs_py_ssize_t(Py_ssize_t __pyx_v_arg) { - Py_ssize_t __pyx_r; - int __pyx_t_1; - - /* "View.MemoryView":1110 - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: # <<<<<<<<<<<<<< - * return -arg - * else: - */ - __pyx_t_1 = ((__pyx_v_arg < 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1111 - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: - * return -arg # <<<<<<<<<<<<<< - * else: - * return arg - */ - __pyx_r = (-__pyx_v_arg); - goto __pyx_L0; - - /* "View.MemoryView":1110 - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: # <<<<<<<<<<<<<< - * return -arg - * else: - */ - } - - /* "View.MemoryView":1113 - * return -arg - * else: - * return arg # <<<<<<<<<<<<<< - * - * @cname('__pyx_get_best_slice_order') - */ - /*else*/ { - __pyx_r = __pyx_v_arg; - goto __pyx_L0; - } - - /* "View.MemoryView":1109 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<< - * if arg < 0: - * return -arg - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1116 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. 
- */ - -static char __pyx_get_best_slice_order(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim) { - int __pyx_v_i; - Py_ssize_t __pyx_v_c_stride; - Py_ssize_t __pyx_v_f_stride; - char __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1121 - * """ - * cdef int i - * cdef Py_ssize_t c_stride = 0 # <<<<<<<<<<<<<< - * cdef Py_ssize_t f_stride = 0 - * - */ - __pyx_v_c_stride = 0; - - /* "View.MemoryView":1122 - * cdef int i - * cdef Py_ssize_t c_stride = 0 - * cdef Py_ssize_t f_stride = 0 # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_f_stride = 0; - - /* "View.MemoryView":1124 - * cdef Py_ssize_t f_stride = 0 - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1125 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1126 - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_c_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1127 - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - goto __pyx_L4_break; - - /* "View.MemoryView":1125 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L4_break:; - - /* "View.MemoryView":1129 - * break - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - */ - __pyx_t_1 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_1; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1130 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride = mslice.strides[i] - * break - */ - __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1131 - * for i in range(ndim): - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_f_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1132 - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - */ - goto __pyx_L7_break; - - /* "View.MemoryView":1130 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L7_break:; - - /* "View.MemoryView":1134 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - __pyx_t_2 = ((abs_py_ssize_t(__pyx_v_c_stride) <= abs_py_ssize_t(__pyx_v_f_stride)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1135 - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - * return 'C' # <<<<<<<<<<<<<< - * else: - * return 'F' - */ - __pyx_r = 'C'; - goto __pyx_L0; - - /* "View.MemoryView":1134 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - } - - /* "View.MemoryView":1137 - * return 'C' - * else: - 
* return 'F' # <<<<<<<<<<<<<< - * - * @cython.cdivision(True) - */ - /*else*/ { - __pyx_r = 'F'; - goto __pyx_L0; - } - - /* "View.MemoryView":1116 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1140 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - -static void _copy_strided_to_strided(char *__pyx_v_src_data, Py_ssize_t *__pyx_v_src_strides, char *__pyx_v_dst_data, Py_ssize_t *__pyx_v_dst_strides, Py_ssize_t *__pyx_v_src_shape, Py_ssize_t *__pyx_v_dst_shape, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - CYTHON_UNUSED Py_ssize_t __pyx_v_src_extent; - Py_ssize_t __pyx_v_dst_extent; - Py_ssize_t __pyx_v_src_stride; - Py_ssize_t __pyx_v_dst_stride; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - Py_ssize_t __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_ssize_t __pyx_t_6; - - /* "View.MemoryView":1147 - * - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - */ - __pyx_v_src_extent = (__pyx_v_src_shape[0]); - - /* "View.MemoryView":1148 - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] - */ - __pyx_v_dst_extent = (__pyx_v_dst_shape[0]); - - /* "View.MemoryView":1149 - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - */ - __pyx_v_src_stride = (__pyx_v_src_strides[0]); - - /* "View.MemoryView":1150 - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_dst_stride = (__pyx_v_dst_strides[0]); - - /* "View.MemoryView":1152 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ - __pyx_t_1 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1153 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - __pyx_t_2 = ((__pyx_v_src_stride > 0) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - __pyx_t_2 = ((__pyx_v_dst_stride > 0) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - - /* "View.MemoryView":1154 - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - */ - __pyx_t_2 = (((size_t)__pyx_v_src_stride) == __pyx_v_itemsize); - if (__pyx_t_2) { - __pyx_t_2 = (__pyx_v_itemsize == ((size_t)__pyx_v_dst_stride)); - } - __pyx_t_3 = (__pyx_t_2 != 
0); - __pyx_t_1 = __pyx_t_3; - __pyx_L5_bool_binop_done:; - - /* "View.MemoryView":1153 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - if (__pyx_t_1) { - - /* "View.MemoryView":1155 - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, (__pyx_v_itemsize * __pyx_v_dst_extent))); - - /* "View.MemoryView":1153 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1157 - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - */ - /*else*/ { - __pyx_t_4 = __pyx_v_dst_extent; - __pyx_t_5 = __pyx_t_4; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1158 - * else: - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) # <<<<<<<<<<<<<< - * src_data += src_stride - * dst_data += dst_stride - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, __pyx_v_itemsize)); - - /* "View.MemoryView":1159 - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * else: - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1160 - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L4:; - - /* "View.MemoryView":1152 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1162 - * dst_data += dst_stride - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * _copy_strided_to_strided(src_data, src_strides + 1, - * dst_data, dst_strides + 1, - */ - /*else*/ { - __pyx_t_4 = __pyx_v_dst_extent; - __pyx_t_5 = __pyx_t_4; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1163 - * else: - * for i in range(dst_extent): - * _copy_strided_to_strided(src_data, src_strides + 1, # <<<<<<<<<<<<<< - * dst_data, dst_strides + 1, - * src_shape + 1, dst_shape + 1, - */ - _copy_strided_to_strided(__pyx_v_src_data, (__pyx_v_src_strides + 1), __pyx_v_dst_data, (__pyx_v_dst_strides + 1), (__pyx_v_src_shape + 1), (__pyx_v_dst_shape + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize); - - /* "View.MemoryView":1167 - * src_shape + 1, dst_shape + 1, - * ndim - 1, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1168 - * ndim - 1, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L3:; - - 
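- /* How _copy_strided_to_strided recurses (descriptive note): for ndim > 1
-  * it walks the dst_extent sub-slices of the leading dimension, advancing
-  * src_data and dst_data by their first strides and recursing on the
-  * remaining dimensions.  At ndim == 1 it collapses to one memcpy only
-  * when both strides are positive and equal to itemsize, i.e. both runs
-  * are contiguous; otherwise it copies itemsize bytes per element.
-  * Worked example (int32, itemsize 4): a C-contiguous 2x3 slice has
-  * strides {12, 4}; copying it into an F-contiguous 2x3 destination
-  * (strides {4, 8}) fails the fast-path test at the innermost level
-  * (4 != 8), so each element is copied individually. */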
/* "View.MemoryView":1140 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - - /* function exit code */ -} - -/* "View.MemoryView":1170 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - */ - -static void copy_strided_to_strided(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - - /* "View.MemoryView":1173 - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - * _copy_strided_to_strided(src.data, src.strides, dst.data, dst.strides, # <<<<<<<<<<<<<< - * src.shape, dst.shape, ndim, itemsize) - * - */ - _copy_strided_to_strided(__pyx_v_src->data, __pyx_v_src->strides, __pyx_v_dst->data, __pyx_v_dst->strides, __pyx_v_src->shape, __pyx_v_dst->shape, __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1170 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1177 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *__pyx_v_src, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_size; - Py_ssize_t __pyx_r; - Py_ssize_t __pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - - /* "View.MemoryView":1179 - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize # <<<<<<<<<<<<<< - * - * for shape in src.shape[:ndim]: - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_size = __pyx_t_1; - - /* "View.MemoryView":1181 - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - * - * for shape in src.shape[:ndim]: # <<<<<<<<<<<<<< - * size *= shape - * - */ - __pyx_t_3 = (__pyx_v_src->shape + __pyx_v_ndim); - for (__pyx_t_4 = __pyx_v_src->shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_v_shape = (__pyx_t_2[0]); - - /* "View.MemoryView":1182 - * - * for shape in src.shape[:ndim]: - * size *= shape # <<<<<<<<<<<<<< - * - * return size - */ - __pyx_v_size = (__pyx_v_size * __pyx_v_shape); - } - - /* "View.MemoryView":1184 - * size *= shape - * - * return size # <<<<<<<<<<<<<< - * - * @cname('__pyx_fill_contig_strides_array') - */ - __pyx_r = __pyx_v_size; - goto __pyx_L0; - - /* "View.MemoryView":1177 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1187 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * 
Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) nogil: - */ - -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, Py_ssize_t __pyx_v_stride, int __pyx_v_ndim, char __pyx_v_order) { - int __pyx_v_idx; - Py_ssize_t __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1196 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - __pyx_t_1 = ((__pyx_v_order == 'F') != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1197 - * - * if order == 'F': - * for idx in range(ndim): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - __pyx_t_2 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_idx = __pyx_t_4; - - /* "View.MemoryView":1198 - * if order == 'F': - * for idx in range(ndim): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * else: - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1199 - * for idx in range(ndim): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * else: - * for idx in range(ndim - 1, -1, -1): - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - - /* "View.MemoryView":1196 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1201 - * stride *= shape[idx] - * else: - * for idx in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - /*else*/ { - for (__pyx_t_2 = (__pyx_v_ndim - 1); __pyx_t_2 > -1; __pyx_t_2-=1) { - __pyx_v_idx = __pyx_t_2; - - /* "View.MemoryView":1202 - * else: - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1203 - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * - * return stride - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - } - __pyx_L3:; - - /* "View.MemoryView":1205 - * stride *= shape[idx] - * - * return stride # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_data_to_temp') - */ - __pyx_r = __pyx_v_stride; - goto __pyx_L0; - - /* "View.MemoryView":1187 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) nogil: - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1208 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - -static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_tmpslice, char __pyx_v_order, int __pyx_v_ndim) { - int __pyx_v_i; - void *__pyx_v_result; - size_t __pyx_v_itemsize; - size_t __pyx_v_size; - void *__pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - struct __pyx_memoryview_obj *__pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":1219 - * cdef void *result - * - * cdef 
size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef size_t size = slice_get_size(src, ndim) - * - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1220 - * - * cdef size_t itemsize = src.memview.view.itemsize - * cdef size_t size = slice_get_size(src, ndim) # <<<<<<<<<<<<<< - * - * result = malloc(size) - */ - __pyx_v_size = __pyx_memoryview_slice_get_size(__pyx_v_src, __pyx_v_ndim); - - /* "View.MemoryView":1222 - * cdef size_t size = slice_get_size(src, ndim) - * - * result = malloc(size) # <<<<<<<<<<<<<< - * if not result: - * _err(MemoryError, NULL) - */ - __pyx_v_result = malloc(__pyx_v_size); - - /* "View.MemoryView":1223 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err(MemoryError, NULL) - * - */ - __pyx_t_2 = ((!(__pyx_v_result != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1224 - * result = malloc(size) - * if not result: - * _err(MemoryError, NULL) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_err(__pyx_builtin_MemoryError, NULL); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 1224, __pyx_L1_error) - - /* "View.MemoryView":1223 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err(MemoryError, NULL) - * - */ - } - - /* "View.MemoryView":1227 - * - * - * tmpslice.data = result # <<<<<<<<<<<<<< - * tmpslice.memview = src.memview - * for i in range(ndim): - */ - __pyx_v_tmpslice->data = ((char *)__pyx_v_result); - - /* "View.MemoryView":1228 - * - * tmpslice.data = result - * tmpslice.memview = src.memview # <<<<<<<<<<<<<< - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - */ - __pyx_t_4 = __pyx_v_src->memview; - __pyx_v_tmpslice->memview = __pyx_t_4; - - /* "View.MemoryView":1229 - * tmpslice.data = result - * tmpslice.memview = src.memview - * for i in range(ndim): # <<<<<<<<<<<<<< - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1230 - * tmpslice.memview = src.memview - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] # <<<<<<<<<<<<<< - * tmpslice.suboffsets[i] = -1 - * - */ - (__pyx_v_tmpslice->shape[__pyx_v_i]) = (__pyx_v_src->shape[__pyx_v_i]); - - /* "View.MemoryView":1231 - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, - */ - (__pyx_v_tmpslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1233 - * tmpslice.suboffsets[i] = -1 - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, # <<<<<<<<<<<<<< - * ndim, order) - * - */ - (void)(__pyx_fill_contig_strides_array((&(__pyx_v_tmpslice->shape[0])), (&(__pyx_v_tmpslice->strides[0])), __pyx_v_itemsize, __pyx_v_ndim, __pyx_v_order)); - - /* "View.MemoryView":1237 - * - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1238 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - __pyx_t_2 = (((__pyx_v_tmpslice->shape[__pyx_v_i]) == 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1239 - * for i 
in range(ndim): - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 # <<<<<<<<<<<<<< - * - * if slice_is_contig(src[0], order, ndim): - */ - (__pyx_v_tmpslice->strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1238 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - } - } - - /* "View.MemoryView":1241 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig((__pyx_v_src[0]), __pyx_v_order, __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1242 - * - * if slice_is_contig(src[0], order, ndim): - * memcpy(result, src.data, size) # <<<<<<<<<<<<<< - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - */ - (void)(memcpy(__pyx_v_result, __pyx_v_src->data, __pyx_v_size)); - - /* "View.MemoryView":1241 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":1244 - * memcpy(result, src.data, size) - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) # <<<<<<<<<<<<<< - * - * return result - */ - /*else*/ { - copy_strided_to_strided(__pyx_v_src, __pyx_v_tmpslice, __pyx_v_ndim, __pyx_v_itemsize); - } - __pyx_L9:; - - /* "View.MemoryView":1246 - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":1208 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.copy_data_to_temp", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = NULL; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1251 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - */ - -static int __pyx_memoryview_err_extents(int __pyx_v_i, Py_ssize_t __pyx_v_extent1, Py_ssize_t __pyx_v_extent2) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_extents", 0); - - /* "View.MemoryView":1254 - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - * (i, extent1, extent2)) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err_dim') - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_i); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_extent1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_extent2); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_3); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_3 = 0; - - /* "View.MemoryView":1253 - * cdef int _err_extents(int i, Py_ssize_t extent1, - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % # <<<<<<<<<<<<<< - * (i, extent1, extent2)) - * - */ - __pyx_t_3 = __Pyx_PyString_Format(__pyx_kp_s_got_differing_extents_in_dimensi, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __PYX_ERR(1, 1253, __pyx_L1_error) - - /* "View.MemoryView":1251 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView._err_extents", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1257 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii') % dim) - * - */ - -static int __pyx_memoryview_err_dim(PyObject *__pyx_v_error, char *__pyx_v_msg, int __pyx_v_dim) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_dim", 0); - __Pyx_INCREF(__pyx_v_error); - - /* "View.MemoryView":1258 - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: - * raise error(msg.decode('ascii') % dim) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err') - */ - __pyx_t_2 = __Pyx_decode_c_string(__pyx_v_msg, 0, strlen(__pyx_v_msg), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyUnicode_Format(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_v_error); - __pyx_t_3 
= __pyx_v_error; __pyx_t_2 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - } - } - __pyx_t_1 = (__pyx_t_2) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_2, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 1258, __pyx_L1_error) - - /* "View.MemoryView":1257 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii') % dim) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView._err_dim", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_error); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1261 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: # <<<<<<<<<<<<<< - * if msg != NULL: - * raise error(msg.decode('ascii')) - */ - -static int __pyx_memoryview_err(PyObject *__pyx_v_error, char *__pyx_v_msg) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err", 0); - __Pyx_INCREF(__pyx_v_error); - - /* "View.MemoryView":1262 - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii')) - * else: - */ - __pyx_t_1 = ((__pyx_v_msg != NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":1263 - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: - * raise error(msg.decode('ascii')) # <<<<<<<<<<<<<< - * else: - * raise error - */ - __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_msg, 0, strlen(__pyx_v_msg), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1263, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_error); - __pyx_t_4 = __pyx_v_error; __pyx_t_5 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - } - } - __pyx_t_2 = (__pyx_t_5) ? 
__Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_t_3) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1263, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 1263, __pyx_L1_error) - - /* "View.MemoryView":1262 - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii')) - * else: - */ - } - - /* "View.MemoryView":1265 - * raise error(msg.decode('ascii')) - * else: - * raise error # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_contents') - */ - /*else*/ { - __Pyx_Raise(__pyx_v_error, 0, 0, 0); - __PYX_ERR(1, 1265, __pyx_L1_error) - } - - /* "View.MemoryView":1261 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: # <<<<<<<<<<<<<< - * if msg != NULL: - * raise error(msg.decode('ascii')) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView._err", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_error); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1268 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice __pyx_v_src, __Pyx_memviewslice __pyx_v_dst, int __pyx_v_src_ndim, int __pyx_v_dst_ndim, int __pyx_v_dtype_is_object) { - void *__pyx_v_tmpdata; - size_t __pyx_v_itemsize; - int __pyx_v_i; - char __pyx_v_order; - int __pyx_v_broadcasting; - int __pyx_v_direct_copy; - __Pyx_memviewslice __pyx_v_tmp; - int __pyx_v_ndim; - int __pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - void *__pyx_t_7; - int __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":1276 - * Check for overlapping memory and verify the shapes. 
- * """ - * cdef void *tmpdata = NULL # <<<<<<<<<<<<<< - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - */ - __pyx_v_tmpdata = NULL; - - /* "View.MemoryView":1277 - * """ - * cdef void *tmpdata = NULL - * cdef size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - */ - __pyx_t_1 = __pyx_v_src.memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1279 - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) # <<<<<<<<<<<<<< - * cdef bint broadcasting = False - * cdef bint direct_copy = False - */ - __pyx_v_order = __pyx_get_best_slice_order((&__pyx_v_src), __pyx_v_src_ndim); - - /* "View.MemoryView":1280 - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False # <<<<<<<<<<<<<< - * cdef bint direct_copy = False - * cdef __Pyx_memviewslice tmp - */ - __pyx_v_broadcasting = 0; - - /* "View.MemoryView":1281 - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False - * cdef bint direct_copy = False # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice tmp - * - */ - __pyx_v_direct_copy = 0; - - /* "View.MemoryView":1284 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - __pyx_t_2 = ((__pyx_v_src_ndim < __pyx_v_dst_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1285 - * - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_src), __pyx_v_src_ndim, __pyx_v_dst_ndim); - - /* "View.MemoryView":1284 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1286 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - __pyx_t_2 = ((__pyx_v_dst_ndim < __pyx_v_src_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1287 - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) # <<<<<<<<<<<<<< - * - * cdef int ndim = max(src_ndim, dst_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_dst), __pyx_v_dst_ndim, __pyx_v_src_ndim); - - /* "View.MemoryView":1286 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - } - __pyx_L3:; - - /* "View.MemoryView":1289 - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - * cdef int ndim = max(src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - __pyx_t_3 = __pyx_v_dst_ndim; - __pyx_t_4 = __pyx_v_src_ndim; - if (((__pyx_t_3 > __pyx_t_4) != 0)) { - __pyx_t_5 = __pyx_t_3; - } else { - __pyx_t_5 = __pyx_t_4; - } - __pyx_v_ndim = __pyx_t_5; - - /* "View.MemoryView":1291 - * cdef int ndim = max(src_ndim, dst_ndim) - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - */ - __pyx_t_5 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_5; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - 
__pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1292 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - __pyx_t_2 = (((__pyx_v_src.shape[__pyx_v_i]) != (__pyx_v_dst.shape[__pyx_v_i])) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1293 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - __pyx_t_2 = (((__pyx_v_src.shape[__pyx_v_i]) == 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1294 - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - * broadcasting = True # <<<<<<<<<<<<<< - * src.strides[i] = 0 - * else: - */ - __pyx_v_broadcasting = 1; - - /* "View.MemoryView":1295 - * if src.shape[i] == 1: - * broadcasting = True - * src.strides[i] = 0 # <<<<<<<<<<<<<< - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) - */ - (__pyx_v_src.strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1293 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - goto __pyx_L7; - } - - /* "View.MemoryView":1297 - * src.strides[i] = 0 - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) # <<<<<<<<<<<<<< - * - * if src.suboffsets[i] >= 0: - */ - /*else*/ { - __pyx_t_6 = __pyx_memoryview_err_extents(__pyx_v_i, (__pyx_v_dst.shape[__pyx_v_i]), (__pyx_v_src.shape[__pyx_v_i])); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1297, __pyx_L1_error) - } - __pyx_L7:; - - /* "View.MemoryView":1292 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - } - - /* "View.MemoryView":1299 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - */ - __pyx_t_2 = (((__pyx_v_src.suboffsets[__pyx_v_i]) >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1300 - * - * if src.suboffsets[i] >= 0: - * _err_dim(ValueError, "Dimension %d is not direct", i) # <<<<<<<<<<<<<< - * - * if slices_overlap(&src, &dst, ndim, itemsize): - */ - __pyx_t_6 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Dimension %d is not direct"), __pyx_v_i); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1300, __pyx_L1_error) - - /* "View.MemoryView":1299 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - */ - } - } - - /* "View.MemoryView":1302 - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - __pyx_t_2 = (__pyx_slices_overlap((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1304 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - __pyx_t_2 = ((!(__pyx_memviewslice_is_contig(__pyx_v_src, __pyx_v_order, __pyx_v_ndim) != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1305 - * - * if not slice_is_contig(src, order, ndim): - * order = get_best_order(&dst, ndim) # <<<<<<<<<<<<<< - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - */ - __pyx_v_order = 
__pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim); - - /* "View.MemoryView":1304 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - } - - /* "View.MemoryView":1307 - * order = get_best_order(&dst, ndim) - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) # <<<<<<<<<<<<<< - * src = tmp - * - */ - __pyx_t_7 = __pyx_memoryview_copy_data_to_temp((&__pyx_v_src), (&__pyx_v_tmp), __pyx_v_order, __pyx_v_ndim); if (unlikely(__pyx_t_7 == ((void *)NULL))) __PYX_ERR(1, 1307, __pyx_L1_error) - __pyx_v_tmpdata = __pyx_t_7; - - /* "View.MemoryView":1308 - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - * src = tmp # <<<<<<<<<<<<<< - * - * if not broadcasting: - */ - __pyx_v_src = __pyx_v_tmp; - - /* "View.MemoryView":1302 - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - } - - /* "View.MemoryView":1310 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = ((!(__pyx_v_broadcasting != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1313 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig(__pyx_v_src, 'C', __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1314 - * - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) # <<<<<<<<<<<<<< - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'C', __pyx_v_ndim); - - /* "View.MemoryView":1313 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - goto __pyx_L12; - } - - /* "View.MemoryView":1315 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig(__pyx_v_src, 'F', __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1316 - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) # <<<<<<<<<<<<<< - * - * if direct_copy: - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'F', __pyx_v_ndim); - - /* "View.MemoryView":1315 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - } - __pyx_L12:; - - /* "View.MemoryView":1318 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - __pyx_t_2 = (__pyx_v_direct_copy != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1320 - * if direct_copy: - * - * refcount_copying(&dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - 
/* "View.MemoryView":1321 - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) - */ - (void)(memcpy(__pyx_v_dst.data, __pyx_v_src.data, __pyx_memoryview_slice_get_size((&__pyx_v_src), __pyx_v_ndim))); - - /* "View.MemoryView":1322 - * refcount_copying(&dst, dtype_is_object, ndim, False) - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * free(tmpdata) - * return 0 - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1323 - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ - free(__pyx_v_tmpdata); - - /* "View.MemoryView":1324 - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * if order == 'F' == get_best_order(&dst, ndim): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1318 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - } - - /* "View.MemoryView":1310 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1326 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = (__pyx_v_order == 'F'); - if (__pyx_t_2) { - __pyx_t_2 = ('F' == __pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim)); - } - __pyx_t_8 = (__pyx_t_2 != 0); - if (__pyx_t_8) { - - /* "View.MemoryView":1329 - * - * - * transpose_memslice(&src) # <<<<<<<<<<<<<< - * transpose_memslice(&dst) - * - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_src)); if (unlikely(__pyx_t_5 == ((int)0))) __PYX_ERR(1, 1329, __pyx_L1_error) - - /* "View.MemoryView":1330 - * - * transpose_memslice(&src) - * transpose_memslice(&dst) # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_dst)); if (unlikely(__pyx_t_5 == ((int)0))) __PYX_ERR(1, 1330, __pyx_L1_error) - - /* "View.MemoryView":1326 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1332 - * transpose_memslice(&dst) - * - * refcount_copying(&dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1333 - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, True) - * - */ - copy_strided_to_strided((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1334 - * refcount_copying(&dst, dtype_is_object, ndim, False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * - * free(tmpdata) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1336 - * refcount_copying(&dst, dtype_is_object, ndim, 
True) - * - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ - free(__pyx_v_tmpdata); - - /* "View.MemoryView":1337 - * - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_broadcast_leading') - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1268 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_contents", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = -1; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1340 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) nogil: - */ - -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim, int __pyx_v_ndim_other) { - int __pyx_v_i; - int __pyx_v_offset; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - - /* "View.MemoryView":1344 - * int ndim_other) nogil: - * cdef int i - * cdef int offset = ndim_other - ndim # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_offset = (__pyx_v_ndim_other - __pyx_v_ndim); - - /* "View.MemoryView":1346 - * cdef int offset = ndim_other - ndim - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1347 - * - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] # <<<<<<<<<<<<<< - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - */ - (__pyx_v_mslice->shape[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->shape[__pyx_v_i]); - - /* "View.MemoryView":1348 - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] # <<<<<<<<<<<<<< - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - */ - (__pyx_v_mslice->strides[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1349 - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] # <<<<<<<<<<<<<< - * - * for i in range(offset): - */ - (__pyx_v_mslice->suboffsets[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->suboffsets[__pyx_v_i]); - } - - /* "View.MemoryView":1351 - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - * for i in range(offset): # <<<<<<<<<<<<<< - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - */ - __pyx_t_1 = __pyx_v_offset; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* "View.MemoryView":1352 - * - * for i in range(offset): - * mslice.shape[i] = 1 # <<<<<<<<<<<<<< - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 - */ - (__pyx_v_mslice->shape[__pyx_v_i]) = 1; - - /* "View.MemoryView":1353 - * for i in range(offset): - * mslice.shape[i] = 1 
- * mslice.strides[i] = mslice.strides[0] # <<<<<<<<<<<<<< - * mslice.suboffsets[i] = -1 - * - */ - (__pyx_v_mslice->strides[__pyx_v_i]) = (__pyx_v_mslice->strides[0]); - - /* "View.MemoryView":1354 - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_mslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1340 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1362 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, # <<<<<<<<<<<<<< - * int ndim, bint inc) nogil: - * - */ - -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_dtype_is_object, int __pyx_v_ndim, int __pyx_v_inc) { - int __pyx_t_1; - - /* "View.MemoryView":1366 - * - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, - * dst.strides, ndim, inc) - */ - __pyx_t_1 = (__pyx_v_dtype_is_object != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1367 - * - * if dtype_is_object: - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, # <<<<<<<<<<<<<< - * dst.strides, ndim, inc) - * - */ - __pyx_memoryview_refcount_objects_in_slice_with_gil(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1366 - * - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, - * dst.strides, ndim, inc) - */ - } - - /* "View.MemoryView":1362 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, # <<<<<<<<<<<<<< - * int ndim, bint inc) nogil: - * - */ - - /* function exit code */ -} - -/* "View.MemoryView":1371 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - */ - -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - __Pyx_RefNannyDeclarations - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("refcount_objects_in_slice_with_gil", 0); - - /* "View.MemoryView":1374 - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - * refcount_objects_in_slice(data, shape, strides, ndim, inc) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, __pyx_v_shape, __pyx_v_strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1371 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif -} - -/* "View.MemoryView":1377 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * 
Py_ssize_t *strides, int ndim, bint inc): - * cdef Py_ssize_t i - */ - -static void __pyx_memoryview_refcount_objects_in_slice(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - __Pyx_RefNannySetupContext("refcount_objects_in_slice", 0); - - /* "View.MemoryView":1381 - * cdef Py_ssize_t i - * - * for i in range(shape[0]): # <<<<<<<<<<<<<< - * if ndim == 1: - * if inc: - */ - __pyx_t_1 = (__pyx_v_shape[0]); - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* "View.MemoryView":1382 - * - * for i in range(shape[0]): - * if ndim == 1: # <<<<<<<<<<<<<< - * if inc: - * Py_INCREF(( data)[0]) - */ - __pyx_t_4 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":1383 - * for i in range(shape[0]): - * if ndim == 1: - * if inc: # <<<<<<<<<<<<<< - * Py_INCREF(( data)[0]) - * else: - */ - __pyx_t_4 = (__pyx_v_inc != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":1384 - * if ndim == 1: - * if inc: - * Py_INCREF(( data)[0]) # <<<<<<<<<<<<<< - * else: - * Py_DECREF(( data)[0]) - */ - Py_INCREF((((PyObject **)__pyx_v_data)[0])); - - /* "View.MemoryView":1383 - * for i in range(shape[0]): - * if ndim == 1: - * if inc: # <<<<<<<<<<<<<< - * Py_INCREF(( data)[0]) - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":1386 - * Py_INCREF(( data)[0]) - * else: - * Py_DECREF(( data)[0]) # <<<<<<<<<<<<<< - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, - */ - /*else*/ { - Py_DECREF((((PyObject **)__pyx_v_data)[0])); - } - __pyx_L6:; - - /* "View.MemoryView":1382 - * - * for i in range(shape[0]): - * if ndim == 1: # <<<<<<<<<<<<<< - * if inc: - * Py_INCREF(( data)[0]) - */ - goto __pyx_L5; - } - - /* "View.MemoryView":1388 - * Py_DECREF(( data)[0]) - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, # <<<<<<<<<<<<<< - * ndim - 1, inc) - * - */ - /*else*/ { - - /* "View.MemoryView":1389 - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, - * ndim - 1, inc) # <<<<<<<<<<<<<< - * - * data += strides[0] - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_inc); - } - __pyx_L5:; - - /* "View.MemoryView":1391 - * ndim - 1, inc) - * - * data += strides[0] # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + (__pyx_v_strides[0])); - } - - /* "View.MemoryView":1377 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, bint inc): - * cdef Py_ssize_t i - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1397 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - */ - -static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item, int __pyx_v_dtype_is_object) { - - /* "View.MemoryView":1400 - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - * refcount_copying(dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * _slice_assign_scalar(dst.data, dst.shape, 
dst.strides, ndim, - * itemsize, item) - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1401 - * bint dtype_is_object) nogil: - * refcount_copying(dst, dtype_is_object, ndim, False) - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, # <<<<<<<<<<<<<< - * itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1403 - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, - * itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * - * - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1397 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1407 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) nogil: - */ - -static void __pyx_memoryview__slice_assign_scalar(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_extent; - int __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - Py_ssize_t __pyx_t_4; - - /* "View.MemoryView":1411 - * size_t itemsize, void *item) nogil: - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t extent = shape[0] - * - */ - __pyx_v_stride = (__pyx_v_strides[0]); - - /* "View.MemoryView":1412 - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] - * cdef Py_ssize_t extent = shape[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_extent = (__pyx_v_shape[0]); - - /* "View.MemoryView":1414 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - __pyx_t_1 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1415 - * - * if ndim == 1: - * for i in range(extent): # <<<<<<<<<<<<<< - * memcpy(data, item, itemsize) - * data += stride - */ - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1416 - * if ndim == 1: - * for i in range(extent): - * memcpy(data, item, itemsize) # <<<<<<<<<<<<<< - * data += stride - * else: - */ - (void)(memcpy(__pyx_v_data, __pyx_v_item, __pyx_v_itemsize)); - - /* "View.MemoryView":1417 - * for i in range(extent): - * memcpy(data, item, itemsize) - * data += stride # <<<<<<<<<<<<<< - * else: - * for i in range(extent): - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - - /* "View.MemoryView":1414 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1419 - * data += stride - * else: - * for i in range(extent): # <<<<<<<<<<<<<< - * _slice_assign_scalar(data, shape + 1, strides + 1, - * ndim - 1, 
itemsize, item) - */ - /*else*/ { - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1420 - * else: - * for i in range(extent): - * _slice_assign_scalar(data, shape + 1, strides + 1, # <<<<<<<<<<<<<< - * ndim - 1, itemsize, item) - * data += stride - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1422 - * _slice_assign_scalar(data, shape + 1, strides + 1, - * ndim - 1, itemsize, item) - * data += stride # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - } - __pyx_L3:; - - /* "View.MemoryView":1407 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) nogil: - */ - - /* function exit code */ -} - -/* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static PyMethodDef __pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum = {"__pyx_unpickle_Enum", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum, METH_VARARGS|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v___pyx_type = 0; - long __pyx_v___pyx_checksum; - PyObject *__pyx_v___pyx_state = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_type,&__pyx_n_s_pyx_checksum,&__pyx_n_s_pyx_state,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_type)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_checksum)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 1); __PYX_ERR(1, 1, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 2); __PYX_ERR(1, 1, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__pyx_unpickle_Enum") < 0)) __PYX_ERR(1, 1, __pyx_L3_error) - } - } else if 
(PyTuple_GET_SIZE(__pyx_args) != 3) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - } - __pyx_v___pyx_type = values[0]; - __pyx_v___pyx_checksum = __Pyx_PyInt_As_long(values[1]); if (unlikely((__pyx_v___pyx_checksum == (long)-1) && PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_v___pyx_state = values[2]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(__pyx_self, __pyx_v___pyx_type, __pyx_v___pyx_checksum, __pyx_v___pyx_state); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_v___pyx_PickleError = 0; - PyObject *__pyx_v___pyx_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum", 0); - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum != 0xb068931: # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - */ - __pyx_t_1 = ((__pyx_v___pyx_checksum != 0xb068931) != 0); - if (__pyx_t_1) { - - /* "(tree fragment)":5 - * cdef object __pyx_result - * if __pyx_checksum != 0xb068931: - * from pickle import PickleError as __pyx_PickleError # <<<<<<<<<<<<<< - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_s_PickleError); - __Pyx_GIVEREF(__pyx_n_s_PickleError); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_PickleError); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_pickle, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_PickleError); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_2); - __pyx_v___pyx_PickleError = __pyx_t_2; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "(tree fragment)":6 - * if __pyx_checksum != 0xb068931: - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) # <<<<<<<<<<<<<< - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: - */ - __pyx_t_2 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if 
(unlikely(!__pyx_t_2)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Incompatible_checksums_s_vs_0xb0, __pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(__pyx_v___pyx_PickleError); - __pyx_t_2 = __pyx_v___pyx_PickleError; __pyx_t_5 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_3 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_5, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 6, __pyx_L1_error) - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum != 0xb068931: # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - */ - } - - /* "(tree fragment)":7 - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) # <<<<<<<<<<<<<< - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_MemviewEnum_type), __pyx_n_s_new); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_3 = (__pyx_t_4) ? 
__Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_4, __pyx_v___pyx_type) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v___pyx_type);
- __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
- if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 7, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_v___pyx_result = __pyx_t_3;
- __pyx_t_3 = 0;
-
- /* "(tree fragment)":8
- * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum)
- * __pyx_result = Enum.__new__(__pyx_type)
- * if __pyx_state is not None: # <<<<<<<<<<<<<<
- * __pyx_unpickle_Enum__set_state(<Enum> __pyx_result, __pyx_state)
- * return __pyx_result
- */
- __pyx_t_1 = (__pyx_v___pyx_state != Py_None);
- __pyx_t_6 = (__pyx_t_1 != 0);
- if (__pyx_t_6) {
-
- /* "(tree fragment)":9
- * __pyx_result = Enum.__new__(__pyx_type)
- * if __pyx_state is not None:
- * __pyx_unpickle_Enum__set_state(<Enum> __pyx_result, __pyx_state) # <<<<<<<<<<<<<<
- * return __pyx_result
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state):
- */
- if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 9, __pyx_L1_error)
- __pyx_t_3 = __pyx_unpickle_Enum__set_state(((struct __pyx_MemviewEnum_obj *)__pyx_v___pyx_result), ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 9, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "(tree fragment)":8
- * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum)
- * __pyx_result = Enum.__new__(__pyx_type)
- * if __pyx_state is not None: # <<<<<<<<<<<<<<
- * __pyx_unpickle_Enum__set_state(<Enum> __pyx_result, __pyx_state)
- * return __pyx_result
- */
- }
-
- /* "(tree fragment)":10
- * if __pyx_state is not None:
- * __pyx_unpickle_Enum__set_state(<Enum> __pyx_result, __pyx_state)
- * return __pyx_result # <<<<<<<<<<<<<<
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state):
- * __pyx_result.name = __pyx_state[0]
- */
- __Pyx_XDECREF(__pyx_r);
- __Pyx_INCREF(__pyx_v___pyx_result);
- __pyx_r = __pyx_v___pyx_result;
- goto __pyx_L0;
-
- /* "(tree fragment)":1
- * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<<
- * cdef object __pyx_PickleError
- * cdef object __pyx_result
- */
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v___pyx_PickleError);
- __Pyx_XDECREF(__pyx_v___pyx_result);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "(tree fragment)":11
- * __pyx_unpickle_Enum__set_state(<Enum> __pyx_result, __pyx_state)
- * return __pyx_result
- * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<<
- * __pyx_result.name = __pyx_state[0]
- * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'):
- */
-
-static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- int __pyx_t_2;
- Py_ssize_t __pyx_t_3;
- int __pyx_t_4;
- int __pyx_t_5;
- PyObject 
*__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum__set_state", 0); - - /* "(tree fragment)":12 - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] # <<<<<<<<<<<<<< - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 12, __pyx_L1_error) - } - __pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __Pyx_GOTREF(__pyx_v___pyx_result->name); - __Pyx_DECREF(__pyx_v___pyx_result->name); - __pyx_v___pyx_result->name = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 13, __pyx_L1_error) - } - __pyx_t_3 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_4 = ((__pyx_t_3 > 1) != 0); - if (__pyx_t_4) { - } else { - __pyx_t_2 = __pyx_t_4; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_4 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_5 = (__pyx_t_4 != 0); - __pyx_t_2 = __pyx_t_5; - __pyx_L4_bool_binop_done:; - if (__pyx_t_2) { - - /* "(tree fragment)":14 - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) # <<<<<<<<<<<<<< - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_update); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 14, __pyx_L1_error) - } - __pyx_t_6 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - } - } - __pyx_t_1 = (__pyx_t_8) ? 
__Pyx_PyObject_Call2Args(__pyx_t_7, __pyx_t_8, __pyx_t_6) : __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_6); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - } - - /* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static struct __pyx_vtabstruct_array __pyx_vtable_array; - -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_array_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_array_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_array; - p->mode = ((PyObject*)Py_None); Py_INCREF(Py_None); - p->_format = ((PyObject*)Py_None); Py_INCREF(Py_None); - if (unlikely(__pyx_array___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_array(PyObject *o) { - struct __pyx_array_obj *p = (struct __pyx_array_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_array___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->mode); - Py_CLEAR(p->_format); - (*Py_TYPE(o)->tp_free)(o); -} -static PyObject *__pyx_sq_item_array(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_array(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_array___setitem__(o, i, v); - } - else { - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name); - return -1; - } -} - -static PyObject *__pyx_tp_getattro_array(PyObject *o, PyObject *n) { - PyObject *v = __Pyx_PyObject_GenericGetAttr(o, n); - if (!v && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - v = __pyx_array___getattr__(o, n); - } - return v; -} - 
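The two helpers quoted earlier from the Cython source (refcount_objects_in_slice, View.MemoryView:1377-1391, and _slice_assign_scalar, View.MemoryView:1407-1422) perform the same depth-first walk over a strided N-dimensional buffer: loop over the leading extent, recurse into the remaining dimensions, and advance the data pointer by the leading byte stride; only the leaf operation differs (Py_INCREF/Py_DECREF versus memcpy). The following standalone C sketch isolates that walk; slice_assign_scalar_sketch and the driver in main are hypothetical names for illustration, not part of the generated module.

#include <stdio.h>
#include <string.h>
#include <stddef.h>

/* Hypothetical standalone version of the recursive walk implemented by
   __pyx_memoryview__slice_assign_scalar above: fill every element of an
   N-dimensional strided buffer with one item. shape[] holds per-dimension
   element counts, strides[] per-dimension byte strides. */
static void slice_assign_scalar_sketch(char *data, const ptrdiff_t *shape,
                                       const ptrdiff_t *strides, int ndim,
                                       size_t itemsize, const void *item)
{
    ptrdiff_t i;
    for (i = 0; i < shape[0]; i++) {
        if (ndim == 1)
            memcpy(data, item, itemsize);  /* leaf: overwrite one element */
        else                               /* peel off the leading dimension */
            slice_assign_scalar_sketch(data, shape + 1, strides + 1,
                                       ndim - 1, itemsize, item);
        data += strides[0];                /* step along the leading axis */
    }
}

int main(void)
{
    /* 2x3 C-contiguous int buffer: byte strides are {3*sizeof(int), sizeof(int)}. */
    int buf[2][3] = {{0}};
    ptrdiff_t shape[2]   = {2, 3};
    ptrdiff_t strides[2] = {3 * sizeof(int), sizeof(int)};
    int item = 7;
    slice_assign_scalar_sketch((char *)buf, shape, strides, 2,
                               sizeof(int), &item);
    printf("%d %d\n", buf[0][0], buf[1][2]);  /* prints: 7 7 */
    return 0;
}

As the quoted slice_assign_scalar (View.MemoryView:1397-1403) shows, the generated code brackets this raw byte copy with refcount_copying(dst, dtype_is_object, ndim, False) before and refcount_copying(dst, dtype_is_object, ndim, True) after, so that for object dtypes the references about to be overwritten are released first and the newly written object pointers are re-counted afterwards.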
-static PyObject *__pyx_getprop___pyx_array_memview(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(o); -} - -static PyMethodDef __pyx_methods_array[] = { - {"__getattr__", (PyCFunction)__pyx_array___getattr__, METH_O|METH_COEXIST, 0}, - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_array_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_array_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_array[] = { - {(char *)"memview", __pyx_getprop___pyx_array_memview, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PySequenceMethods __pyx_tp_as_sequence_array = { - __pyx_array___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_array, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_array = { - __pyx_array___len__, /*mp_length*/ - __pyx_array___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_array, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_array = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_array_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_array = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.array", /*tp_name*/ - sizeof(struct __pyx_array_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_array, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_array, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_array, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - __pyx_tp_getattro_array, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_array, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/ - 0, /*tp_doc*/ - 0, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_array, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_array, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_array, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; - -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - struct __pyx_MemviewEnum_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o 
= (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_MemviewEnum_obj *)o); - p->name = Py_None; Py_INCREF(Py_None); - return o; -} - -static void __pyx_tp_dealloc_Enum(PyObject *o) { - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - Py_CLEAR(p->name); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_Enum(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - if (p->name) { - e = (*v)(p->name, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_Enum(PyObject *o) { - PyObject* tmp; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - tmp = ((PyObject*)p->name); - p->name = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} - -static PyMethodDef __pyx_methods_Enum[] = { - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_MemviewEnum_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_MemviewEnum_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static PyTypeObject __pyx_type___pyx_MemviewEnum = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.Enum", /*tp_name*/ - sizeof(struct __pyx_MemviewEnum_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_Enum, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_MemviewEnum___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_Enum, /*tp_traverse*/ - __pyx_tp_clear_Enum, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_Enum, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - __pyx_MemviewEnum___init__, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_Enum, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; -static struct __pyx_vtabstruct_memoryview __pyx_vtable_memoryview; - -static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryview_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct 
__pyx_memoryview_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_memoryview; - p->obj = Py_None; Py_INCREF(Py_None); - p->_size = Py_None; Py_INCREF(Py_None); - p->_array_interface = Py_None; Py_INCREF(Py_None); - p->view.obj = NULL; - if (unlikely(__pyx_memoryview___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_memoryview(PyObject *o) { - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryview___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->obj); - Py_CLEAR(p->_size); - Py_CLEAR(p->_array_interface); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_memoryview(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - if (p->obj) { - e = (*v)(p->obj, a); if (e) return e; - } - if (p->_size) { - e = (*v)(p->_size, a); if (e) return e; - } - if (p->_array_interface) { - e = (*v)(p->_array_interface, a); if (e) return e; - } - if (p->view.obj) { - e = (*v)(p->view.obj, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_memoryview(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - tmp = ((PyObject*)p->obj); - p->obj = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->_size); - p->_size = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->_array_interface); - p->_array_interface = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - Py_CLEAR(p->view.obj); - return 0; -} -static PyObject *__pyx_sq_item_memoryview(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_memoryview(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_memoryview___setitem__(o, i, v); - } - else { - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name); - return -1; - } -} - -static PyObject *__pyx_getprop___pyx_memoryview_T(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_base(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_shape(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_strides(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_suboffsets(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_ndim(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(o); -} - -static 
PyObject *__pyx_getprop___pyx_memoryview_itemsize(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_nbytes(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_size(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(o); -} - -static PyMethodDef __pyx_methods_memoryview[] = { - {"is_c_contig", (PyCFunction)__pyx_memoryview_is_c_contig, METH_NOARGS, 0}, - {"is_f_contig", (PyCFunction)__pyx_memoryview_is_f_contig, METH_NOARGS, 0}, - {"copy", (PyCFunction)__pyx_memoryview_copy, METH_NOARGS, 0}, - {"copy_fortran", (PyCFunction)__pyx_memoryview_copy_fortran, METH_NOARGS, 0}, - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_memoryview_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_memoryview_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_memoryview[] = { - {(char *)"T", __pyx_getprop___pyx_memoryview_T, 0, (char *)0, 0}, - {(char *)"base", __pyx_getprop___pyx_memoryview_base, 0, (char *)0, 0}, - {(char *)"shape", __pyx_getprop___pyx_memoryview_shape, 0, (char *)0, 0}, - {(char *)"strides", __pyx_getprop___pyx_memoryview_strides, 0, (char *)0, 0}, - {(char *)"suboffsets", __pyx_getprop___pyx_memoryview_suboffsets, 0, (char *)0, 0}, - {(char *)"ndim", __pyx_getprop___pyx_memoryview_ndim, 0, (char *)0, 0}, - {(char *)"itemsize", __pyx_getprop___pyx_memoryview_itemsize, 0, (char *)0, 0}, - {(char *)"nbytes", __pyx_getprop___pyx_memoryview_nbytes, 0, (char *)0, 0}, - {(char *)"size", __pyx_getprop___pyx_memoryview_size, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PySequenceMethods __pyx_tp_as_sequence_memoryview = { - __pyx_memoryview___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_memoryview, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_memoryview = { - __pyx_memoryview___len__, /*mp_length*/ - __pyx_memoryview___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_memoryview, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_memoryview = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_memoryview_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_memoryview = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.memoryview", /*tp_name*/ - sizeof(struct __pyx_memoryview_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_memoryview, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_memoryview___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_memoryview, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_memoryview, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - __pyx_memoryview___str__, 
/*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_memoryview, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_memoryview, /*tp_traverse*/ - __pyx_tp_clear_memoryview, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_memoryview, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_memoryview, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_memoryview, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; -static struct __pyx_vtabstruct__memoryviewslice __pyx_vtable__memoryviewslice; - -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryviewslice_obj *p; - PyObject *o = __pyx_tp_new_memoryview(t, a, k); - if (unlikely(!o)) return 0; - p = ((struct __pyx_memoryviewslice_obj *)o); - p->__pyx_base.__pyx_vtab = (struct __pyx_vtabstruct_memoryview*)__pyx_vtabptr__memoryviewslice; - p->from_object = Py_None; Py_INCREF(Py_None); - p->from_slice.memview = NULL; - return o; -} - -static void __pyx_tp_dealloc__memoryviewslice(PyObject *o) { - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryviewslice___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->from_object); - PyObject_GC_Track(o); - __pyx_tp_dealloc_memoryview(o); -} - -static int __pyx_tp_traverse__memoryviewslice(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - e = __pyx_tp_traverse_memoryview(o, v, a); if (e) return e; - if (p->from_object) { - e = (*v)(p->from_object, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear__memoryviewslice(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - __pyx_tp_clear_memoryview(o); - tmp = ((PyObject*)p->from_object); - p->from_object = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - __PYX_XDEC_MEMVIEW(&p->from_slice, 1); - return 0; -} - -static PyObject *__pyx_getprop___pyx_memoryviewslice_base(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(o); -} - -static PyMethodDef __pyx_methods__memoryviewslice[] = { - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_memoryviewslice_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_memoryviewslice_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef 
__pyx_getsets__memoryviewslice[] = { - {(char *)"base", __pyx_getprop___pyx_memoryviewslice_base, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PyTypeObject __pyx_type___pyx_memoryviewslice = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core._memoryviewslice", /*tp_name*/ - sizeof(struct __pyx_memoryviewslice_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc__memoryviewslice, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - #if CYTHON_COMPILING_IN_PYPY - __pyx_memoryview___repr__, /*tp_repr*/ - #else - 0, /*tp_repr*/ - #endif - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - #if CYTHON_COMPILING_IN_PYPY - __pyx_memoryview___str__, /*tp_str*/ - #else - 0, /*tp_str*/ - #endif - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - "Internal class for passing memoryview slices to Python", /*tp_doc*/ - __pyx_tp_traverse__memoryviewslice, /*tp_traverse*/ - __pyx_tp_clear__memoryviewslice, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods__memoryviewslice, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets__memoryviewslice, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new__memoryviewslice, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; - -static PyMethodDef __pyx_methods[] = { - {"maximum_path_c", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_15monotonic_align_4core_1maximum_path_c, METH_VARARGS|METH_KEYWORDS, 0}, - {0, 0, 0, 0} -}; - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_core(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_core}, - {0, NULL} -}; -#endif - -static struct PyModuleDef __pyx_moduledef = { - PyModuleDef_HEAD_INIT, - "core", - 0, /* m_doc */ - #if CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ -}; -#endif -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif - -static __Pyx_StringTabEntry __pyx_string_tab[] = { - {&__pyx_n_s_ASCII, __pyx_k_ASCII, 
sizeof(__pyx_k_ASCII), 0, 0, 1, 1}, - {&__pyx_kp_s_Buffer_view_does_not_expose_stri, __pyx_k_Buffer_view_does_not_expose_stri, sizeof(__pyx_k_Buffer_view_does_not_expose_stri), 0, 0, 1, 0}, - {&__pyx_kp_s_Can_only_create_a_buffer_that_is, __pyx_k_Can_only_create_a_buffer_that_is, sizeof(__pyx_k_Can_only_create_a_buffer_that_is), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_assign_to_read_only_memor, __pyx_k_Cannot_assign_to_read_only_memor, sizeof(__pyx_k_Cannot_assign_to_read_only_memor), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_create_writable_memory_vi, __pyx_k_Cannot_create_writable_memory_vi, sizeof(__pyx_k_Cannot_create_writable_memory_vi), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_index_with_type_s, __pyx_k_Cannot_index_with_type_s, sizeof(__pyx_k_Cannot_index_with_type_s), 0, 0, 1, 0}, - {&__pyx_n_s_Ellipsis, __pyx_k_Ellipsis, sizeof(__pyx_k_Ellipsis), 0, 0, 1, 1}, - {&__pyx_kp_s_Empty_shape_tuple_for_cython_arr, __pyx_k_Empty_shape_tuple_for_cython_arr, sizeof(__pyx_k_Empty_shape_tuple_for_cython_arr), 0, 0, 1, 0}, - {&__pyx_kp_s_Incompatible_checksums_s_vs_0xb0, __pyx_k_Incompatible_checksums_s_vs_0xb0, sizeof(__pyx_k_Incompatible_checksums_s_vs_0xb0), 0, 0, 1, 0}, - {&__pyx_n_s_IndexError, __pyx_k_IndexError, sizeof(__pyx_k_IndexError), 0, 0, 1, 1}, - {&__pyx_kp_s_Indirect_dimensions_not_supporte, __pyx_k_Indirect_dimensions_not_supporte, sizeof(__pyx_k_Indirect_dimensions_not_supporte), 0, 0, 1, 0}, - {&__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_k_Invalid_mode_expected_c_or_fortr, sizeof(__pyx_k_Invalid_mode_expected_c_or_fortr), 0, 0, 1, 0}, - {&__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_k_Invalid_shape_in_axis_d_d, sizeof(__pyx_k_Invalid_shape_in_axis_d_d), 0, 0, 1, 0}, - {&__pyx_n_s_MemoryError, __pyx_k_MemoryError, sizeof(__pyx_k_MemoryError), 0, 0, 1, 1}, - {&__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_k_MemoryView_of_r_at_0x_x, sizeof(__pyx_k_MemoryView_of_r_at_0x_x), 0, 0, 1, 0}, - {&__pyx_kp_s_MemoryView_of_r_object, __pyx_k_MemoryView_of_r_object, sizeof(__pyx_k_MemoryView_of_r_object), 0, 0, 1, 0}, - {&__pyx_n_b_O, __pyx_k_O, sizeof(__pyx_k_O), 0, 0, 0, 1}, - {&__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_k_Out_of_bounds_on_buffer_access_a, sizeof(__pyx_k_Out_of_bounds_on_buffer_access_a), 0, 0, 1, 0}, - {&__pyx_n_s_PickleError, __pyx_k_PickleError, sizeof(__pyx_k_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_TypeError, __pyx_k_TypeError, sizeof(__pyx_k_TypeError), 0, 0, 1, 1}, - {&__pyx_kp_s_Unable_to_convert_item_to_object, __pyx_k_Unable_to_convert_item_to_object, sizeof(__pyx_k_Unable_to_convert_item_to_object), 0, 0, 1, 0}, - {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1}, - {&__pyx_n_s_View_MemoryView, __pyx_k_View_MemoryView, sizeof(__pyx_k_View_MemoryView), 0, 0, 1, 1}, - {&__pyx_n_s_allocate_buffer, __pyx_k_allocate_buffer, sizeof(__pyx_k_allocate_buffer), 0, 0, 1, 1}, - {&__pyx_n_s_base, __pyx_k_base, sizeof(__pyx_k_base), 0, 0, 1, 1}, - {&__pyx_n_s_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 0, 1, 1}, - {&__pyx_n_u_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 1, 0, 1}, - {&__pyx_n_s_class, __pyx_k_class, sizeof(__pyx_k_class), 0, 0, 1, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_kp_s_contiguous_and_direct, __pyx_k_contiguous_and_direct, sizeof(__pyx_k_contiguous_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_contiguous_and_indirect, __pyx_k_contiguous_and_indirect, sizeof(__pyx_k_contiguous_and_indirect), 0, 0, 1, 0}, - {&__pyx_n_s_dict, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 
1, 1}, - {&__pyx_n_s_dtype_is_object, __pyx_k_dtype_is_object, sizeof(__pyx_k_dtype_is_object), 0, 0, 1, 1}, - {&__pyx_n_s_encode, __pyx_k_encode, sizeof(__pyx_k_encode), 0, 0, 1, 1}, - {&__pyx_n_s_enumerate, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1}, - {&__pyx_n_s_error, __pyx_k_error, sizeof(__pyx_k_error), 0, 0, 1, 1}, - {&__pyx_n_s_flags, __pyx_k_flags, sizeof(__pyx_k_flags), 0, 0, 1, 1}, - {&__pyx_n_s_format, __pyx_k_format, sizeof(__pyx_k_format), 0, 0, 1, 1}, - {&__pyx_n_s_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 0, 1, 1}, - {&__pyx_n_u_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 1, 0, 1}, - {&__pyx_n_s_getstate, __pyx_k_getstate, sizeof(__pyx_k_getstate), 0, 0, 1, 1}, - {&__pyx_kp_s_got_differing_extents_in_dimensi, __pyx_k_got_differing_extents_in_dimensi, sizeof(__pyx_k_got_differing_extents_in_dimensi), 0, 0, 1, 0}, - {&__pyx_n_s_id, __pyx_k_id, sizeof(__pyx_k_id), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_s_itemsize, __pyx_k_itemsize, sizeof(__pyx_k_itemsize), 0, 0, 1, 1}, - {&__pyx_kp_s_itemsize_0_for_cython_array, __pyx_k_itemsize_0_for_cython_array, sizeof(__pyx_k_itemsize_0_for_cython_array), 0, 0, 1, 0}, - {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {&__pyx_n_s_memview, __pyx_k_memview, sizeof(__pyx_k_memview), 0, 0, 1, 1}, - {&__pyx_n_s_mode, __pyx_k_mode, sizeof(__pyx_k_mode), 0, 0, 1, 1}, - {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {&__pyx_n_s_name_2, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1}, - {&__pyx_n_s_ndim, __pyx_k_ndim, sizeof(__pyx_k_ndim), 0, 0, 1, 1}, - {&__pyx_n_s_new, __pyx_k_new, sizeof(__pyx_k_new), 0, 0, 1, 1}, - {&__pyx_kp_s_no_default___reduce___due_to_non, __pyx_k_no_default___reduce___due_to_non, sizeof(__pyx_k_no_default___reduce___due_to_non), 0, 0, 1, 0}, - {&__pyx_n_s_obj, __pyx_k_obj, sizeof(__pyx_k_obj), 0, 0, 1, 1}, - {&__pyx_n_s_pack, __pyx_k_pack, sizeof(__pyx_k_pack), 0, 0, 1, 1}, - {&__pyx_n_s_paths, __pyx_k_paths, sizeof(__pyx_k_paths), 0, 0, 1, 1}, - {&__pyx_n_s_pickle, __pyx_k_pickle, sizeof(__pyx_k_pickle), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_PickleError, __pyx_k_pyx_PickleError, sizeof(__pyx_k_pyx_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_checksum, __pyx_k_pyx_checksum, sizeof(__pyx_k_pyx_checksum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_getbuffer, __pyx_k_pyx_getbuffer, sizeof(__pyx_k_pyx_getbuffer), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_result, __pyx_k_pyx_result, sizeof(__pyx_k_pyx_result), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_state, __pyx_k_pyx_state, sizeof(__pyx_k_pyx_state), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_type, __pyx_k_pyx_type, sizeof(__pyx_k_pyx_type), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_unpickle_Enum, __pyx_k_pyx_unpickle_Enum, sizeof(__pyx_k_pyx_unpickle_Enum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_vtable, __pyx_k_pyx_vtable, sizeof(__pyx_k_pyx_vtable), 0, 0, 1, 1}, - {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, - {&__pyx_n_s_reduce, __pyx_k_reduce, sizeof(__pyx_k_reduce), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_cython, __pyx_k_reduce_cython, sizeof(__pyx_k_reduce_cython), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_ex, __pyx_k_reduce_ex, sizeof(__pyx_k_reduce_ex), 0, 0, 1, 1}, - {&__pyx_n_s_setstate, __pyx_k_setstate, sizeof(__pyx_k_setstate), 0, 0, 1, 1}, - {&__pyx_n_s_setstate_cython, __pyx_k_setstate_cython, sizeof(__pyx_k_setstate_cython), 0, 0, 1, 1}, - {&__pyx_n_s_shape, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1}, - {&__pyx_n_s_size, __pyx_k_size, sizeof(__pyx_k_size), 0, 0, 1, 
1}, - {&__pyx_n_s_start, __pyx_k_start, sizeof(__pyx_k_start), 0, 0, 1, 1}, - {&__pyx_n_s_step, __pyx_k_step, sizeof(__pyx_k_step), 0, 0, 1, 1}, - {&__pyx_n_s_stop, __pyx_k_stop, sizeof(__pyx_k_stop), 0, 0, 1, 1}, - {&__pyx_kp_s_strided_and_direct, __pyx_k_strided_and_direct, sizeof(__pyx_k_strided_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_direct_or_indirect, __pyx_k_strided_and_direct_or_indirect, sizeof(__pyx_k_strided_and_direct_or_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_indirect, __pyx_k_strided_and_indirect, sizeof(__pyx_k_strided_and_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_stringsource, __pyx_k_stringsource, sizeof(__pyx_k_stringsource), 0, 0, 1, 0}, - {&__pyx_n_s_struct, __pyx_k_struct, sizeof(__pyx_k_struct), 0, 0, 1, 1}, - {&__pyx_n_s_t_xs, __pyx_k_t_xs, sizeof(__pyx_k_t_xs), 0, 0, 1, 1}, - {&__pyx_n_s_t_ys, __pyx_k_t_ys, sizeof(__pyx_k_t_ys), 0, 0, 1, 1}, - {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {&__pyx_kp_s_unable_to_allocate_array_data, __pyx_k_unable_to_allocate_array_data, sizeof(__pyx_k_unable_to_allocate_array_data), 0, 0, 1, 0}, - {&__pyx_kp_s_unable_to_allocate_shape_and_str, __pyx_k_unable_to_allocate_shape_and_str, sizeof(__pyx_k_unable_to_allocate_shape_and_str), 0, 0, 1, 0}, - {&__pyx_n_s_unpack, __pyx_k_unpack, sizeof(__pyx_k_unpack), 0, 0, 1, 1}, - {&__pyx_n_s_update, __pyx_k_update, sizeof(__pyx_k_update), 0, 0, 1, 1}, - {&__pyx_n_s_values, __pyx_k_values, sizeof(__pyx_k_values), 0, 0, 1, 1}, - {0, 0, 0, 0, 0, 0, 0} -}; -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 15, __pyx_L1_error) - __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 133, __pyx_L1_error) - __pyx_builtin_MemoryError = __Pyx_GetBuiltinName(__pyx_n_s_MemoryError); if (!__pyx_builtin_MemoryError) __PYX_ERR(1, 148, __pyx_L1_error) - __pyx_builtin_enumerate = __Pyx_GetBuiltinName(__pyx_n_s_enumerate); if (!__pyx_builtin_enumerate) __PYX_ERR(1, 151, __pyx_L1_error) - __pyx_builtin_TypeError = __Pyx_GetBuiltinName(__pyx_n_s_TypeError); if (!__pyx_builtin_TypeError) __PYX_ERR(1, 2, __pyx_L1_error) - __pyx_builtin_Ellipsis = __Pyx_GetBuiltinName(__pyx_n_s_Ellipsis); if (!__pyx_builtin_Ellipsis) __PYX_ERR(1, 404, __pyx_L1_error) - __pyx_builtin_id = __Pyx_GetBuiltinName(__pyx_n_s_id); if (!__pyx_builtin_id) __PYX_ERR(1, 613, __pyx_L1_error) - __pyx_builtin_IndexError = __Pyx_GetBuiltinName(__pyx_n_s_IndexError); if (!__pyx_builtin_IndexError) __PYX_ERR(1, 832, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* "View.MemoryView":133 - * - * if not self.ndim: - * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<< - * - * if itemsize <= 0: - */ - __pyx_tuple__2 = PyTuple_Pack(1, __pyx_kp_s_Empty_shape_tuple_for_cython_arr); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__2); - __Pyx_GIVEREF(__pyx_tuple__2); - - /* "View.MemoryView":136 - * - * if itemsize <= 0: - * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<< - * - * if not isinstance(format, bytes): - */ - __pyx_tuple__3 = PyTuple_Pack(1, __pyx_kp_s_itemsize_0_for_cython_array); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(1, 136, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_tuple__3); - __Pyx_GIVEREF(__pyx_tuple__3); - - /* "View.MemoryView":148 - * - * if not self._shape: - * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_s_unable_to_allocate_shape_and_str); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__4); - __Pyx_GIVEREF(__pyx_tuple__4); - - /* "View.MemoryView":176 - * self.data = malloc(self.len) - * if not self.data: - * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<< - * - * if self.dtype_is_object: - */ - __pyx_tuple__5 = PyTuple_Pack(1, __pyx_kp_s_unable_to_allocate_array_data); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(1, 176, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__5); - __Pyx_GIVEREF(__pyx_tuple__5); - - /* "View.MemoryView":192 - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<< - * info.buf = self.data - * info.len = self.len - */ - __pyx_tuple__6 = PyTuple_Pack(1, __pyx_kp_s_Can_only_create_a_buffer_that_is); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(1, 192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__6); - __Pyx_GIVEREF(__pyx_tuple__6); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__7); - __Pyx_GIVEREF(__pyx_tuple__7); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__8); - __Pyx_GIVEREF(__pyx_tuple__8); - - /* "View.MemoryView":418 - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<< - * - * have_slices, index = _unellipsify(index, self.view.ndim) - */ - __pyx_tuple__9 = PyTuple_Pack(1, __pyx_kp_s_Cannot_assign_to_read_only_memor); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(1, 418, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__9); - __Pyx_GIVEREF(__pyx_tuple__9); - - /* "View.MemoryView":495 - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - * raise ValueError("Unable to convert item to object") # <<<<<<<<<<<<<< - * else: - * if len(self.view.format) == 1: - */ - __pyx_tuple__10 = PyTuple_Pack(1, __pyx_kp_s_Unable_to_convert_item_to_object); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(1, 495, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__10); - __Pyx_GIVEREF(__pyx_tuple__10); - - /* "View.MemoryView":520 - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<< - * - * if flags & PyBUF_ND: - */ - __pyx_tuple__11 = PyTuple_Pack(1, __pyx_kp_s_Cannot_create_writable_memory_vi); if 
(unlikely(!__pyx_tuple__11)) __PYX_ERR(1, 520, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__11); - __Pyx_GIVEREF(__pyx_tuple__11); - - /* "View.MemoryView":570 - * if self.view.strides == NULL: - * - * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<< - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - */ - __pyx_tuple__12 = PyTuple_Pack(1, __pyx_kp_s_Buffer_view_does_not_expose_stri); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(1, 570, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__12); - __Pyx_GIVEREF(__pyx_tuple__12); - - /* "View.MemoryView":577 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - */ - __pyx_tuple__13 = PyTuple_New(1); if (unlikely(!__pyx_tuple__13)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__13); - __Pyx_INCREF(__pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_int_neg_1); - PyTuple_SET_ITEM(__pyx_tuple__13, 0, __pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_tuple__13); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__14 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__14); - __Pyx_GIVEREF(__pyx_tuple__14); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__15 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__15); - __Pyx_GIVEREF(__pyx_tuple__15); - - /* "View.MemoryView":682 - * if item is Ellipsis: - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<< - * seen_ellipsis = True - * else: - */ - __pyx_slice__16 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__16)) __PYX_ERR(1, 682, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - - /* "View.MemoryView":703 - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__17 = PyTuple_Pack(1, __pyx_kp_s_Indirect_dimensions_not_supporte); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(1, 703, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__17); - __Pyx_GIVEREF(__pyx_tuple__17); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__18 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__18)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__18); - __Pyx_GIVEREF(__pyx_tuple__18); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # 
<<<<<<<<<<<<<< - */ - __pyx_tuple__19 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__19); - __Pyx_GIVEREF(__pyx_tuple__19); - - /* "View.MemoryView":286 - * return self.name - * - * cdef generic = Enum("") # <<<<<<<<<<<<<< - * cdef strided = Enum("") # default - * cdef indirect = Enum("") - */ - __pyx_tuple__20 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct_or_indirect); if (unlikely(!__pyx_tuple__20)) __PYX_ERR(1, 286, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__20); - __Pyx_GIVEREF(__pyx_tuple__20); - - /* "View.MemoryView":287 - * - * cdef generic = Enum("") - * cdef strided = Enum("") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("") - * - */ - __pyx_tuple__21 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(1, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__21); - __Pyx_GIVEREF(__pyx_tuple__21); - - /* "View.MemoryView":288 - * cdef generic = Enum("") - * cdef strided = Enum("") # default - * cdef indirect = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__22 = PyTuple_Pack(1, __pyx_kp_s_strided_and_indirect); if (unlikely(!__pyx_tuple__22)) __PYX_ERR(1, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__22); - __Pyx_GIVEREF(__pyx_tuple__22); - - /* "View.MemoryView":291 - * - * - * cdef contiguous = Enum("") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("") - * - */ - __pyx_tuple__23 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_direct); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(1, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__23); - __Pyx_GIVEREF(__pyx_tuple__23); - - /* "View.MemoryView":292 - * - * cdef contiguous = Enum("") - * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__24 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_indirect); if (unlikely(!__pyx_tuple__24)) __PYX_ERR(1, 292, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__24); - __Pyx_GIVEREF(__pyx_tuple__24); - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_tuple__25 = PyTuple_Pack(5, __pyx_n_s_pyx_type, __pyx_n_s_pyx_checksum, __pyx_n_s_pyx_state, __pyx_n_s_pyx_PickleError, __pyx_n_s_pyx_result); if (unlikely(!__pyx_tuple__25)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__25); - __Pyx_GIVEREF(__pyx_tuple__25); - __pyx_codeobj__26 = (PyObject*)__Pyx_PyCode_New(3, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__25, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_pyx_unpickle_Enum, 1, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__26)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - /* InitThreads.init */ - #ifdef WITH_THREAD -PyEval_InitThreads(); -#endif - -if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error) - - if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_184977713 = PyInt_FromLong(184977713L); if (unlikely(!__pyx_int_184977713)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_neg_1 = 
PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int __Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - generic = Py_None; Py_INCREF(Py_None); - strided = Py_None; Py_INCREF(Py_None); - indirect = Py_None; Py_INCREF(Py_None); - contiguous = Py_None; Py_INCREF(Py_None); - indirect_contiguous = Py_None; Py_INCREF(Py_None); - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - __pyx_vtabptr_array = &__pyx_vtable_array; - __pyx_vtable_array.get_memview = (PyObject *(*)(struct __pyx_array_obj *))__pyx_array_get_memview; - if (PyType_Ready(&__pyx_type___pyx_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_array.tp_print = 0; - #endif - if (__Pyx_SetVtable(__pyx_type___pyx_array.tp_dict, __pyx_vtabptr_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error) - __pyx_array_type = &__pyx_type___pyx_array; - if (PyType_Ready(&__pyx_type___pyx_MemviewEnum) < 0) __PYX_ERR(1, 279, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_MemviewEnum.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_MemviewEnum.tp_dictoffset && __pyx_type___pyx_MemviewEnum.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_MemviewEnum.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_MemviewEnum) < 0) __PYX_ERR(1, 279, __pyx_L1_error) - __pyx_MemviewEnum_type = &__pyx_type___pyx_MemviewEnum; - __pyx_vtabptr_memoryview = &__pyx_vtable_memoryview; - __pyx_vtable_memoryview.get_item_pointer = (char *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_get_item_pointer; - __pyx_vtable_memoryview.is_slice = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_is_slice; - __pyx_vtable_memoryview.setitem_slice_assignment = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_slice_assignment; - 
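/* The remaining slots below complete the memoryview vtable; each entry is a
   raw C function pointer, so cdef-level calls on memoryview objects dispatch
   directly instead of going through Python attribute lookup. */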
__pyx_vtable_memoryview.setitem_slice_assign_scalar = (PyObject *(*)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_setitem_slice_assign_scalar; - __pyx_vtable_memoryview.setitem_indexed = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_indexed; - __pyx_vtable_memoryview.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryview_convert_item_to_object; - __pyx_vtable_memoryview.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryview_assign_item_from_object; - if (PyType_Ready(&__pyx_type___pyx_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_memoryview.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_memoryview.tp_dictoffset && __pyx_type___pyx_memoryview.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_memoryview.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_SetVtable(__pyx_type___pyx_memoryview.tp_dict, __pyx_vtabptr_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error) - __pyx_memoryview_type = &__pyx_type___pyx_memoryview; - __pyx_vtabptr__memoryviewslice = &__pyx_vtable__memoryviewslice; - __pyx_vtable__memoryviewslice.__pyx_base = *__pyx_vtabptr_memoryview; - __pyx_vtable__memoryviewslice.__pyx_base.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryviewslice_convert_item_to_object; - __pyx_vtable__memoryviewslice.__pyx_base.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryviewslice_assign_item_from_object; - __pyx_type___pyx_memoryviewslice.tp_base = __pyx_memoryview_type; - if (PyType_Ready(&__pyx_type___pyx_memoryviewslice) < 0) __PYX_ERR(1, 965, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_memoryviewslice.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_memoryviewslice.tp_dictoffset && __pyx_type___pyx_memoryviewslice.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_memoryviewslice.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_SetVtable(__pyx_type___pyx_memoryviewslice.tp_dict, __pyx_vtabptr__memoryviewslice) < 0) __PYX_ERR(1, 965, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_memoryviewslice) < 0) __PYX_ERR(1, 965, __pyx_L1_error) - __pyx_memoryviewslice_type = &__pyx_type___pyx_memoryviewslice; - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#ifndef 
CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC initcore(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC initcore(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_core(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_core(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? -1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) { - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { - result = PyDict_SetItemString(moddict, to_name, value); - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_core(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - PyObject *__pyx_t_1 = NULL; - static PyThread_type_lock __pyx_t_2[8]; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'core' has already been imported. 
Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_core(void)", 0); - if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - #ifdef WITH_THREAD /* Python build with threading support? */ - PyEval_InitThreads(); - #endif - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("core", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - #endif - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - /*--- Initialize various global constants etc. 
---*/ - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_monotonic_align__core) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name_2, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "monotonic_align.core")) { - if (unlikely(PyDict_SetItemString(modules, "monotonic_align.core", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - if (unlikely(__Pyx_modinit_type_init_code() < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - __pyx_k_ = (-1e9); - - /* "monotonic_align/core.pyx":1 - * cimport cython # <<<<<<<<<<<<<< - * from cython.parallel import prange - * - */ - __pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":209 - * info.obj = self - * - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * def __dealloc__(array self): - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_array_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_array_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 209, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_array_type); - - /* "View.MemoryView":286 - * return self.name - * - * cdef generic = Enum("") # <<<<<<<<<<<<<< - * cdef strided = Enum("") # default - * cdef indirect = Enum("") - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__20, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 286, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(generic); - __Pyx_DECREF_SET(generic, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":287 - * - * cdef generic = Enum("") - * cdef strided = Enum("") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("") - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__21, NULL); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(1, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(strided); - __Pyx_DECREF_SET(strided, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":288 - * cdef generic = Enum("") - * cdef strided = Enum("") # default - * cdef indirect = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__22, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(indirect); - __Pyx_DECREF_SET(indirect, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":291 - * - * - * cdef contiguous = Enum("") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("") - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__23, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(contiguous); - __Pyx_DECREF_SET(contiguous, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":292 - * - * cdef contiguous = Enum("") - * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__24, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 292, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(indirect_contiguous); - __Pyx_DECREF_SET(indirect_contiguous, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":316 - * - * DEF THREAD_LOCKS_PREALLOCATED = 8 - * cdef int __pyx_memoryview_thread_locks_used = 0 # <<<<<<<<<<<<<< - * cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [ - * PyThread_allocate_lock(), - */ - __pyx_memoryview_thread_locks_used = 0; - - /* "View.MemoryView":317 - * DEF THREAD_LOCKS_PREALLOCATED = 8 - * cdef int __pyx_memoryview_thread_locks_used = 0 - * cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [ # <<<<<<<<<<<<<< - * PyThread_allocate_lock(), - * PyThread_allocate_lock(), - */ - __pyx_t_2[0] = PyThread_allocate_lock(); - __pyx_t_2[1] = PyThread_allocate_lock(); - __pyx_t_2[2] = PyThread_allocate_lock(); - __pyx_t_2[3] = PyThread_allocate_lock(); - __pyx_t_2[4] = PyThread_allocate_lock(); - __pyx_t_2[5] = PyThread_allocate_lock(); - __pyx_t_2[6] = PyThread_allocate_lock(); - __pyx_t_2[7] = PyThread_allocate_lock(); - memcpy(&(__pyx_memoryview_thread_locks[0]), __pyx_t_2, sizeof(__pyx_memoryview_thread_locks[0]) * (8)); - - /* "View.MemoryView":549 - * info.obj = self - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_memoryview_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 549, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_memoryview_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 549, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_memoryview_type); - - /* "View.MemoryView":995 - * return self.from_object - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_memoryview_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 995, __pyx_L1_error) - 
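/* Same capsule pattern as for the array and memoryview types above: the raw
   getbuffer C function is published in the type dict under __pyx_getbuffer so
   Cython-generated code can reach the buffer implementation without a
   Python-level call. */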
__Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_memoryviewslice_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 995, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_memoryviewslice_type); - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum, NULL, __pyx_n_s_View_MemoryView); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_Enum, __pyx_t_1) < 0) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - if (__pyx_m) { - if (__pyx_d) { - __Pyx_AddTraceback("init monotonic_align.core", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - Py_CLEAR(__pyx_m); - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init monotonic_align.core"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 0 : -1; - #elif PY_MAJOR_VERSION >= 3 - return __pyx_m; - #else - return; - #endif -} - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); - if (unlikely(!result)) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", PyString_AS_STRING(name)); -#endif - } - return result; -} - -/* MemviewSliceInit */ -static int -__Pyx_init_memviewslice(struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference) -{ - __Pyx_RefNannyDeclarations - int i, retval=-1; - Py_buffer *buf = &memview->view; - __Pyx_RefNannySetupContext("init_memviewslice", 0); - if (unlikely(memviewslice->memview || memviewslice->data)) { - PyErr_SetString(PyExc_ValueError, - "memviewslice is already initialized!"); - goto fail; - } - if (buf->strides) { - for (i = 0; i < ndim; i++) { - memviewslice->strides[i] = buf->strides[i]; - } - } else { - Py_ssize_t stride = 
buf->itemsize; - for (i = ndim - 1; i >= 0; i--) { - memviewslice->strides[i] = stride; - stride *= buf->shape[i]; - } - } - for (i = 0; i < ndim; i++) { - memviewslice->shape[i] = buf->shape[i]; - if (buf->suboffsets) { - memviewslice->suboffsets[i] = buf->suboffsets[i]; - } else { - memviewslice->suboffsets[i] = -1; - } - } - memviewslice->memview = memview; - memviewslice->data = (char *)buf->buf; - if (__pyx_add_acquisition_count(memview) == 0 && !memview_is_new_reference) { - Py_INCREF(memview); - } - retval = 0; - goto no_fail; -fail: - memviewslice->memview = 0; - memviewslice->data = 0; - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} -#ifndef Py_NO_RETURN -#define Py_NO_RETURN -#endif -static void __pyx_fatalerror(const char *fmt, ...) Py_NO_RETURN { - va_list vargs; - char msg[200]; -#ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, fmt); -#else - va_start(vargs); -#endif - vsnprintf(msg, 200, fmt, vargs); - va_end(vargs); - Py_FatalError(msg); -} -static CYTHON_INLINE int -__pyx_add_acquisition_count_locked(__pyx_atomic_int *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)++; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE int -__pyx_sub_acquisition_count_locked(__pyx_atomic_int *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)--; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE void -__Pyx_INC_MEMVIEW(__Pyx_memviewslice *memslice, int have_gil, int lineno) -{ - int first_time; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) - return; - if (unlikely(__pyx_get_slice_count(memview) < 0)) - __pyx_fatalerror("Acquisition count is %d (line %d)", - __pyx_get_slice_count(memview), lineno); - first_time = __pyx_add_acquisition_count(memview) == 0; - if (unlikely(first_time)) { - if (have_gil) { - Py_INCREF((PyObject *) memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_INCREF((PyObject *) memview); - PyGILState_Release(_gilstate); - } - } -} -static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *memslice, - int have_gil, int lineno) { - int last_time; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) { - memslice->memview = NULL; - return; - } - if (unlikely(__pyx_get_slice_count(memview) <= 0)) - __pyx_fatalerror("Acquisition count is %d (line %d)", - __pyx_get_slice_count(memview), lineno); - last_time = __pyx_sub_acquisition_count(memview) == 1; - memslice->data = NULL; - if (unlikely(last_time)) { - if (have_gil) { - Py_CLEAR(memslice->memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_CLEAR(memslice->memview); - PyGILState_Release(_gilstate); - } - } else { - memslice->memview = NULL; - } -} - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" 
CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? "" : "s", num_found); -} - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - while (PyDict_Next(kwds, &pos, &key, &value)) { - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; - continue; - } - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = (**name == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**name, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 
1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION < 3 - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - return -1; -} - -/* None */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname) { - PyErr_Format(PyExc_UnboundLocalError, "local variable '%s' referenced before assignment", varname); -} - -/* ArgTypeTest */ -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact) -{ - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - else if (exact) { - #if PY_MAJOR_VERSION == 2 - if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1; - #endif - } - else { - if (likely(__Pyx_TypeCheck(obj, type))) return 1; - } - PyErr_Format(PyExc_TypeError, - "Argument '%.200s' has incorrect type (expected %.200s, got %.200s)", - name, type->tp_name, Py_TYPE(obj)->tp_name); - return 0; -} - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = func->ob_type->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyErrFetchRestore */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -} -#endif - -/* RaiseException */ -#if PY_MAJOR_VERSION < 3 -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, - CYTHON_UNUSED PyObject *cause) { - __Pyx_PyThreadState_declare - Py_XINCREF(type); - if (!value || value == Py_None) - value = NULL; - else - Py_INCREF(value); - if (!tb || tb == Py_None) - tb = NULL; - else { - Py_INCREF(tb); - if (!PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto raise_error; 
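/* Python 2 raise semantics handled below: a class object may be raised
   directly and is normalized to an instance via PyErr_NormalizeException,
   while raising an instance together with a separate value is rejected,
   mirroring the interpreter's own checks. */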
- } - } - if (PyType_Check(type)) { -#if CYTHON_COMPILING_IN_PYPY - if (!value) { - Py_INCREF(Py_None); - value = Py_None; - } -#endif - PyErr_NormalizeException(&type, &value, &tb); - } else { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto raise_error; - } - value = type; - type = (PyObject*) Py_TYPE(type); - Py_INCREF(type); - if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto raise_error; - } - } - __Pyx_PyThreadState_assign - __Pyx_ErrRestore(type, value, tb); - return; -raise_error: - Py_XDECREF(value); - Py_XDECREF(type); - Py_XDECREF(tb); - return; -} -#else -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - PyObject* owned_instance = NULL; - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - if (PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } else if (PyExceptionClass_Check(type)) { - PyObject *instance_class = NULL; - if (value && PyExceptionInstance_Check(value)) { - instance_class = (PyObject*) Py_TYPE(value); - if (instance_class != type) { - int is_subclass = PyObject_IsSubclass(instance_class, type); - if (!is_subclass) { - instance_class = NULL; - } else if (unlikely(is_subclass == -1)) { - goto bad; - } else { - type = instance_class; - } - } - } - if (!instance_class) { - PyObject *args; - if (!value) - args = PyTuple_New(0); - else if (PyTuple_Check(value)) { - Py_INCREF(value); - args = value; - } else - args = PyTuple_Pack(1, value); - if (!args) - goto bad; - owned_instance = PyObject_Call(type, args, NULL); - Py_DECREF(args); - if (!owned_instance) - goto bad; - value = owned_instance; - if (!PyExceptionInstance_Check(value)) { - PyErr_Format(PyExc_TypeError, - "calling %R should have returned an instance of " - "BaseException, not %R", - type, Py_TYPE(value)); - goto bad; - } - } - } else { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - if (cause) { - PyObject *fixed_cause; - if (cause == Py_None) { - fixed_cause = NULL; - } else if (PyExceptionClass_Check(cause)) { - fixed_cause = PyObject_CallObject(cause, NULL); - if (fixed_cause == NULL) - goto bad; - } else if (PyExceptionInstance_Check(cause)) { - fixed_cause = cause; - Py_INCREF(fixed_cause); - } else { - PyErr_SetString(PyExc_TypeError, - "exception causes must derive from " - "BaseException"); - goto bad; - } - PyException_SetCause(value, fixed_cause); - } - PyErr_SetObject(type, value); - if (tb) { -#if CYTHON_COMPILING_IN_PYPY - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#else - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* PyCFunctionFastCall */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject * 
__Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) { - PyCFunctionObject *func = (PyCFunctionObject*)func_obj; - PyCFunction meth = PyCFunction_GET_FUNCTION(func); - PyObject *self = PyCFunction_GET_SELF(func); - int flags = PyCFunction_GET_FLAGS(func); - assert(PyCFunction_Check(func)); - assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))); - assert(nargs >= 0); - assert(nargs == 0 || args != NULL); - /* _PyCFunction_FastCallDict() must not be called with an exception set, - because it may clear it (directly or indirectly) and so the - caller loses its exception */ - assert(!PyErr_Occurred()); - if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) { - return (*((__Pyx_PyCFunctionFastWithKeywords)(void*)meth)) (self, args, nargs, NULL); - } else { - return (*((__Pyx_PyCFunctionFast)(void*)meth)) (self, args, nargs); - } -} -#endif - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. - */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? 
PyDict_Size(kwargs) : 0; - if (Py_EnterRecursiveCall((char*)" while calling a Python object")) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif -#endif - -/* PyObjectCall2Args */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) { - PyObject *args, *result = NULL; - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyFunction_FastCall(function, args, 2); - } - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyCFunction_FastCall(function, args, 2); - } - #endif - args = PyTuple_New(2); - if (unlikely(!args)) goto done; - Py_INCREF(arg1); - PyTuple_SET_ITEM(args, 0, arg1); - Py_INCREF(arg2); - PyTuple_SET_ITEM(args, 1, arg2); - Py_INCREF(function); - result = __Pyx_PyObject_Call(function, args, NULL); - Py_DECREF(args); - Py_DECREF(function); -done: - return result; -} - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = PyCFunction_GET_FUNCTION(func); - self = PyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallOneArg */ -#if CYTHON_COMPILING_IN_CPYTHON -static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_New(1); - if (unlikely(!args)) return NULL; - Py_INCREF(arg); - PyTuple_SET_ITEM(args, 0, arg); - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); 
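/* Fallback path: the lone argument was packed into a fresh 1-tuple for
   tp_call; the dispatcher defined next tries the PyFunction fastcall and
   METH_O routes first and only lands here when neither applies. */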
- return result; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { -#if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCall(func, &arg, 1); - } -#endif - if (likely(PyCFunction_Check(func))) { - if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { - return __Pyx_PyObject_CallMethO(func, arg); -#if CYTHON_FAST_PYCCALL - } else if (PyCFunction_GET_FLAGS(func) & METH_FASTCALL) { - return __Pyx_PyCFunction_FastCall(func, &arg, 1); -#endif - } - } - return __Pyx__PyObject_CallOneArg(func, arg); -} -#else -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_Pack(1, arg); - if (unlikely(!args)) return NULL; - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); - return result; -} -#endif - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* None */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t a, Py_ssize_t b) { - Py_ssize_t q = a / b; - Py_ssize_t r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* GetAttr */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) { -#if CYTHON_USE_TYPE_SLOTS -#if PY_MAJOR_VERSION >= 3 - if (likely(PyUnicode_Check(n))) -#else - if (likely(PyString_Check(n))) -#endif - return __Pyx_PyObject_GetAttrStr(o, n); -#endif - return PyObject_GetAttr(o, n); -} - -/* GetItemInt */ -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (!j) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyList_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyTuple_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); - if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { - PyObject *r = PyList_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } - else if (PyTuple_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? 
i : i + PyTuple_GET_SIZE(o); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } else { - PySequenceMethods *m = Py_TYPE(o)->tp_as_sequence; - if (likely(m && m->sq_item)) { - if (wraparound && unlikely(i < 0) && likely(m->sq_length)) { - Py_ssize_t l = m->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return NULL; - PyErr_Clear(); - } - } - return m->sq_item(o, i); - } - } -#else - if (is_list || PySequence_Check(o)) { - return PySequence_GetItem(o, i); - } -#endif - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -} - -/* ObjectGetItem */ -#if CYTHON_USE_TYPE_SLOTS -static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject* index) { - PyObject *runerr; - Py_ssize_t key_value; - PySequenceMethods *m = Py_TYPE(obj)->tp_as_sequence; - if (unlikely(!(m && m->sq_item))) { - PyErr_Format(PyExc_TypeError, "'%.200s' object is not subscriptable", Py_TYPE(obj)->tp_name); - return NULL; - } - key_value = __Pyx_PyIndex_AsSsize_t(index); - if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) { - return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1); - } - if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) { - PyErr_Clear(); - PyErr_Format(PyExc_IndexError, "cannot fit '%.200s' into an index-sized integer", Py_TYPE(index)->tp_name); - } - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key) { - PyMappingMethods *m = Py_TYPE(obj)->tp_as_mapping; - if (likely(m && m->mp_subscript)) { - return m->mp_subscript(obj, key); - } - return __Pyx_PyObject_GetIndex(obj, key); -} -#endif - -/* decode_c_string */ -static CYTHON_INLINE PyObject* __Pyx_decode_c_string( - const char* cstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) { - Py_ssize_t length; - if (unlikely((start < 0) | (stop < 0))) { - size_t slen = strlen(cstring); - if (unlikely(slen > (size_t) PY_SSIZE_T_MAX)) { - PyErr_SetString(PyExc_OverflowError, - "c-string too long to convert to Python"); - return NULL; - } - length = (Py_ssize_t) slen; - if (start < 0) { - start += length; - if (start < 0) - start = 0; - } - if (stop < 0) - stop += length; - } - if (unlikely(stop <= start)) - return __Pyx_NewRef(__pyx_empty_unicode); - length = stop - start; - cstring += start; - if (decode_func) { - return decode_func(cstring, length, errors); - } else { - return PyUnicode_Decode(cstring, length, encoding, errors); - } -} - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; icurexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; - if (unlikely(PyTuple_Check(err))) - return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - return __Pyx_PyErr_GivenExceptionMatches(exc_type, err); -} -#endif - -/* GetAttr3 */ -static PyObject *__Pyx_GetAttr3Default(PyObject *d) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (unlikely(!__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - return NULL; - __Pyx_PyErr_Clear(); - Py_INCREF(d); - return d; -} -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, PyObject *d) { - PyObject *r = __Pyx_GetAttr(o, n); - return (likely(r)) ? 
r : __Pyx_GetAttr3Default(d); -} - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? 
"" : "s"); -} - -/* RaiseNoneIterError */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); -} - -/* ExtTypeTest */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - if (likely(__Pyx_TypeCheck(obj, type))) - return 1; - PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s", - Py_TYPE(obj)->tp_name, type->tp_name); - return 0; -} - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - #endif - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -#endif - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - PyObject *local_type, *local_value, *local_tb; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = 
exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* SwapException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = *type; - exc_info->exc_value = *value; - exc_info->exc_traceback = *tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = *type; - tstate->exc_value = *value; - tstate->exc_traceback = *tb; - #endif - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); - PyErr_SetExcInfo(*type, *value, *tb); - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#endif - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *empty_list = 0; - PyObject *module = 0; - PyObject *global_dict = 0; - PyObject *empty_dict = 0; - PyObject *list; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); - if (!py_import) - goto bad; - #endif - if (from_list) - list = from_list; - else { - empty_list = PyList_New(0); - if (!empty_list) - goto bad; - list = empty_list; - } - global_dict = PyModule_GetDict(__pyx_m); - if (!global_dict) - goto bad; - empty_dict = PyDict_New(); - if (!empty_dict) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) { - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, 1); - if (!module) { - if (!PyErr_ExceptionMatches(PyExc_ImportError)) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (!py_level) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, global_dict, empty_dict, list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, level); - #endif - } - } -bad: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - Py_XDECREF(empty_list); - Py_XDECREF(empty_dict); - return module; -} - -/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = a->tp_base; - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = 
a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -#if PY_MAJOR_VERSION == 2 -static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { - PyObject *exception, *value, *tb; - int res; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&exception, &value, &tb); - res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0; - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - if (!res) { - res = PyObject_IsSubclass(err, exc_type2); - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - } - __Pyx_ErrRestore(exception, value, tb); - return res; -} -#else -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0; - if (!res) { - res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } - return res; -} -#endif -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } -#endif - for (i=0; i<n; i++) { - PyObject *t = PyTuple_GET_ITEM(tuple, i); - #if PY_MAJOR_VERSION < 3 - if (likely(exc_type == t)) return 1; - #endif - if (likely(PyExceptionClass_Check(t))) { - if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1; - } else { - } - } - return 0; -} -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) { - if (likely(err == exc_type)) return 1; - if (likely(PyExceptionClass_Check(err))) { - if (likely(PyExceptionClass_Check(exc_type))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type); - } else if (likely(PyTuple_Check(exc_type))) { - return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type); - } else { - } - } - return PyErr_GivenExceptionMatches(err, exc_type); -} -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) { - assert(PyExceptionClass_Check(exc_type1)); - assert(PyExceptionClass_Check(exc_type2)); - if (likely(err == exc_type1 || err == exc_type2)) return 1; - if (likely(PyExceptionClass_Check(err))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2); - } - return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2)); -} -#endif - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, int inplace, int zerodivision_check) { - (void)inplace; - (void)zerodivision_check; - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long x; - long a = PyInt_AS_LONG(op1); - x = (long)((unsigned long)a + b); - if (likely((x^a) >= 0 || (x^b) >= 0)) - return PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - const digit* digits = ((PyLongObject*)op1)->ob_digit; - const Py_ssize_t size = Py_SIZE(op1); - if (likely(__Pyx_sst_abs(size) <= 1)) { - a = likely(size) ?
digits[0] : 0; - if (size == -1) a = -a; - } else { - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - } - x = a + b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla + llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; - double a = PyFloat_AS_DOUBLE(op1); - double result; - PyFPE_START_PROTECT("add", return NULL) - 
result = ((double)a) + (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2); -} -#endif - -/* None */ -static CYTHON_INLINE long __Pyx_div_long(long a, long b) { - long q = a / b; - long r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* ImportFrom */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* HasAttr */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *o, PyObject *n) { - PyObject *r; - if (unlikely(!__Pyx_PyBaseString_Check(n))) { - PyErr_SetString(PyExc_TypeError, - "hasattr(): attribute name must be string"); - return -1; - } - r = __Pyx_GetAttr(o, n); - if (unlikely(!r)) { - PyErr_Clear(); - return 0; - } else { - Py_DECREF(r); - return 1; - } -} - -/* PyObject_GenericGetAttrNoDict */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) { - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'%.50s' object has no attribute '%U'", - tp->tp_name, attr_name); -#else - "'%.50s' object has no attribute '%.400s'", - tp->tp_name, PyString_AS_STRING(attr_name)); -#endif - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) { - PyObject *descr; - PyTypeObject *tp = Py_TYPE(obj); - if (unlikely(!PyString_Check(attr_name))) { - return PyObject_GenericGetAttr(obj, attr_name); - } - assert(!tp->tp_dictoffset); - descr = _PyType_Lookup(tp, attr_name); - if (unlikely(!descr)) { - return __Pyx_RaiseGenericGetAttributeError(tp, attr_name); - } - Py_INCREF(descr); - #if PY_MAJOR_VERSION < 3 - if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS))) - #endif - { - descrgetfunc f = Py_TYPE(descr)->tp_descr_get; - if (unlikely(f)) { - PyObject *res = f(descr, obj, (PyObject *)tp); - Py_DECREF(descr); - return res; - } - } - return descr; -} -#endif - -/* PyObject_GenericGetAttr */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name) { - if (unlikely(Py_TYPE(obj)->tp_dictoffset)) { - return PyObject_GenericGetAttr(obj, attr_name); - } - return __Pyx_PyObject_GenericGetAttrNoDict(obj, attr_name); -} -#endif - -/* SetVTable */ -static int __Pyx_SetVtable(PyObject *dict, void *vtable) { -#if PY_VERSION_HEX >= 0x02070000 - PyObject *ob = PyCapsule_New(vtable, 0, 0); -#else - PyObject *ob = PyCObject_FromVoidPtr(vtable, 0); -#endif - if (!ob) - goto bad; - if (PyDict_SetItem(dict, __pyx_n_s_pyx_vtable, ob) < 0) - goto bad; - Py_DECREF(ob); - return 0; -bad: - Py_XDECREF(ob); - return -1; -} - -/* PyObjectGetAttrStrNoError */ -static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (likely(__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - __Pyx_PyErr_Clear(); -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name) { - PyObject *result; -#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_TYPE_SLOTS 
&& PY_VERSION_HEX >= 0x030700B1 - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro == PyObject_GenericGetAttr)) { - return _PyObject_GenericGetAttrWithDict(obj, attr_name, NULL, 1); - } -#endif - result = __Pyx_PyObject_GetAttrStr(obj, attr_name); - if (unlikely(!result)) { - __Pyx_PyObject_GetAttrStr_ClearAttributeError(); - } - return result; -} - -/* SetupReduce */ -static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) { - int ret; - PyObject *name_attr; - name_attr = __Pyx_PyObject_GetAttrStr(meth, __pyx_n_s_name_2); - if (likely(name_attr)) { - ret = PyObject_RichCompareBool(name_attr, name, Py_EQ); - } else { - ret = -1; - } - if (unlikely(ret < 0)) { - PyErr_Clear(); - ret = 0; - } - Py_XDECREF(name_attr); - return ret; -} -static int __Pyx_setup_reduce(PyObject* type_obj) { - int ret = 0; - PyObject *object_reduce = NULL; - PyObject *object_reduce_ex = NULL; - PyObject *reduce = NULL; - PyObject *reduce_ex = NULL; - PyObject *reduce_cython = NULL; - PyObject *setstate = NULL; - PyObject *setstate_cython = NULL; -#if CYTHON_USE_PYTYPE_LOOKUP - if (_PyType_Lookup((PyTypeObject*)type_obj, __pyx_n_s_getstate)) goto __PYX_GOOD; -#else - if (PyObject_HasAttr(type_obj, __pyx_n_s_getstate)) goto __PYX_GOOD; -#endif -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#else - object_reduce_ex = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#endif - reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_ex); if (unlikely(!reduce_ex)) goto __PYX_BAD; - if (reduce_ex == object_reduce_ex) { -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#else - object_reduce = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#endif - reduce = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce); if (unlikely(!reduce)) goto __PYX_BAD; - if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, __pyx_n_s_reduce_cython)) { - reduce_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_reduce_cython); - if (likely(reduce_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce, reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (reduce == object_reduce || PyErr_Occurred()) { - goto __PYX_BAD; - } - setstate = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate); - if (!setstate) PyErr_Clear(); - if (!setstate || __Pyx_setup_reduce_is_named(setstate, __pyx_n_s_setstate_cython)) { - setstate_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_setstate_cython); - if (likely(setstate_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate, setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (!setstate || PyErr_Occurred()) { - goto __PYX_BAD; - } - } - PyType_Modified((PyTypeObject*)type_obj); - } - } - goto __PYX_GOOD; -__PYX_BAD: - if (!PyErr_Occurred()) - PyErr_Format(PyExc_RuntimeError, "Unable to initialize pickling for %s", ((PyTypeObject*)type_obj)->tp_name); - ret = -1; -__PYX_GOOD: 
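- /* NOTE: shared exit path for __Pyx_setup_reduce(). When CYTHON_USE_PYTYPE_LOOKUP is enabled, object_reduce and object_reduce_ex are borrowed references obtained via _PyType_Lookup(), so the cleanup below releases them only in the non-lookup build, where they were fetched as new references through attribute access. */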
-#if !CYTHON_USE_PYTYPE_LOOKUP - Py_XDECREF(object_reduce); - Py_XDECREF(object_reduce_ex); -#endif - Py_XDECREF(reduce); - Py_XDECREF(reduce_ex); - Py_XDECREF(reduce_cython); - Py_XDECREF(setstate); - Py_XDECREF(setstate_cython); - return ret; -} - -/* CLineInTraceback */ -#ifndef CYTHON_CLINE_IN_TRACEBACK -static int __Pyx_CLineForTraceback(CYTHON_NCP_UNUSED PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - if (unlikely(!__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); - if (use_cline_obj) { - use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True; - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - } - if (unlikely(!entries)) { - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - __pyx_code_cache.count = 1; - entries[0].code_line = code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - 
return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyObject *py_srcfile = 0; - PyObject *py_funcname = 0; - #if PY_MAJOR_VERSION < 3 - py_srcfile = PyString_FromString(filename); - #else - py_srcfile = PyUnicode_FromString(filename); - #endif - if (!py_srcfile) goto bad; - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - #else - py_funcname = PyUnicode_FromString(funcname); - #endif - } - if (!py_funcname) goto bad; - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); - Py_DECREF(py_funcname); - return py_code; -bad: - Py_XDECREF(py_srcfile); - Py_XDECREF(py_funcname); - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? -c_line : py_line); - if (!py_code) { - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) goto bad; - __pyx_insert_code_object(c_line ? 
-c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} - -#if PY_MAJOR_VERSION < 3 -static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) { - if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_array_type)) return __pyx_array_getbuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_memoryview_type)) return __pyx_memoryview_getbuffer(obj, view, flags); - PyErr_Format(PyExc_TypeError, "'%.200s' does not have the buffer interface", Py_TYPE(obj)->tp_name); - return -1; -} -static void __Pyx_ReleaseBuffer(Py_buffer *view) { - PyObject *obj = view->obj; - if (!obj) return; - if (PyObject_CheckBuffer(obj)) { - PyBuffer_Release(view); - return; - } - if ((0)) {} - view->obj = NULL; - Py_DECREF(obj); -} -#endif - - -/* MemviewSliceIsContig */ -static int -__pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim) -{ - int i, index, step, start; - Py_ssize_t itemsize = mvs.memview->view.itemsize; - if (order == 'F') { - step = 1; - start = 0; - } else { - step = -1; - start = ndim - 1; - } - for (i = 0; i < ndim; i++) { - index = start + step * i; - if (mvs.suboffsets[index] >= 0 || mvs.strides[index] != itemsize) - return 0; - itemsize *= mvs.shape[index]; - } - return 1; -} - -/* OverlappingSlices */ -static void -__pyx_get_array_memory_extents(__Pyx_memviewslice *slice, - void **out_start, void **out_end, - int ndim, size_t itemsize) -{ - char *start, *end; - int i; - start = end = slice->data; - for (i = 0; i < ndim; i++) { - Py_ssize_t stride = slice->strides[i]; - Py_ssize_t extent = slice->shape[i]; - if (extent == 0) { - *out_start = *out_end = start; - return; - } else { - if (stride > 0) - end += stride * (extent - 1); - else - start += stride * (extent - 1); - } - } - *out_start = start; - *out_end = end + itemsize; -} -static int -__pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize) -{ - void *start1, *end1, *start2, *end2; - __pyx_get_array_memory_extents(slice1, &start1, &end1, ndim, itemsize); - __pyx_get_array_memory_extents(slice2, &start2, &end2, ndim, itemsize); - return (start1 < end2) && (start2 < end1); -} - -/* Capsule */ -static CYTHON_INLINE PyObject * -__pyx_capsule_create(void *p, CYTHON_UNUSED const char *sig) -{ - PyObject *cobj; -#if PY_VERSION_HEX >= 0x02070000 - cobj = PyCapsule_New(p, sig, NULL); -#else - cobj = PyCObject_FromVoidPtr(p, NULL); -#endif - return cobj; -} - -/* IsLittleEndian */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void) -{ - union { - uint32_t u32; - uint8_t u8[4]; - } S; - S.u32 = 0x01020304; - return S.u8[0] == 4; -} - -/* BufferFormatCheck */ -static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type) { - stack[0].field = &ctx->root; - stack[0].parent_offset = 0; - ctx->root.type = type; - ctx->root.name = "buffer dtype"; - ctx->root.offset = 0; - ctx->head = stack; - ctx->head->field = &ctx->root; - ctx->fmt_offset = 0; - ctx->head->parent_offset = 0; - ctx->new_packmode = '@'; - ctx->enc_packmode = '@'; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->is_complex = 0; - ctx->is_valid_array = 0; - 
ctx->struct_alignment = 0; - while (type->typegroup == 'S') { - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = 0; - type = type->fields->type; - } -} -static int __Pyx_BufFmt_ParseNumber(const char** ts) { - int count; - const char* t = *ts; - if (*t < '0' || *t > '9') { - return -1; - } else { - count = *t++ - '0'; - while (*t >= '0' && *t <= '9') { - count *= 10; - count += *t++ - '0'; - } - } - *ts = t; - return count; -} -static int __Pyx_BufFmt_ExpectNumber(const char **ts) { - int number = __Pyx_BufFmt_ParseNumber(ts); - if (number == -1) - PyErr_Format(PyExc_ValueError,\ - "Does not understand character buffer dtype format string ('%c')", **ts); - return number; -} -static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) { - PyErr_Format(PyExc_ValueError, - "Unexpected format string character: '%c'", ch); -} -static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) { - switch (ch) { - case '?': return "'bool'"; - case 'c': return "'char'"; - case 'b': return "'signed char'"; - case 'B': return "'unsigned char'"; - case 'h': return "'short'"; - case 'H': return "'unsigned short'"; - case 'i': return "'int'"; - case 'I': return "'unsigned int'"; - case 'l': return "'long'"; - case 'L': return "'unsigned long'"; - case 'q': return "'long long'"; - case 'Q': return "'unsigned long long'"; - case 'f': return (is_complex ? "'complex float'" : "'float'"); - case 'd': return (is_complex ? "'complex double'" : "'double'"); - case 'g': return (is_complex ? "'complex long double'" : "'long double'"); - case 'T': return "a struct"; - case 'O': return "Python object"; - case 'P': return "a pointer"; - case 's': case 'p': return "a string"; - case 0: return "end"; - default: return "unparseable format string"; - } -} -static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return 2; - case 'i': case 'I': case 'l': case 'L': return 4; - case 'q': case 'Q': return 8; - case 'f': return (is_complex ? 8 : 4); - case 'd': return (is_complex ? 16 : 8); - case 'g': { - PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g').."); - return 0; - } - case 'O': case 'P': return sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(short); - case 'i': case 'I': return sizeof(int); - case 'l': case 'L': return sizeof(long); - #ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(PY_LONG_LONG); - #endif - case 'f': return sizeof(float) * (is_complex ? 2 : 1); - case 'd': return sizeof(double) * (is_complex ? 2 : 1); - case 'g': return sizeof(long double) * (is_complex ? 
2 : 1); - case 'O': case 'P': return sizeof(void*); - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -typedef struct { char c; short x; } __Pyx_st_short; -typedef struct { char c; int x; } __Pyx_st_int; -typedef struct { char c; long x; } __Pyx_st_long; -typedef struct { char c; float x; } __Pyx_st_float; -typedef struct { char c; double x; } __Pyx_st_double; -typedef struct { char c; long double x; } __Pyx_st_longdouble; -typedef struct { char c; void *x; } __Pyx_st_void_p; -#ifdef HAVE_LONG_LONG -typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong; -#endif -static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short); - case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int); - case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long); -#ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG); -#endif - case 'f': return sizeof(__Pyx_st_float) - sizeof(float); - case 'd': return sizeof(__Pyx_st_double) - sizeof(double); - case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double); - case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -/* These are for computing the padding at the end of the struct to align - on the first member of the struct. This will probably the same as above, - but we don't have any guarantees. - */ -typedef struct { short x; char c; } __Pyx_pad_short; -typedef struct { int x; char c; } __Pyx_pad_int; -typedef struct { long x; char c; } __Pyx_pad_long; -typedef struct { float x; char c; } __Pyx_pad_float; -typedef struct { double x; char c; } __Pyx_pad_double; -typedef struct { long double x; char c; } __Pyx_pad_longdouble; -typedef struct { void *x; char c; } __Pyx_pad_void_p; -#ifdef HAVE_LONG_LONG -typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong; -#endif -static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short); - case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int); - case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long); -#ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG); -#endif - case 'f': return sizeof(__Pyx_pad_float) - sizeof(float); - case 'd': return sizeof(__Pyx_pad_double) - sizeof(double); - case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double); - case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) { - switch (ch) { - case 'c': - return 'H'; - case 'b': case 'h': case 'i': - case 'l': case 'q': case 's': case 'p': - return 'I'; - case '?': case 'B': case 'H': case 'I': case 'L': case 'Q': - return 'U'; - case 'f': case 'd': case 'g': - return (is_complex ? 
'C' : 'R'); - case 'O': - return 'O'; - case 'P': - return 'P'; - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) { - if (ctx->head == NULL || ctx->head->field == &ctx->root) { - const char* expected; - const char* quote; - if (ctx->head == NULL) { - expected = "end"; - quote = ""; - } else { - expected = ctx->head->field->type->name; - quote = "'"; - } - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected %s%s%s but got %s", - quote, expected, quote, - __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex)); - } else { - __Pyx_StructField* field = ctx->head->field; - __Pyx_StructField* parent = (ctx->head - 1)->field; - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'", - field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex), - parent->type->name, field->name); - } -} -static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) { - char group; - size_t size, offset, arraysize = 1; - if (ctx->enc_type == 0) return 0; - if (ctx->head->field->type->arraysize[0]) { - int i, ndim = 0; - if (ctx->enc_type == 's' || ctx->enc_type == 'p') { - ctx->is_valid_array = ctx->head->field->type->ndim == 1; - ndim = 1; - if (ctx->enc_count != ctx->head->field->type->arraysize[0]) { - PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %zu", - ctx->head->field->type->arraysize[0], ctx->enc_count); - return -1; - } - } - if (!ctx->is_valid_array) { - PyErr_Format(PyExc_ValueError, "Expected %d dimensions, got %d", - ctx->head->field->type->ndim, ndim); - return -1; - } - for (i = 0; i < ctx->head->field->type->ndim; i++) { - arraysize *= ctx->head->field->type->arraysize[i]; - } - ctx->is_valid_array = 0; - ctx->enc_count = 1; - } - group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, ctx->is_complex); - do { - __Pyx_StructField* field = ctx->head->field; - __Pyx_TypeInfo* type = field->type; - if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') { - size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex); - } else { - size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex); - } - if (ctx->enc_packmode == '@') { - size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex); - size_t align_mod_offset; - if (align_at == 0) return -1; - align_mod_offset = ctx->fmt_offset % align_at; - if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset; - if (ctx->struct_alignment == 0) - ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type, - ctx->is_complex); - } - if (type->size != size || type->typegroup != group) { - if (type->typegroup == 'C' && type->fields != NULL) { - size_t parent_offset = ctx->head->parent_offset + field->offset; - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = parent_offset; - continue; - } - if ((type->typegroup == 'H' || group == 'H') && type->size == size) { - } else { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - } - offset = ctx->head->parent_offset + field->offset; - if (ctx->fmt_offset != offset) { - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch; next field is at offset %" CYTHON_FORMAT_SSIZE_T "d but %" CYTHON_FORMAT_SSIZE_T "d expected", - (Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset); - return -1; - } - ctx->fmt_offset += size; - if (arraysize) - ctx->fmt_offset += (arraysize - 1) * size; - --ctx->enc_count; - while (1) { - if 
(field == &ctx->root) { - ctx->head = NULL; - if (ctx->enc_count != 0) { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - break; - } - ctx->head->field = ++field; - if (field->type == NULL) { - --ctx->head; - field = ctx->head->field; - continue; - } else if (field->type->typegroup == 'S') { - size_t parent_offset = ctx->head->parent_offset + field->offset; - if (field->type->fields->type == NULL) continue; - field = field->type->fields; - ++ctx->head; - ctx->head->field = field; - ctx->head->parent_offset = parent_offset; - break; - } else { - break; - } - } - } while (ctx->enc_count); - ctx->enc_type = 0; - ctx->is_complex = 0; - return 0; -} -static PyObject * -__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp) -{ - const char *ts = *tsp; - int i = 0, number, ndim; - ++ts; - if (ctx->new_count != 1) { - PyErr_SetString(PyExc_ValueError, - "Cannot handle repeated arrays in format string"); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ndim = ctx->head->field->type->ndim; - while (*ts && *ts != ')') { - switch (*ts) { - case ' ': case '\f': case '\r': case '\n': case '\t': case '\v': continue; - default: break; - } - number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i]) - return PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %d", - ctx->head->field->type->arraysize[i], number); - if (*ts != ',' && *ts != ')') - return PyErr_Format(PyExc_ValueError, - "Expected a comma in format string, got '%c'", *ts); - if (*ts == ',') ts++; - i++; - } - if (i != ndim) - return PyErr_Format(PyExc_ValueError, "Expected %d dimension(s), got %d", - ctx->head->field->type->ndim, i); - if (!*ts) { - PyErr_SetString(PyExc_ValueError, - "Unexpected end of format string, expected ')'"); - return NULL; - } - ctx->is_valid_array = 1; - ctx->new_count = 1; - *tsp = ++ts; - return Py_None; -} -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) { - int got_Z = 0; - while (1) { - switch(*ts) { - case 0: - if (ctx->enc_type != 0 && ctx->head == NULL) { - __Pyx_BufFmt_RaiseExpected(ctx); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - if (ctx->head != NULL) { - __Pyx_BufFmt_RaiseExpected(ctx); - return NULL; - } - return ts; - case ' ': - case '\r': - case '\n': - ++ts; - break; - case '<': - if (!__Pyx_Is_Little_Endian()) { - PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler"); - return NULL; - } - ctx->new_packmode = '='; - ++ts; - break; - case '>': - case '!': - if (__Pyx_Is_Little_Endian()) { - PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler"); - return NULL; - } - ctx->new_packmode = '='; - ++ts; - break; - case '=': - case '@': - case '^': - ctx->new_packmode = *ts++; - break; - case 'T': - { - const char* ts_after_sub; - size_t i, struct_count = ctx->new_count; - size_t struct_alignment = ctx->struct_alignment; - ctx->new_count = 1; - ++ts; - if (*ts != '{') { - PyErr_SetString(PyExc_ValueError, "Buffer acquisition: Expected '{' after 'T'"); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - ctx->enc_count = 0; - ctx->struct_alignment = 0; - ++ts; - ts_after_sub = ts; - for (i = 0; i != struct_count; ++i) { - ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts); - if (!ts_after_sub) return NULL; - } - ts = ts_after_sub; - if 
(struct_alignment) ctx->struct_alignment = struct_alignment; - } - break; - case '}': - { - size_t alignment = ctx->struct_alignment; - ++ts; - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - if (alignment && ctx->fmt_offset % alignment) { - ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment); - } - } - return ts; - case 'x': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->fmt_offset += ctx->new_count; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->enc_packmode = ctx->new_packmode; - ++ts; - break; - case 'Z': - got_Z = 1; - ++ts; - if (*ts != 'f' && *ts != 'd' && *ts != 'g') { - __Pyx_BufFmt_RaiseUnexpectedChar('Z'); - return NULL; - } - CYTHON_FALLTHROUGH; - case '?': case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I': - case 'l': case 'L': case 'q': case 'Q': - case 'f': case 'd': case 'g': - case 'O': case 'p': - if ((ctx->enc_type == *ts) && (got_Z == ctx->is_complex) && - (ctx->enc_packmode == ctx->new_packmode) && (!ctx->is_valid_array)) { - ctx->enc_count += ctx->new_count; - ctx->new_count = 1; - got_Z = 0; - ++ts; - break; - } - CYTHON_FALLTHROUGH; - case 's': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_count = ctx->new_count; - ctx->enc_packmode = ctx->new_packmode; - ctx->enc_type = *ts; - ctx->is_complex = got_Z; - ++ts; - ctx->new_count = 1; - got_Z = 0; - break; - case ':': - ++ts; - while(*ts != ':') ++ts; - ++ts; - break; - case '(': - if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL; - break; - default: - { - int number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - ctx->new_count = (size_t)number; - } - } - } -} - -/* TypeInfoCompare */ - static int -__pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b) -{ - int i; - if (!a || !b) - return 0; - if (a == b) - return 1; - if (a->size != b->size || a->typegroup != b->typegroup || - a->is_unsigned != b->is_unsigned || a->ndim != b->ndim) { - if (a->typegroup == 'H' || b->typegroup == 'H') { - return a->size == b->size; - } else { - return 0; - } - } - if (a->ndim) { - for (i = 0; i < a->ndim; i++) - if (a->arraysize[i] != b->arraysize[i]) - return 0; - } - if (a->typegroup == 'S') { - if (a->flags != b->flags) - return 0; - if (a->fields || b->fields) { - if (!(a->fields && b->fields)) - return 0; - for (i = 0; a->fields[i].type && b->fields[i].type; i++) { - __Pyx_StructField *field_a = a->fields + i; - __Pyx_StructField *field_b = b->fields + i; - if (field_a->offset != field_b->offset || - !__pyx_typeinfo_cmp(field_a->type, field_b->type)) - return 0; - } - return !a->fields[i].type && !b->fields[i].type; - } - } - return 1; -} - -/* MemviewSliceValidateAndInit */ - static int -__pyx_check_strides(Py_buffer *buf, int dim, int ndim, int spec) -{ - if (buf->shape[dim] <= 1) - return 1; - if (buf->strides) { - if (spec & __Pyx_MEMVIEW_CONTIG) { - if (spec & (__Pyx_MEMVIEW_PTR|__Pyx_MEMVIEW_FULL)) { - if (unlikely(buf->strides[dim] != sizeof(void *))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly contiguous " - "in dimension %d.", dim); - goto fail; - } - } else if (unlikely(buf->strides[dim] != buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_FOLLOW) { - Py_ssize_t stride = buf->strides[dim]; - if (stride < 0) - stride = -stride; - if (unlikely(stride < buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and 
memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - } else { - if (unlikely(spec & __Pyx_MEMVIEW_CONTIG && dim != ndim - 1)) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not contiguous in " - "dimension %d", dim); - goto fail; - } else if (unlikely(spec & (__Pyx_MEMVIEW_PTR))) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not indirect in " - "dimension %d", dim); - goto fail; - } else if (unlikely(buf->suboffsets)) { - PyErr_SetString(PyExc_ValueError, - "Buffer exposes suboffsets but no strides"); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_check_suboffsets(Py_buffer *buf, int dim, CYTHON_UNUSED int ndim, int spec) -{ - if (spec & __Pyx_MEMVIEW_DIRECT) { - if (unlikely(buf->suboffsets && buf->suboffsets[dim] >= 0)) { - PyErr_Format(PyExc_ValueError, - "Buffer not compatible with direct access " - "in dimension %d.", dim); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_PTR) { - if (unlikely(!buf->suboffsets || (buf->suboffsets[dim] < 0))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly accessible " - "in dimension %d.", dim); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_verify_contig(Py_buffer *buf, int ndim, int c_or_f_flag) -{ - int i; - if (c_or_f_flag & __Pyx_IS_F_CONTIG) { - Py_ssize_t stride = 1; - for (i = 0; i < ndim; i++) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not fortran contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } else if (c_or_f_flag & __Pyx_IS_C_CONTIG) { - Py_ssize_t stride = 1; - for (i = ndim - 1; i >- 1; i--) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not C contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } - return 1; -fail: - return 0; -} -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj) -{ - struct __pyx_memoryview_obj *memview, *new_memview; - __Pyx_RefNannyDeclarations - Py_buffer *buf; - int i, spec = 0, retval = -1; - __Pyx_BufFmt_Context ctx; - int from_memoryview = __pyx_memoryview_check(original_obj); - __Pyx_RefNannySetupContext("ValidateAndInit_memviewslice", 0); - if (from_memoryview && __pyx_typeinfo_cmp(dtype, ((struct __pyx_memoryview_obj *) - original_obj)->typeinfo)) { - memview = (struct __pyx_memoryview_obj *) original_obj; - new_memview = NULL; - } else { - memview = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - original_obj, buf_flags, 0, dtype); - new_memview = memview; - if (unlikely(!memview)) - goto fail; - } - buf = &memview->view; - if (unlikely(buf->ndim != ndim)) { - PyErr_Format(PyExc_ValueError, - "Buffer has wrong number of dimensions (expected %d, got %d)", - ndim, buf->ndim); - goto fail; - } - if (new_memview) { - __Pyx_BufFmt_Init(&ctx, stack, dtype); - if (unlikely(!__Pyx_BufFmt_CheckString(&ctx, buf->format))) goto fail; - } - if (unlikely((unsigned) buf->itemsize != dtype->size)) { - PyErr_Format(PyExc_ValueError, - "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "u byte%s) " - "does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "u byte%s)", - buf->itemsize, - (buf->itemsize > 1) ? "s" : "", - dtype->name, - dtype->size, - (dtype->size > 1) ? 
"s" : ""); - goto fail; - } - if (buf->len > 0) { - for (i = 0; i < ndim; i++) { - spec = axes_specs[i]; - if (unlikely(!__pyx_check_strides(buf, i, ndim, spec))) - goto fail; - if (unlikely(!__pyx_check_suboffsets(buf, i, ndim, spec))) - goto fail; - } - if (unlikely(buf->strides && !__pyx_verify_contig(buf, ndim, c_or_f_flag))) - goto fail; - } - if (unlikely(__Pyx_init_memviewslice(memview, ndim, memviewslice, - new_memview != NULL) == -1)) { - goto fail; - } - retval = 0; - goto no_fail; -fail: - Py_XDECREF(new_memview); - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_float, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 1, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { - const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0; - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(int) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(int) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if 
(sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(int) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(int), - little, !is_unsigned); - } -} - -/* CIntFromPyVerify */ - #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { - const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0; - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); - } -} - -/* MemviewSliceCopyTemplate */ - static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object) -{ - __Pyx_RefNannyDeclarations - int i; - __Pyx_memviewslice new_mvs = { 0, 0, { 0 }, { 0 }, { 0 } }; - struct __pyx_memoryview_obj *from_memview = from_mvs->memview; - Py_buffer *buf = &from_memview->view; - PyObject *shape_tuple = NULL; - PyObject *temp_int = NULL; - struct __pyx_array_obj *array_obj = NULL; - struct __pyx_memoryview_obj *memview_obj = NULL; - __Pyx_RefNannySetupContext("__pyx_memoryview_copy_new_contig", 0); - for (i = 0; i < ndim; i++) { - if (unlikely(from_mvs->suboffsets[i] >= 0)) { - PyErr_Format(PyExc_ValueError, "Cannot copy memoryview slice with " - "indirect dimensions (axis %d)", i); - goto fail; - } - } - shape_tuple = PyTuple_New(ndim); - if (unlikely(!shape_tuple)) { - goto fail; - } - __Pyx_GOTREF(shape_tuple); - for(i = 0; i < ndim; i++) { - temp_int = PyInt_FromSsize_t(from_mvs->shape[i]); - if(unlikely(!temp_int)) { - goto fail; - } else { - PyTuple_SET_ITEM(shape_tuple, i, temp_int); - temp_int = NULL; - } - } - 
array_obj = __pyx_array_new(shape_tuple, sizeof_dtype, buf->format, (char *) mode, NULL); - if (unlikely(!array_obj)) { - goto fail; - } - __Pyx_GOTREF(array_obj); - memview_obj = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - (PyObject *) array_obj, contig_flag, - dtype_is_object, - from_mvs->memview->typeinfo); - if (unlikely(!memview_obj)) - goto fail; - if (unlikely(__Pyx_init_memviewslice(memview_obj, ndim, &new_mvs, 1) < 0)) - goto fail; - if (unlikely(__pyx_memoryview_copy_contents(*from_mvs, new_mvs, ndim, ndim, - dtype_is_object) < 0)) - goto fail; - goto no_fail; -fail: - __Pyx_XDECREF(new_mvs.memview); - new_mvs.memview = NULL; - new_mvs.data = NULL; -no_fail: - __Pyx_XDECREF(shape_tuple); - __Pyx_XDECREF(temp_int); - __Pyx_XDECREF(array_obj); - __Pyx_RefNannyFinishContext(); - return new_mvs; -} - -/* CIntFromPy */ - static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { - const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0; - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(int) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(int) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - 
} else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) - case -2: - if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } -#endif - if (sizeof(int) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - 
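- /* Last-resort path, taken when none of the fixed-width branches above
-  * returned: the one-byte probe below detects endianness at runtime
-  * ((int)*(unsigned char *)&one is 1 only on little-endian targets),
-  * and _PyLong_AsByteArray() then copies the Python int straight into
-  * the raw bytes of val, setting OverflowError if it does not fit. */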
int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - -/* CIntFromPy */ - static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { - const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0; - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(long) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { - return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(long) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= 
sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) - case -2: - if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } -#endif - if (sizeof(long) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { 
-#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* CIntFromPy */ - static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *x) { - const char neg_one = (char) ((char) 0 - (char) 1), const_zero = (char) 0; - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(char) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(char, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (char) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (char) 0; - case 1: __PYX_VERIFY_RETURN_INT(char, digit, digits[0]) - case 2: - if (8 * sizeof(char) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 2 * PyLong_SHIFT) { - return (char) (((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(char) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 3 * PyLong_SHIFT) { - return (char) (((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(char) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 4 * PyLong_SHIFT) { - return (char) (((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (char) -1; - if (unlikely(result == 1)) - goto 
raise_neg_overflow; - } -#endif - if (sizeof(char) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(char) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (char) 0; - case -1: __PYX_VERIFY_RETURN_INT(char, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(char, digit, +digits[0]) - case -2: - if (8 * sizeof(char) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(char) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - return (char) ((((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(char) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - return (char) ((((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 4 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(char) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 4 * PyLong_SHIFT) { - return (char) ((((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - } -#endif - if (sizeof(char) <= sizeof(long)) { - 
__PYX_VERIFY_RETURN_INT_EXC(char, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(char) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(char, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - char val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (char) -1; - } - } else { - char val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (char) -1; - val = __Pyx_PyInt_As_char(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to char"); - return (char) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to char"); - return (char) -1; -} - -/* CheckBinaryVersion */ - static int __Pyx_check_binary_version(void) { - char ctversion[4], rtversion[4]; - PyOS_snprintf(ctversion, 4, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); - PyOS_snprintf(rtversion, 4, "%s", Py_GetVersion()); - if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) { - char message[200]; - PyOS_snprintf(message, sizeof(message), - "compiletime version %s of module '%.100s' " - "does not match runtime version %s", - ctversion, __Pyx_MODULE_NAME, rtversion); - return PyErr_WarnEx(NULL, message, 1); - } - return 0; -} - -/* InitStrings */ - static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION < 3 - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - #else - if (t->is_unicode | t->is_str) { - if (t->intern) { - *t->p = PyUnicode_InternFromString(t->s); - } else if (t->encoding) { - *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); - } else { - *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); - } - } else { - *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); - } - #endif - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - ++t; - } - return 0; -} - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED -static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - 
PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type %.200s). 
" - "The ability to return an instance of a strict subclass of int " - "is deprecated, and may be removed in a future version of Python.", - Py_TYPE(result)->tp_name)) { - Py_DECREF(result); - return NULL; - } - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type %.200s)", - type_name, type_name, Py_TYPE(result)->tp_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)b)->ob_digit; - const Py_ssize_t size = Py_SIZE(b); - if (likely(__Pyx_sst_abs(size) <= 1)) { - ival = likely(size) ? digits[0] : 0; - if (size == -1) ival = -ival; - return ival; - } else { - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? 
__Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -#endif /* Py_PYTHON_H */ diff --git a/spaces/Epitech/UpscaleAI/app.py b/spaces/Epitech/UpscaleAI/app.py deleted file mode 100644 index bb719337e27d895c1c8325744637019e3cfc3e2f..0000000000000000000000000000000000000000 --- a/spaces/Epitech/UpscaleAI/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import gradio as gr -import cv2 -from cv2 import dnn_superres -import sys -from os.path import exists - -def upscale(image): - # increase the size of the picture with a factor of 3 - factor = 2 - - # Create an SR object - sr = dnn_superres.DnnSuperResImpl_create() - - # Read image - # image = cv2.imread(FILE_PATH) - - # Read the desired model - path = "models/FSRCNN_x" + str(factor) + ".pb" - sr.readModel(path) - - # Set the desired model and scale to get correct pre- and post-processing - sr.setModel("fsrcnn", factor) - - # Upscale the image - result = sr.upsample(image) - - # Save the image - # cv2.imwrite("./upscaled.png", result) - return result - -iface = gr.Interface( - upscale, - gr.inputs.Image(shape=(128,128)), - "text" -) -iface.launch() diff --git a/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/commons.py b/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/commons.py deleted file mode 100644 index ccd334b7320543b0c3a2166f82093564c9721317..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/commons.py +++ /dev/null @@ -1,167 +0,0 @@ -import math - -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - 
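- # Transformer-style sinusoidal timing signal: the first half of the
- # channels carry sines and the second half cosines, evaluated at
- # num_timescales wavelengths spaced geometrically between
- # min_timescale and max_timescale.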
log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/Faboor/README/README.md b/spaces/Faboor/README/README.md deleted file mode 100644 index 180b080f61e450750d752d240c7c730915657248..0000000000000000000000000000000000000000 --- a/spaces/Faboor/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: README -emoji: 🚀 -colorFrom: indigo -colorTo: pink -sdk: static -pinned: false ---- - -Edit this `README.md` markdown file to author your organization card 🔥 diff --git a/spaces/Felladrin/MiniSearch/index.html b/spaces/Felladrin/MiniSearch/index.html deleted file mode 100644 index 
2aa8a253cccc1c68640e242309962ed82a221ec7..0000000000000000000000000000000000000000 --- a/spaces/Felladrin/MiniSearch/index.html +++ /dev/null @@ -1,39 +0,0 @@ - - - - - - - - - - - - - - - MiniSearch - - - - - - diff --git a/spaces/Ferion/image-matting-app/app.py b/spaces/Ferion/image-matting-app/app.py deleted file mode 100644 index 8dcec2b4eb270445507e71996b767e2cd90c36d5..0000000000000000000000000000000000000000 --- a/spaces/Ferion/image-matting-app/app.py +++ /dev/null @@ -1,173 +0,0 @@ -from hashlib import sha1 -from pathlib import Path - -import cv2 -import gradio as gr -import numpy as np -from PIL import Image - -from paddleseg.cvlibs import manager, Config -from paddleseg.utils import load_entire_model - -manager.BACKBONES._components_dict.clear() -manager.TRANSFORMS._components_dict.clear() - -import ppmatting as ppmatting -from ppmatting.core import predict -from ppmatting.utils import estimate_foreground_ml - -model_names = [ - "modnet-mobilenetv2", - "ppmatting-512", - "ppmatting-1024", - "ppmatting-2048", - "modnet-hrnet_w18", - "modnet-resnet50_vd", -] -model_dict = { - name: None - for name in model_names -} - -last_result = { - "cache_key": None, - "algorithm": None, -} - - -def image_matting( - image: np.ndarray, - result_type: str, - bg_color: str, - algorithm: str, - morph_op: str, - morph_op_factor: float, -) -> np.ndarray: - image = np.ascontiguousarray(image) - cache_key = sha1(image).hexdigest() - if cache_key == last_result["cache_key"] and algorithm == last_result["algorithm"]: - alpha = last_result["alpha"] - else: - cfg = Config(f"configs/{algorithm}.yml") - if model_dict[algorithm] is not None: - model = model_dict[algorithm] - else: - model = cfg.model - load_entire_model(model, f"models/{algorithm}.pdparams") - model.eval() - model_dict[algorithm] = model - - transforms = ppmatting.transforms.Compose(cfg.val_transforms) - - alpha = predict( - model, - transforms=transforms, - image=image, - ) - last_result["cache_key"] = cache_key - last_result["algorithm"] = algorithm - last_result["alpha"] = alpha - - alpha = (alpha * 255).astype(np.uint8) - kernel = np.ones((5, 5), np.uint8) - if morph_op == "Dilate": - alpha = cv2.dilate(alpha, kernel, iterations=int(morph_op_factor)) - else: - alpha = cv2.erode(alpha, kernel, iterations=int(morph_op_factor)) - alpha = (alpha / 255).astype(np.float32) - - image = (image / 255.0).astype("float32") - fg = estimate_foreground_ml(image, alpha) - - if result_type == "Remove BG": - result = np.concatenate((fg, alpha[:, :, None]), axis=-1) - elif result_type == "Replace BG": - bg_r = int(bg_color[1:3], base=16) - bg_g = int(bg_color[3:5], base=16) - bg_b = int(bg_color[5:7], base=16) - - bg = np.zeros_like(fg) - bg[:, :, 0] = bg_r / 255. - bg[:, :, 1] = bg_g / 255. - bg[:, :, 2] = bg_b / 255. 
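- # Standard alpha compositing follows: per pixel,
- # result = alpha * foreground + (1 - alpha) * background,
- # with alpha broadcast across the three colour channels.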
- - result = alpha[:, :, None] * fg + (1 - alpha[:, :, None]) * bg - result = np.clip(result, 0, 1) - else: - result = alpha - - return result - - -def main(): - with gr.Blocks() as app: - gr.Markdown("Image Matting Powered By AI") - - with gr.Row(variant="panel"): - image_input = gr.Image() - image_output = gr.Image() - - with gr.Row(variant="panel"): - result_type = gr.Radio( - label="Mode", - show_label=True, - choices=[ - "Remove BG", - "Replace BG", - "Generate Mask", - ], - value="Remove BG", - ) - bg_color = gr.ColorPicker( - label="BG Color", - show_label=True, - value="#000000", - ) - algorithm = gr.Dropdown( - label="Algorithm", - show_label=True, - choices=model_names, - value="modnet-hrnet_w18" - ) - - with gr.Row(variant="panel"): - morph_op = gr.Radio( - label="Post-process", - show_label=True, - choices=[ - "Dilate", - "Erode", - ], - value="Dilate", - ) - - morph_op_factor = gr.Slider( - label="Factor", - show_label=True, - minimum=0, - maximum=20, - value=0, - step=1, - ) - - run_button = gr.Button("Run") - - run_button.click( - image_matting, - inputs=[ - image_input, - result_type, - bg_color, - algorithm, - morph_op, - morph_op_factor, - ], - outputs=image_output, - api_name="ferionapi" - ) - - app.launch(show_api=True) - - -if __name__ == "__main__": - main() diff --git a/spaces/Ferion/image-matting-app/ppmatting/core/train.py b/spaces/Ferion/image-matting-app/ppmatting/core/train.py deleted file mode 100644 index 695a177dcdd13fc7e79cf067a5c6984f5f125904..0000000000000000000000000000000000000000 --- a/spaces/Ferion/image-matting-app/ppmatting/core/train.py +++ /dev/null @@ -1,315 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import os -import time -from collections import deque, defaultdict -import pickle -import shutil - -import numpy as np -import paddle -import paddle.nn.functional as F -from paddleseg.utils import TimeAverager, calculate_eta, resume, logger - -from .val import evaluate - - -def visual_in_traning(log_writer, vis_dict, step): - """ - Visual in vdl - - Args: - log_writer (LogWriter): The log writer of vdl. - vis_dict (dict): Dict of tensor. 
The shape of thesor is (C, H, W) - """ - for key, value in vis_dict.items(): - value_shape = value.shape - if value_shape[0] not in [1, 3]: - value = value[0] - value = value.unsqueeze(0) - value = paddle.transpose(value, (1, 2, 0)) - min_v = paddle.min(value) - max_v = paddle.max(value) - if (min_v > 0) and (max_v < 1): - value = value * 255 - elif (min_v < 0 and min_v >= -1) and (max_v <= 1): - value = (1 + value) / 2 * 255 - else: - value = (value - min_v) / (max_v - min_v) * 255 - - value = value.astype('uint8') - value = value.numpy() - log_writer.add_image(tag=key, img=value, step=step) - - -def save_best(best_model_dir, metrics_data, iter): - with open(os.path.join(best_model_dir, 'best_metrics.txt'), 'w') as f: - for key, value in metrics_data.items(): - line = key + ' ' + str(value) + '\n' - f.write(line) - f.write('iter' + ' ' + str(iter) + '\n') - - -def get_best(best_file, metrics, resume_model=None): - '''Get best metrics and iter from file''' - best_metrics_data = {} - if os.path.exists(best_file) and (resume_model is not None): - values = [] - with open(best_file, 'r') as f: - lines = f.readlines() - for line in lines: - line = line.strip() - key, value = line.split(' ') - best_metrics_data[key] = eval(value) - if key == 'iter': - best_iter = eval(value) - else: - for key in metrics: - best_metrics_data[key] = np.inf - best_iter = -1 - return best_metrics_data, best_iter - - -def train(model, - train_dataset, - val_dataset=None, - optimizer=None, - save_dir='output', - iters=10000, - batch_size=2, - resume_model=None, - save_interval=1000, - log_iters=10, - log_image_iters=1000, - num_workers=0, - use_vdl=False, - losses=None, - keep_checkpoint_max=5, - eval_begin_iters=None, - metrics='sad'): - """ - Launch training. - Args: - model(nn.Layer): A matting model. - train_dataset (paddle.io.Dataset): Used to read and process training datasets. - val_dataset (paddle.io.Dataset, optional): Used to read and process validation datasets. - optimizer (paddle.optimizer.Optimizer): The optimizer. - save_dir (str, optional): The directory for saving the model snapshot. Default: 'output'. - iters (int, optional): How may iters to train the model. Defualt: 10000. - batch_size (int, optional): Mini batch size of one gpu or cpu. Default: 2. - resume_model (str, optional): The path of resume model. - save_interval (int, optional): How many iters to save a model snapshot once during training. Default: 1000. - log_iters (int, optional): Display logging information at every log_iters. Default: 10. - log_image_iters (int, optional): Log image to vdl. Default: 1000. - num_workers (int, optional): Num workers for data loader. Default: 0. - use_vdl (bool, optional): Whether to record the data to VisualDL during training. Default: False. - losses (dict, optional): A dict of loss, refer to the loss function of the model for details. Default: None. - keep_checkpoint_max (int, optional): Maximum number of checkpoints to save. Default: 5. - eval_begin_iters (int): The iters begin evaluation. It will evaluate at iters/2 if it is None. Defalust: None. - metrics(str|list, optional): The metrics to evaluate, it may be the combination of ("sad", "mse", "grad", "conn"). 
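- 
- Example (an illustrative sketch only: `build_from_config` is a
- placeholder name, since the real entry points assemble the model,
- datasets and optimizer from a config file such as
- configs/ppmatting-512.yml):
- 
- import paddle
- model, train_ds, val_ds = build_from_config('configs/ppmatting-512.yml')
- opt = paddle.optimizer.Momentum(learning_rate=0.01, momentum=0.9,
- parameters=model.parameters())
- train(model, train_ds, val_dataset=val_ds, optimizer=opt,
- save_dir='output', iters=10000, batch_size=2, metrics=['sad', 'mse'])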
- """ - model.train() - nranks = paddle.distributed.ParallelEnv().nranks - local_rank = paddle.distributed.ParallelEnv().local_rank - - start_iter = 0 - if resume_model is not None: - start_iter = resume(model, optimizer, resume_model) - - if not os.path.isdir(save_dir): - if os.path.exists(save_dir): - os.remove(save_dir) - os.makedirs(save_dir) - - if nranks > 1: - # Initialize parallel environment if not done. - if not paddle.distributed.parallel.parallel_helper._is_parallel_ctx_initialized( - ): - paddle.distributed.init_parallel_env() - ddp_model = paddle.DataParallel(model) - else: - ddp_model = paddle.DataParallel(model) - - batch_sampler = paddle.io.DistributedBatchSampler( - train_dataset, batch_size=batch_size, shuffle=True, drop_last=True) - - loader = paddle.io.DataLoader( - train_dataset, - batch_sampler=batch_sampler, - num_workers=num_workers, - return_list=True, ) - - if use_vdl: - from visualdl import LogWriter - log_writer = LogWriter(save_dir) - - if isinstance(metrics, str): - metrics = [metrics] - elif not isinstance(metrics, list): - metrics = ['sad'] - best_metrics_data, best_iter = get_best( - os.path.join(save_dir, 'best_model', 'best_metrics.txt'), - metrics, - resume_model=resume_model) - avg_loss = defaultdict(float) - iters_per_epoch = len(batch_sampler) - reader_cost_averager = TimeAverager() - batch_cost_averager = TimeAverager() - save_models = deque() - batch_start = time.time() - - iter = start_iter - while iter < iters: - for data in loader: - iter += 1 - if iter > iters: - break - reader_cost_averager.record(time.time() - batch_start) - - logit_dict, loss_dict = ddp_model(data) if nranks > 1 else model( - data) - - loss_dict['all'].backward() - - optimizer.step() - lr = optimizer.get_lr() - if isinstance(optimizer._learning_rate, - paddle.optimizer.lr.LRScheduler): - optimizer._learning_rate.step() - model.clear_gradients() - - for key, value in loss_dict.items(): - avg_loss[key] += value.numpy()[0] - batch_cost_averager.record( - time.time() - batch_start, num_samples=batch_size) - - if (iter) % log_iters == 0 and local_rank == 0: - for key, value in avg_loss.items(): - avg_loss[key] = value / log_iters - remain_iters = iters - iter - avg_train_batch_cost = batch_cost_averager.get_average() - avg_train_reader_cost = reader_cost_averager.get_average() - eta = calculate_eta(remain_iters, avg_train_batch_cost) - # loss info - loss_str = ' ' * 26 + '\t[LOSSES]' - loss_str = loss_str - for key, value in avg_loss.items(): - if key != 'all': - loss_str = loss_str + ' ' + key + '={:.4f}'.format( - value) - logger.info( - "[TRAIN] epoch={}, iter={}/{}, loss={:.4f}, lr={:.6f}, batch_cost={:.4f}, reader_cost={:.5f}, ips={:.4f} samples/sec | ETA {}\n{}\n" - .format((iter - 1) // iters_per_epoch + 1, iter, iters, - avg_loss['all'], lr, avg_train_batch_cost, - avg_train_reader_cost, - batch_cost_averager.get_ips_average( - ), eta, loss_str)) - if use_vdl: - for key, value in avg_loss.items(): - log_tag = 'Train/' + key - log_writer.add_scalar(log_tag, value, iter) - - log_writer.add_scalar('Train/lr', lr, iter) - log_writer.add_scalar('Train/batch_cost', - avg_train_batch_cost, iter) - log_writer.add_scalar('Train/reader_cost', - avg_train_reader_cost, iter) - if iter % log_image_iters == 0: - vis_dict = {} - # ground truth - vis_dict['ground truth/img'] = data['img'][0] - for key in data['gt_fields']: - key = key[0] - vis_dict['/'.join(['ground truth', key])] = data[ - key][0] - # predict - for key, value in logit_dict.items(): - vis_dict['/'.join(['predict', key])] 
= logit_dict[ - key][0] - visual_in_traning( - log_writer=log_writer, vis_dict=vis_dict, step=iter) - - for key in avg_loss.keys(): - avg_loss[key] = 0. - reader_cost_averager.reset() - batch_cost_averager.reset() - - # save model - if (iter % save_interval == 0 or iter == iters) and local_rank == 0: - current_save_dir = os.path.join(save_dir, - "iter_{}".format(iter)) - if not os.path.isdir(current_save_dir): - os.makedirs(current_save_dir) - paddle.save(model.state_dict(), - os.path.join(current_save_dir, 'model.pdparams')) - paddle.save(optimizer.state_dict(), - os.path.join(current_save_dir, 'model.pdopt')) - save_models.append(current_save_dir) - if len(save_models) > keep_checkpoint_max > 0: - model_to_remove = save_models.popleft() - shutil.rmtree(model_to_remove) - - # eval model - if eval_begin_iters is None: - eval_begin_iters = iters // 2 - if (iter % save_interval == 0 or iter == iters) and ( - val_dataset is not None - ) and local_rank == 0 and iter >= eval_begin_iters: - num_workers = 1 if num_workers > 0 else 0 - metrics_data = evaluate( - model, - val_dataset, - num_workers=1, - print_detail=True, - save_results=False, - metrics=metrics) - model.train() - - # save best model and add evaluation results to vdl - if (iter % save_interval == 0 or iter == iters) and local_rank == 0: - if val_dataset is not None and iter >= eval_begin_iters: - if metrics_data[metrics[0]] < best_metrics_data[metrics[0]]: - best_iter = iter - best_metrics_data = metrics_data.copy() - best_model_dir = os.path.join(save_dir, "best_model") - paddle.save( - model.state_dict(), - os.path.join(best_model_dir, 'model.pdparams')) - save_best(best_model_dir, best_metrics_data, iter) - - show_list = [] - for key, value in best_metrics_data.items(): - show_list.append((key, value)) - log_str = '[EVAL] The model with the best validation {} ({:.4f}) was saved at iter {}.'.format( - show_list[0][0], show_list[0][1], best_iter) - if len(show_list) > 1: - log_str += " While" - for i in range(1, len(show_list)): - log_str = log_str + ' {}: {:.4f},'.format( - show_list[i][0], show_list[i][1]) - log_str = log_str[:-1] - logger.info(log_str) - - if use_vdl: - for key, value in metrics_data.items(): - log_writer.add_scalar('Evaluate/' + key, value, - iter) - - batch_start = time.time() - - # Sleep for half a second to let dataloader release resources. 
- time.sleep(0.5) - if use_vdl: - log_writer.close() diff --git a/spaces/Francesco/FairytaleDJ/scrape.py b/spaces/Francesco/FairytaleDJ/scrape.py deleted file mode 100644 index 2f2d8667867c57796bee596b7b35bf1a4b00c223..0000000000000000000000000000000000000000 --- a/spaces/Francesco/FairytaleDJ/scrape.py +++ /dev/null @@ -1,98 +0,0 @@ -import asyncio -import json -from collections import defaultdict -from itertools import chain -from typing import List, Optional, Tuple, TypedDict - -import aiohttp -from bs4 import BeautifulSoup - -""" -This file scrapes disney songs + lyrics from "https://www.disneyclips.com/lyrics/" -""" - -URL = "https://www.disneyclips.com/lyrics/" - - -async def get_lyrics_names_and_urls_from_movie_url( - movie_name: str, url: str, session: aiohttp.ClientSession -) -> List[Tuple[str, str]]: - async with session.get(url) as response: - html = await response.text() - soup = BeautifulSoup(html, "html.parser") - table = soup.find("table", {"class": "songs"}) - names_and_urls = [] - if table: - links = table.find_all("a") - names_and_urls = [] - for link in links: - names_and_urls.append( - (movie_name, link.text, f"{URL}/{link.get('href')}") - ) - return names_and_urls - - -async def get_lyric_from_lyric_url( - movie_name: str, lyric_name: str, url: str, session: aiohttp.ClientSession -) -> str: - async with session.get(url) as response: - html = await response.text() - soup = BeautifulSoup(html, "html.parser") - div = soup.find("div", {"id": "cnt"}).find("div", {"class": "main"}) - paragraphs = div.find_all("p") - text = "" - # first
<p>
      has the lyric - p = paragraphs[0] - for br in p.find_all("br"): - br.replace_with(". ") - for span in p.find_all("span"): - span.decompose() - text += p.text - - return (movie_name, lyric_name, text) - - -async def get_movie_names_and_urls( - session: aiohttp.ClientSession, -) -> List[Tuple[str, str]]: - async with session.get(URL) as response: - html = await response.text() - soup = BeautifulSoup(html, "html.parser") - links = ( - soup.find("div", {"id": "cnt"}).find("div", {"class": "main"}).find_all("a") - ) - movie_names_and_urls = [ - (link.text, f"{URL}/{link.get('href')}") for link in links - ] - return movie_names_and_urls - - -async def scrape_disney_lyrics(): - async with aiohttp.ClientSession() as session: - data = await get_movie_names_and_urls(session) - data = await asyncio.gather( - *[ - asyncio.create_task( - get_lyrics_names_and_urls_from_movie_url(*el, session) - ) - for el in data - ] - ) - data = await asyncio.gather( - *[ - asyncio.create_task(get_lyric_from_lyric_url(*data, session)) - for data in chain(*data) - ] - ) - - result = defaultdict(list) - - for movie_name, lyric_name, lyric_text in data: - result[movie_name].append({"name": lyric_name, "text": lyric_text}) - - with open("data/lyrics.json", "w") as f: - json.dump(result, f) - - -loop = asyncio.get_event_loop() -loop.run_until_complete(scrape_disney_lyrics()) diff --git a/spaces/FridaZuley/RVC_HFKawaii/utils/clonerepo_experimental.py b/spaces/FridaZuley/RVC_HFKawaii/utils/clonerepo_experimental.py deleted file mode 100644 index b0ae02648c1307562cf48033908edcf2996db5e2..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/utils/clonerepo_experimental.py +++ /dev/null @@ -1,253 +0,0 @@ -import os -import subprocess -import shutil -from concurrent.futures import ThreadPoolExecutor, as_completed -from tqdm.notebook import tqdm -from pathlib import Path -import requests - -def run_script(): - def run_cmd(cmd): - process = subprocess.run(cmd, shell=True, check=True, text=True) - return process.stdout - - # Change the current directory to /content/ - os.chdir('/content/') - print("Changing dir to /content/") - - # Your function to edit the file - def edit_file(file_path): - temp_file_path = "/tmp/temp_file.py" - changes_made = False - with open(file_path, "r") as file, open(temp_file_path, "w") as temp_file: - previous_line = "" - second_previous_line = "" - for line in file: - new_line = line.replace("value=160", "value=128") - if new_line != line: - print("Replaced 'value=160' with 'value=128'") - changes_made = True - line = new_line - - new_line = line.replace("crepe hop length: 160", "crepe hop length: 128") - if new_line != line: - print("Replaced 'crepe hop length: 160' with 'crepe hop length: 128'") - changes_made = True - line = new_line - - new_line = line.replace("value=0.88", "value=0.75") - if new_line != line: - print("Replaced 'value=0.88' with 'value=0.75'") - changes_made = True - line = new_line - - if "label=i18n(\"输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络\")" in previous_line and "value=1," in line: - new_line = line.replace("value=1,", "value=0.25,") - if new_line != line: - print("Replaced 'value=1,' with 'value=0.25,' based on the condition") - changes_made = True - line = new_line - - if "label=i18n(\"总训练轮数total_epoch\")" in previous_line and "value=20," in line: - new_line = line.replace("value=20,", "value=500,") - if new_line != line: - print("Replaced 'value=20,' with 'value=500,' based on the condition for DEFAULT EPOCH") - changes_made = True - line = new_line - - 
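- # previous_line and second_previous_line act as a two-line lookback
- # buffer (refreshed at the bottom of the loop body), so the rules
- # below can trigger on context found one or two lines above the
- # line currently being rewritten.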
if 'choices=["pm", "harvest", "dio", "crepe", "crepe-tiny", "mangio-crepe", "mangio-crepe-tiny"], # Fork Feature. Add Crepe-Tiny' in previous_line: - if 'value="pm",' in line: - new_line = line.replace('value="pm",', 'value="mangio-crepe",') - if new_line != line: - print("Replaced 'value=\"pm\",' with 'value=\"mangio-crepe\",' based on the condition") - changes_made = True - line = new_line - - new_line = line.replace('label=i18n("输入训练文件夹路径"), value="E:\\\\语音音频+标注\\\\米津玄师\\\\src"', 'label=i18n("输入训练文件夹路径"), value="/content/dataset/"') - if new_line != line: - print("Replaced 'label=i18n(\"输入训练文件夹路径\"), value=\"E:\\\\语音音频+标注\\\\米津玄师\\\\src\"' with 'label=i18n(\"输入训练文件夹路径\"), value=\"/content/dataset/\"'") - changes_made = True - line = new_line - - if 'label=i18n("是否仅保存最新的ckpt文件以节省硬盘空间"),' in second_previous_line: - if 'value=i18n("否"),' in line: - new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),') - if new_line != line: - print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE ONLY LATEST") - changes_made = True - line = new_line - - if 'label=i18n("是否在每次保存时间点将最终小模型保存至weights文件夹"),' in second_previous_line: - if 'value=i18n("否"),' in line: - new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),') - if new_line != line: - print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE SMALL WEIGHTS") - changes_made = True - line = new_line - - temp_file.write(line) - second_previous_line = previous_line - previous_line = line - - # After finished, we replace the original file with the temp one - import shutil - shutil.move(temp_file_path, file_path) - - if changes_made: - print("Changes made and file saved successfully.") - else: - print("No changes were needed.") - - # Define the repo path - repo_path = '/content/Applio-RVC-Fork' - - def copy_all_files_in_directory(src_dir, dest_dir): - # Iterate over all files in source directory - for item in Path(src_dir).glob('*'): - if item.is_file(): - # Copy each file to destination directory - shutil.copy(item, dest_dir) - else: - # If it's a directory, make a new directory in the destination and copy the files recursively - new_dest = Path(dest_dir) / item.name - new_dest.mkdir(exist_ok=True) - copy_all_files_in_directory(str(item), str(new_dest)) - - def clone_and_copy_repo(repo_path): - # New repository link - new_repo_link = "https://github.com/IAHispano/Applio-RVC-Fork/" - # Temporary path to clone the repository - temp_repo_path = "/content/temp_Applio-RVC-Fork" - # New folder name - new_folder_name = "Applio-RVC-Fork" - - # Clone the latest code from the new repository to a temporary location - run_cmd(f"git clone {new_repo_link} {temp_repo_path}") - os.chdir(temp_repo_path) - - run_cmd(f"git checkout 3fa4dad3d8961e5ca2522e9e12c0b4ddb71ad402") - run_cmd(f"git checkout f9e606c279cb49420597519b0a83b92be81e42e4") - run_cmd(f"git checkout 9e305588844c5442d58add1061b29beeca89d679") - run_cmd(f"git checkout bf92dc1eb54b4f28d6396a4d1820a25896cc9af8") - run_cmd(f"git checkout c3810e197d3cb98039973b2f723edf967ecd9e61") - run_cmd(f"git checkout a33159efd134c2413b0afe26a76b7dc87926d2de") - run_cmd(f"git checkout 24e251fb62c662e39ac5cf9253cc65deb9be94ec") - run_cmd(f"git checkout ad5667d3017e93232dba85969cddac1322ba2902") - run_cmd(f"git checkout ce9715392cf52dd5a0e18e00d1b5e408f08dbf27") - run_cmd(f"git checkout 7c7da3f2ac68f3bd8f3ad5ca5c700f18ab9f90eb") - run_cmd(f"git checkout 4ac395eab101955e8960b50d772c26f592161764") - run_cmd(f"git checkout 
b15b358702294c7375761584e5276c811ffab5e8") - run_cmd(f"git checkout 1501793dc490982db9aca84a50647764caa66e51") - run_cmd(f"git checkout 21f7faf57219c75e6ba837062350391a803e9ae2") - run_cmd(f"git checkout b5eb689fbc409b49f065a431817f822f554cebe7") - run_cmd(f"git checkout 7e02fae1ebf24cb151bf6cbe787d06734aa65862") - run_cmd(f"git checkout 6aea5ea18ed0b9a1e03fa5d268d6bc3c616672a9") - run_cmd(f"git checkout f0f9b25717e59116473fb42bd7f9252cfc32b398") - run_cmd(f"git checkout b394de424088a81fc081224bc27338a8651ad3b2") - run_cmd(f"git checkout f1999406a88b80c965d2082340f5ea2bfa9ab67a") - run_cmd(f"git checkout d98a0fa8dc715308dfc73eac5c553b69c6ee072b") - run_cmd(f"git checkout d73267a415fb0eba98477afa43ef71ffd82a7157") - run_cmd(f"git checkout 1a03d01356ae79179e1fb8d8915dc9cc79925742") - run_cmd(f"git checkout 81497bb3115e92c754300c9b3992df428886a3e9") - run_cmd(f"git checkout c5af1f8edcf79cb70f065c0110e279e78e48caf9") - run_cmd(f"git checkout cdb3c90109387fa4dfa92f53c3864c71170ffc77") - - # Edit the file here, before copying - #edit_file(f"{temp_repo_path}/infer-web.py") - - # Copy all files from the cloned repository to the existing path - copy_all_files_in_directory(temp_repo_path, repo_path) - print(f"Copying all {new_folder_name} files from GitHub.") - - # Change working directory back to /content/ - os.chdir('/content/') - print("Changed path back to /content/") - - # Remove the temporary cloned repository - shutil.rmtree(temp_repo_path) - - # Call the function - clone_and_copy_repo(repo_path) - - # Download the credentials file for RVC archive sheet - os.makedirs('/content/Applio-RVC-Fork/stats/', exist_ok=True) - run_cmd("wget -q https://cdn.discordapp.com/attachments/945486970883285045/1114717554481569802/peppy-generator-388800-07722f17a188.json -O /content/Applio-RVC-Fork/stats/peppy-generator-388800-07722f17a188.json") - - # Forcefully delete any existing torchcrepe dependencies downloaded from an earlier run just in case - shutil.rmtree('/content/Applio-RVC-Fork/torchcrepe', ignore_errors=True) - shutil.rmtree('/content/torchcrepe', ignore_errors=True) - - # Download the torchcrepe folder from the maxrmorrison/torchcrepe repository - run_cmd("git clone https://github.com/maxrmorrison/torchcrepe.git") - shutil.move('/content/torchcrepe/torchcrepe', '/content/Applio-RVC-Fork/') - shutil.rmtree('/content/torchcrepe', ignore_errors=True) # Delete the torchcrepe repository folder - - # Change the current directory to /content/Applio-RVC-Fork - os.chdir('/content/Applio-RVC-Fork') - os.makedirs('pretrained', exist_ok=True) - os.makedirs('uvr5_weights', exist_ok=True) - -def download_file(url, filepath): - response = requests.get(url, stream=True) - response.raise_for_status() - - with open(filepath, "wb") as file: - for chunk in response.iter_content(chunk_size=8192): - if chunk: - file.write(chunk) - -def download_pretrained_models(): - pretrained_models = { - "pretrained": [ - "D40k.pth", - "G40k.pth", - "f0D40k.pth", - "f0G40k.pth" - ], - "pretrained_v2": [ - "D40k.pth", - "G40k.pth", - "f0D40k.pth", - "f0G40k.pth", - "f0G48k.pth", - "f0D48k.pth" - ], - "uvr5_weights": [ - "HP2-人声vocals+非人声instrumentals.pth", - "HP5-主旋律人声vocals+其他instrumentals.pth", - "VR-DeEchoNormal.pth", - "VR-DeEchoDeReverb.pth", - "VR-DeEchoAggressive.pth", - "HP5_only_main_vocal.pth", - "HP3_all_vocals.pth", - "HP2_all_vocals.pth" - ] - } - part2 = "I" - base_url = "https://huggingface.co/lj1995/VoiceConversionWebU" + part2 + "/resolve/main/" - base_path = "/content/Applio-RVC-Fork/" - base_pathm = base_path - - # 
Calculate total number of files to download - total_files = sum(len(files) for files in pretrained_models.values()) + 1 # +1 for hubert_base.pt - - with tqdm(total=total_files, desc="Downloading files") as pbar: - for folder, models in pretrained_models.items(): - folder_path = os.path.join(base_path, folder) - os.makedirs(folder_path, exist_ok=True) - for model in models: - url = base_url + folder + "/" + model - filepath = os.path.join(folder_path, model) - download_file(url, filepath) - pbar.update() - - # Download hubert_base.pt to the base path - hubert_url = base_url + "hubert_base.pt" - hubert_filepath = os.path.join(base_pathm, "hubert_base.pt") - download_file(hubert_url, hubert_filepath) - pbar.update() -def clone_repository(run_download): - with ThreadPoolExecutor(max_workers=2) as executor: - executor.submit(run_script) - if run_download: - executor.submit(download_pretrained_models) diff --git a/spaces/Gators123/fusf_pdf_2023/app.py b/spaces/Gators123/fusf_pdf_2023/app.py deleted file mode 100644 index dc9fcf1e582611e21b5e1004483e7eed8d971b77..0000000000000000000000000000000000000000 --- a/spaces/Gators123/fusf_pdf_2023/app.py +++ /dev/null @@ -1,365 +0,0 @@ -# Creates an "application" to process individual PDF files. Either accessible locally by running the program, -# or at https://huggingface.co/spaces/Gators123/fusf_pdf_2023 - -# 1. Enter open API key at the top bar -# 2. Select a pdf file to classify -# 3. Additional questions about the pdf can be asked to the built-in chat bot - -import os -from langchain.document_loaders import PyPDFLoader -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.vectorstores import Chroma -from langchain.chains import ConversationalRetrievalChain -from langchain.chat_models import ChatOpenAI -from langchain.chat_models import ChatOpenAI -from langchain.schema import ( - HumanMessage, - SystemMessage -) -from dotenv import load_dotenv -from PIL import Image -import fitz -from joblib import load -import gradio as gr -# _________________________________________________________________ - -# Global variables -COUNT, N = 0, 0 -chat_history = [] -chain = '' - -# API Textboxes -enable_box = gr.Textbox.update(value=None, placeholder='Upload your OpenAI API key', interactive=True) -disable_box = gr.Textbox.update(value='OpenAI API key is Set', interactive=False) - -# Function to set the API key -def set_apikey(api_key): - os.environ['OPENAI_API_KEY'] = api_key - return disable_box - -# Function to enable the API key input box -def enable_api_box(): - return enable_box - - -# Function to add text to the chat history -def add_text(history, text): - if not text: - raise gr.Error('Enter text') - history = history + [(text, '')] - return history - -# Function to process the PDF file and create a conversation chain -def process_file(file): - if 'OPENAI_API_KEY' not in os.environ: - raise gr.Error('Upload your OpenAI API key') - - loader = PyPDFLoader(file.name) - documents = loader.load() - - embeddings = OpenAIEmbeddings() - - pdfsearch = Chroma.from_documents(documents, embeddings) - - chain = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0.3), - retriever=pdfsearch.as_retriever(search_kwargs={"k": 1}), - return_source_documents=True) - return chain - - - -# Function to generate a response based on the chat history and query -def generate_response(history, query, btn): - global COUNT, N, chat_history, chain - - if not btn: - raise gr.Error(message='Upload a PDF') - if COUNT == 0: - chain = process_file(btn) - COUNT += 
1 - - result = chain({"question": query, 'chat_history': chat_history}, return_only_outputs=True) - chat_history += [(query, result["answer"])] - N = list(result['source_documents'][0])[1][1]['page'] - - for char in result['answer']: - history[-1][-1] += char - yield history, '' - - -# Function to render a specific page of a PDF file as an image -def render_file(pdf_file): - global N - doc = fitz.open(pdf_file.name) - page = doc[N] - # Render the page as a PNG image with a resolution of 300 DPI - pix = page.get_pixmap(matrix=fitz.Matrix(300/72, 300/72)) - image = Image.frombytes('RGB', [pix.width, pix.height], pix.samples) - return image - -#____________________________________________________________________ - -# Returns text from the pdf file -def read_string_text_from_file(pdf_file): - - if btn: - loader = PyPDFLoader(pdf_file.name) - - doc = loader.load_and_split() - - stringtxt = str(doc) - - - # Convert into summary from string to list in order to remove '\n' from text - mylist = [] - final_string_no_lines = '' # This is the final summary that is printed - - # Adds all the characters to the list - for char in stringtxt: - mylist.append(char) - - # Finds the indices where '\n' is present - pop_index = [] - for word in range(0,len(mylist)): - if mylist[word] =='\\' and mylist[word+1]=='n': - pop_index.append(word) - pop_index.append(word+1) - - # Replaces those indices with an empty space - for word in pop_index: - mylist[word] = ' ' - - # Converts cleaned list back into string and returns - for i in mylist: - final_string_no_lines+=i - - return final_string_no_lines - -# ___________________________________________________________ - -# Classifications using GPT API -def other_info(pdf_file): - - if btn: - loader = PyPDFLoader(pdf_file.name) - - doc = loader.load_and_split() - - stringtxt = str(doc) - - - # Had to convert into summary from string to list in order to remove '\n' from text - mylist = [] - final_string_no_lines = '' # Final text that is returned - - for char in stringtxt: - mylist.append(char) - - # Finds the indices where '\n' is present - pop_index = [] - for word in range(0,len(mylist)): - if mylist[word] =='\\' and mylist[word+1]=='n': - pop_index.append(word) - pop_index.append(word+1) - - # Replaces those indices with an empty space - for word in pop_index: - mylist[word] = ' ' - - # Converts cleaned list back into string and returns - for i in mylist: - final_string_no_lines+=i - - load_dotenv() - - if 'OPENAI_API_KEY' not in os.environ: - raise gr.Error('Upload your OpenAI API key') - - chat = ChatOpenAI(openai_api_key=os.environ['OPENAI_API_KEY']) - - - ml_type_messages = [ - SystemMessage(content='''Classify the article into Supervised Machine Learning, Unsupervised Machine Learning, Both, or None. Surround the answer in brackets. On a new line, write a short blurb justifying why, no longer than 5 sentences: - Do not include brackets around your answer.'''), - HumanMessage(content=final_string_no_lines[0:4000]) - ] - - - treatment_cycle_messages = [ - SystemMessage(content='''Classify the article in one of the following treatment cycles. On a new line, write a short blurb justifying why: - [Treatment Planning, Treatment Monitoring and Results Analysis, Patient Selection, Clinical Decision Support]'''), - HumanMessage(content=final_string_no_lines[0:4000]) - ] - - - medical_indication_messages = [ - SystemMessage(content='''Classify the article in one of the following medical indications. 
On a new line, write a short blurb justifying why.: - [Cardiovascular, Emerging Indications, Gynelogical, Neurological (blood-brain-barrier opening), Neurosurgery, Oncological, Urological (prostate), Veterinary, Other]'''), - HumanMessage(content=final_string_no_lines[0:4000]) - ] - - key_word_messages = [ - SystemMessage(content='''Pick some of the keywords, and ONLY KEY WORDS LISTED BELOW that the article encompasses. Provide in a numbered list: - [Angular spectrum, Artificial intelligence, Artificial neural networks, Auto encoders, Bio-heat transfer, Cat Swarm Operation, Chaotic krill her algorithm (CKHA), CIVA HealthCare platform, Classification, - Coefficient based method, Computed tomography (CT), Computer architecture, Convolutional neural network (CNN), Decision trees, Deep CNN, Deep leaning, Diagnostic imaging,, Differential equation solver, - Encoder-decoder, Fourier transform, Functional mapping, Functional neurosurgery, FUS monitoring, Generative adversarial networks (GAN), Global convolutional networks, Harmonic motion imaging, - HIFU Artifact, Image filtering, Intelligent theranostics, Joint Mutual Information (JMI), K means clustering, Kapur entropy, K-nearest neighbor, Logistic regression, Magnetic resonance imaging (MRI), - Medical diagnostics, Metamodel, Multilayer Perception (MLP), Multistage neural network, Mutual Information Maximisation (MIM), Naive Bayes classifier, NDE, Neural network, Neuromodulation, - Numerical model, Partial dependence plots, Photon counting CT, Prediction, Preoperative prediction, Principal component analysis, Prognosis, Radiomics, Random forest, Rayleigh-Sommerfeld, Real-time lesion tracking, - Regression models (linear and logistic), Residual, Rule based decision tree method, Segmentation, Skull density ratio, Support vector classification (SVC) model, Support vector machines, SWOT, Temperature monitoring, Transfer learning, - Transformers, Ultrasonography, Ultrasound (US), U-net (CNN, Encoder, Decoder, Autoencoder), Unsupervised learning, VGG Net, Vision transformers (ViT), Wiener Filtering]. 
Remember to only use the keywords in the list above'''), - HumanMessage(content=final_string_no_lines[0:4000]) - - ] - - - summary = [ - SystemMessage(content='''Write a summary of the article.'''), - HumanMessage(content=final_string_no_lines[0:4000]) - ] - - - # ML Type, Treatment Cycle, Medical Indication, Keywords, Summary - return chat(ml_type_messages).content,chat(treatment_cycle_messages).content, chat(medical_indication_messages).content, chat(key_word_messages).content, chat(summary).content - - -# Fus/Non-fus Model -def fus_model(pdf_file): - - # Loads FUS model with Joblib - fus_model = load('fus_model.joblib') - - prediction = fus_model.predict_proba([read_string_text_from_file(pdf_file)]) - - percentage_pos = (prediction[0][0])*100 - percentage_neg = (prediction[0][1])*100 - - # Returns probability that it is Fus and probability that it is Non-fus - return 'Focused Ultrasound Related: ' + str((round(percentage_pos,1)))+'%'+'\n'+'Non-Fus: '+ str((round(percentage_neg,1)))+'%' - - -# Setting up Gradio application layout -with gr.Blocks() as demo: - - - # Top Row, the place for submitting or changing the API Key - with gr.Column(): - with gr.Row(): - with gr.Column(scale=0.8): - api_key = gr.Textbox( - placeholder='Paste OpenAI API key and press Enter', - show_label=False, - interactive=True - ) - with gr.Column(scale=0.2): - change_api_key = gr.Button('Change Key') - - - # Second row, the place for displaying the FUS/Non-FUS and ML type probabilities - with gr.Column(): - with gr.Row(): - fus = gr.Textbox(label='FUS/Non-FUS',interactive=False) - ml = gr.Textbox(label='ML Type',interactive=False) - - - with gr.Row(): - treatment_cycle = gr.Textbox(label='Treatment Cycle',interactive=False) - medical_indication = gr.Textbox(label='Medical Indication',interactive=False) - - - # Contains the summary generation box - with gr.Row(): - keyword = gr.Textbox(label='Keywords',interactive=False) - summary = gr.Textbox(label='Summary',interactive=False) - - - # Contains Chatbot and PDF displayer - with gr.Row(): - chatbot = gr.Chatbot(value=[], elem_id='chatbot',height=780) - - show_img = gr.Image(label='Upload PDF', tool='select',height=780) - - - # Text box for user to input questions into the chatbot - with gr.Row(): - with gr.Column(scale=0.5): - txt = gr.Textbox( - show_label=False, - placeholder="Enter text and press submit", - container=False) - - # Button for uploading PDF - with gr.Column(scale=0.5): - btn = gr.UploadButton("📁 Upload a PDF", file_types=[".pdf"]) - - - # Example prompts that can be entered into the chatbot - gr.Examples( - - #Add or customize example prompts here - examples=[['What are five important keywords?'],['What were the conclusions/results of this study?'],['Who were the authors of this study?']], - inputs = [txt] - ) - - - with gr.Row(): - - # Submit button - with gr.Column(scale=0.5): - submit_btn = gr.Button('Submit') - - -# __________________________________________________________________________________ - - # Set up event handlers - - # Event handler for submitting the OpenAI API key - api_key.submit(fn=set_apikey, inputs=[api_key], outputs=[api_key]) - - # Event handler for changing the API key - change_api_key.click(fn=enable_api_box, outputs=[api_key]) - - # Event handler for uploading a PDF - def on_upload(btn): - show_img.value = render_file(btn) - - fus.value = fus_model(btn) # Fus/Non Fus - - ml.value = other_info(btn)[0] # ML Type - - treatment_cycle.value = other_info(btn)[1] # Treatment Cycle - - medical_indication.value = other_info(btn)[2] 
# Medical Indication - - keyword.value = other_info(btn)[3] # Keywords - - summary.value = other_info(btn)[4] # Summary - - - return show_img.value, fus.value, ml.value, treatment_cycle.value, medical_indication.value, keyword.value, summary.value - - - btn.upload(on_upload, inputs=[btn], outputs=[show_img, fus, ml, treatment_cycle, medical_indication, keyword, summary]) - - - # Event handler for submitting text and generating response - submit_btn.click( - fn=add_text, - inputs=[chatbot, txt], - outputs=[chatbot], - queue=False - ).success( - fn=generate_response, - inputs=[chatbot, txt, btn], - outputs=[chatbot, txt] - ).success( - fn=render_file, - inputs=[btn], - outputs=[show_img] - ) - -# Launches the Gradio application -demo.queue() -if __name__ == "__main__": - - demo.launch() \ No newline at end of file diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/utils.py b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/utils.py deleted file mode 100644 index 740ced9943143c7a56a16273044e60d6ab3e9728..0000000000000000000000000000000000000000 --- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/utils.py +++ /dev/null @@ -1,7 +0,0 @@ -def is_google_colab(): - try: - import google.colab - - return True - except: - return False diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/app.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/app.py deleted file mode 100644 index f8a201d2361655d4e0ea9d1c15b0a388d38094c3..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/app.py +++ /dev/null @@ -1,71 +0,0 @@ -import multiprocessing - -import streamlit as st - -from color_selection_ui import color_selection_ui -from depth_selection_ui import depth_selection_ui -from device import device -from s_multimae.run_type import RUN_TYPE, run_type -from sod_selection_ui import sod_selection_ui - -run_type.set_run_type(RUN_TYPE.HUGGINGFACE) - -class MODE: - IMAGE = 'image' - VIDEO = 'video' - WEBRTC = 'webrtc' - DEMO = 'demo' - -st.set_page_config( - page_title='RGB-D Salient Object Detection Multi-modal Masked Autoencoders (dubbed S-MultiMAE)', - page_icon="🧊", - layout="wide", - initial_sidebar_state="expanded", - menu_items={ - 'Get Help': 'https://www.extremelycoolapp.com/help', - 'Report a bug': "https://www.extremelycoolapp.com/bug", - 'About': "# This is a header. This is an *extremely* cool app!" - } -) - -st.title('RGB-D Salient Object Detection Multi-modal Masked Autoencoders (S-MultiMAE)') - -with st.expander("INTRODUCTION"): - st.text(f'''Streamlit demo for S-MultiMAE. 
- Author: Huynh Nguyen Truong Thinh - Device: {device.type} - Number of CPU(s): {multiprocessing.cpu_count()} - ''') - -with st.expander("OPTIONS"): - col1, col2 = st.columns(2) - - with col1: - mode = st.radio( - "Mode", - ( - MODE.IMAGE, MODE.VIDEO, - MODE.WEBRTC, MODE.DEMO, - ) - ) - st.markdown("---") - color = color_selection_ui() - - with col2: - depth_model = depth_selection_ui() - st.markdown("---") - sod_model = sod_selection_ui() - -if mode == MODE.IMAGE: - from image_inference import image_inference - image_inference(depth_model, sod_model, color) -elif mode == MODE.VIDEO: - from video_inference import video_inference - video_inference(depth_model, sod_model, color) -elif mode == MODE.WEBRTC: - from webrtc_app import webrtc_app - webrtc_app(depth_model, sod_model, color) -elif mode == MODE.DEMO: - from demo import demo - demo() - diff --git a/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/op/__init__.py b/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/op/__init__.py deleted file mode 100644 index d0918d92285955855be89f00096b888ee5597ce3..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/op/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .fused_act import FusedLeakyReLU, fused_leaky_relu -from .upfirdn2d import upfirdn2d diff --git a/spaces/Hanyin/anime-remove-background/app.py b/spaces/Hanyin/anime-remove-background/app.py deleted file mode 100644 index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000 --- a/spaces/Hanyin/anime-remove-background/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import gradio as gr -import huggingface_hub -import onnxruntime as rt -import numpy as np -import cv2 - - -def get_mask(img, s=1024): - img = (img / 255).astype(np.float32) - h, w = h0, w0 = img.shape[:-1] - h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s) - ph, pw = s - h, s - w - img_input = np.zeros([s, s, 3], dtype=np.float32) - img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h)) - img_input = np.transpose(img_input, (2, 0, 1)) - img_input = img_input[np.newaxis, :] - mask = rmbg_model.run(None, {'img': img_input})[0][0] - mask = np.transpose(mask, (1, 2, 0)) - mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] - mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis] - return mask - - -def rmbg_fn(img): - mask = get_mask(img) - img = (mask * img + 255 * (1 - mask)).astype(np.uint8) - mask = (mask * 255).astype(np.uint8) - img = np.concatenate([img, mask], axis=2, dtype=np.uint8) - mask = mask.repeat(3, axis=2) - return mask, img - - -if __name__ == "__main__": - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx") - rmbg_model = rt.InferenceSession(model_path, providers=providers) - app = gr.Blocks() - with app: - gr.Markdown("# Anime Remove Background\n\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=skytnt.animeseg)\n\n" - "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)") - with gr.Row(): - with gr.Column(): - input_img = gr.Image(label="input image") - examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)] - examples = gr.Dataset(components=[input_img], samples=examples_data) - run_btn = gr.Button(variant="primary") - output_mask = gr.Image(label="mask") - output_img = gr.Image(label="result", image_mode="RGBA") - 
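
Editorial aside: the compositing rule in `rmbg_fn` above, `out = mask * img + 255 * (1 - mask)`, blends each pixel toward white as the predicted foreground probability falls, before the mask itself is appended as an alpha channel. A tiny self-contained numeric check of that blend (the pixel values here are made up for illustration):

```python
import numpy as np

# Three pixels with foreground probabilities 1.0, 0.5, 0.0 (illustrative).
mask = np.array([[[1.0], [0.5], [0.0]]])        # shape (1, 3, 1), one prob/pixel
img = np.full((1, 3, 3), 10, dtype=np.float32)  # dark gray RGB pixels
out = mask * img + 255 * (1 - mask)             # mask broadcasts over channels
print(out[0, :, 0])  # [ 10.   132.5  255. ] -> kept, half-blended, white
```
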
examples.click(lambda x: x[0], [examples], [input_img]) - run_btn.click(rmbg_fn, [input_img], [output_mask, output_img]) - app.launch() diff --git a/spaces/HarryLee/eCommerceImageCaptioning/criterions/label_smoothed_cross_entropy.py b/spaces/HarryLee/eCommerceImageCaptioning/criterions/label_smoothed_cross_entropy.py deleted file mode 100644 index 73b36e750a0037cad8403e383d790f868b509d24..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/criterions/label_smoothed_cross_entropy.py +++ /dev/null @@ -1,343 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from dataclasses import dataclass, field -from typing import Optional - -import torch -import torch.nn.functional as F -import numpy as np -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from omegaconf import II - - -@dataclass -class AjustLabelSmoothedCrossEntropyCriterionConfig(FairseqDataclass): - label_smoothing: float = field( - default=0.0, - metadata={"help": "epsilon for label smoothing, 0 means no label smoothing"}, - ) - report_accuracy: bool = field( - default=False, - metadata={"help": "report accuracy metric"}, - ) - ignore_prefix_size: int = field( - default=0, - metadata={"help": "Ignore first N tokens"}, - ) - ignore_eos: bool = field( - default=False, - metadata={"help": "Ignore eos token"}, - ) - sentence_avg: bool = II("optimization.sentence_avg") - drop_worst_ratio: float = field( - default=0.0, - metadata={"help": "ratio for discarding bad samples"}, - ) - drop_worst_after: int = field( - default=0, - metadata={"help": "steps for discarding bad samples"}, - ) - use_rdrop: bool = field( - default=False, metadata={"help": "use R-Drop"} - ) - reg_alpha: float = field( - default=1.0, metadata={"help": "weight for R-Drop"} - ) - sample_patch_num: int = field( - default=196, metadata={"help": "sample patchs for v1"} - ) - constraint_range: Optional[str] = field( - default=None, - metadata={"help": "constraint range"} - ) - - -def construct_rdrop_sample(x): - if isinstance(x, dict): - for key in x: - x[key] = construct_rdrop_sample(x[key]) - return x - elif isinstance(x, torch.Tensor): - return x.repeat(2, *([1] * (x.dim()-1))) - elif isinstance(x, int): - return x * 2 - elif isinstance(x, np.ndarray): - return x.repeat(2) - else: - raise NotImplementedError - - -def kl_loss(p, q): - p_loss = F.kl_div(p, torch.exp(q), reduction='sum') - q_loss = F.kl_div(q, torch.exp(p), reduction='sum') - loss = (p_loss + q_loss) / 2 - return loss - - -def label_smoothed_nll_loss( - lprobs, target, epsilon, update_num, reduce=True, - drop_worst_ratio=0.0, drop_worst_after=0, use_rdrop=False, reg_alpha=1.0, - constraint_masks=None, constraint_start=None, constraint_end=None -): - if target.dim() == lprobs.dim() - 1: - target = target.unsqueeze(-1) - nll_loss = -lprobs.gather(dim=-1, index=target).squeeze(-1) - if constraint_masks is not None: - smooth_loss = -lprobs.masked_fill(~constraint_masks, 0).sum(dim=-1, keepdim=True).squeeze(-1) - eps_i = epsilon / (constraint_masks.sum(1) - 1 + 1e-6) - elif constraint_start is not None and constraint_end is not None: - constraint_range = [0, 1, 2, 3] + list(range(constraint_start, constraint_end)) - smooth_loss = -lprobs[:, constraint_range].sum(dim=-1, keepdim=True).squeeze(-1) - eps_i = epsilon / 
(len(constraint_range) - 1 + 1e-6) - else: - smooth_loss = -lprobs.sum(dim=-1, keepdim=True).squeeze(-1) - eps_i = epsilon / (lprobs.size(-1) - 1) - loss = (1.0 - epsilon - eps_i) * nll_loss + eps_i * smooth_loss - if drop_worst_ratio > 0 and update_num > drop_worst_after: - if use_rdrop: - true_batch_size = loss.size(0) // 2 - _, indices = torch.topk(loss[:true_batch_size], k=int(true_batch_size * (1 - drop_worst_ratio)), largest=False) - loss = torch.cat([loss[indices], loss[indices+true_batch_size]]) - nll_loss = torch.cat([nll_loss[indices], nll_loss[indices+true_batch_size]]) - lprobs = torch.cat([lprobs[indices], lprobs[indices+true_batch_size]]) - else: - loss, indices = torch.topk(loss, k=int(loss.shape[0] * (1 - drop_worst_ratio)), largest=False) - nll_loss = nll_loss[indices] - lprobs = lprobs[indices] - - ntokens = loss.numel() - nll_loss = nll_loss.sum() - loss = loss.sum() - if use_rdrop: - true_batch_size = lprobs.size(0) // 2 - p = lprobs[:true_batch_size] - q = lprobs[true_batch_size:] - if constraint_start is not None and constraint_end is not None: - constraint_range = [0, 1, 2, 3] + list(range(constraint_start, constraint_end)) - p = p[:, constraint_range] - q = q[:, constraint_range] - loss += kl_loss(p, q) * reg_alpha - - return loss, nll_loss, ntokens - - -@register_criterion( - "ajust_label_smoothed_cross_entropy", dataclass=AjustLabelSmoothedCrossEntropyCriterionConfig -) -class AjustLabelSmoothedCrossEntropyCriterion(FairseqCriterion): - def __init__( - self, - task, - sentence_avg, - label_smoothing, - ignore_prefix_size=0, - ignore_eos=False, - report_accuracy=False, - drop_worst_ratio=0, - drop_worst_after=0, - use_rdrop=False, - reg_alpha=1.0, - sample_patch_num=196, - constraint_range=None - ): - super().__init__(task) - self.sentence_avg = sentence_avg - self.eps = label_smoothing - self.ignore_prefix_size = ignore_prefix_size - self.ignore_eos = ignore_eos - self.report_accuracy = report_accuracy - self.drop_worst_ratio = drop_worst_ratio - self.drop_worst_after = drop_worst_after - self.use_rdrop = use_rdrop - self.reg_alpha = reg_alpha - self.sample_patch_num = sample_patch_num - - self.constraint_start = None - self.constraint_end = None - if constraint_range is not None: - constraint_start, constraint_end = constraint_range.split(',') - self.constraint_start = int(constraint_start) - self.constraint_end = int(constraint_end) - - def forward(self, model, sample, update_num=0, reduce=True): - """Compute the loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - if isinstance(sample, list): - if self.sample_patch_num > 0: - sample[0]['net_input']['sample_patch_num'] = self.sample_patch_num - loss_v1, sample_size_v1, logging_output_v1 = self.forward(model, sample[0], update_num, reduce) - loss_v2, sample_size_v2, logging_output_v2 = self.forward(model, sample[1], update_num, reduce) - loss = loss_v1 / sample_size_v1 + loss_v2 / sample_size_v2 - sample_size = 1 - logging_output = { - "loss": loss.data, - "loss_v1": loss_v1.data, - "loss_v2": loss_v2.data, - "nll_loss": logging_output_v1["nll_loss"].data / sample_size_v1 + logging_output_v2["nll_loss"].data / sample_size_v2, - "ntokens": logging_output_v1["ntokens"] + logging_output_v2["ntokens"], - "nsentences": logging_output_v1["nsentences"] + logging_output_v2["nsentences"], - "sample_size": 1, - "sample_size_v1": sample_size_v1, - "sample_size_v2": sample_size_v2, - } - return loss, sample_size, logging_output - - if self.use_rdrop: - construct_rdrop_sample(sample) - - net_output = model(**sample["net_input"]) - loss, nll_loss, ntokens = self.compute_loss(model, net_output, sample, update_num, reduce=reduce) - sample_size = ( - sample["target"].size(0) if self.sentence_avg else ntokens - ) - logging_output = { - "loss": loss.data, - "nll_loss": nll_loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["nsentences"], - "sample_size": sample_size, - } - if self.report_accuracy: - n_correct, total = self.compute_accuracy(model, net_output, sample) - logging_output["n_correct"] = utils.item(n_correct.data) - logging_output["total"] = utils.item(total.data) - return loss, sample_size, logging_output - - def get_lprobs_and_target(self, model, net_output, sample): - conf = sample['conf'][:, None, None] if 'conf' in sample and sample['conf'] is not None else 1 - constraint_masks = None - if "constraint_masks" in sample and sample["constraint_masks"] is not None: - constraint_masks = sample["constraint_masks"] - net_output[0].masked_fill_(~constraint_masks, -math.inf) - if self.constraint_start is not None and self.constraint_end is not None: - net_output[0][:, :, 4:self.constraint_start] = -math.inf - net_output[0][:, :, self.constraint_end:] = -math.inf - lprobs = model.get_normalized_probs(net_output, log_probs=True) * conf - target = model.get_targets(sample, net_output) - if self.ignore_prefix_size > 0: - lprobs = lprobs[:, self.ignore_prefix_size :, :].contiguous() - target = target[:, self.ignore_prefix_size :].contiguous() - if constraint_masks is not None: - constraint_masks = constraint_masks[:, self.ignore_prefix_size :, :].contiguous() - if self.ignore_eos: - bsz, seq_len, embed_dim = lprobs.size() - eos_indices = target.eq(self.task.tgt_dict.eos()) - lprobs = lprobs[~eos_indices].reshape(bsz, seq_len-1, embed_dim) - target = target[~eos_indices].reshape(bsz, seq_len-1) - if constraint_masks is not None: - constraint_masks = constraint_masks[~eos_indices].reshape(bsz, seq_len-1, embed_dim) - if constraint_masks is not None: - constraint_masks = constraint_masks.view(-1, constraint_masks.size(-1)) - return lprobs.view(-1, lprobs.size(-1)), target.view(-1), constraint_masks - - def compute_loss(self, model, net_output, sample, update_num, reduce=True): - lprobs, target, constraint_masks = self.get_lprobs_and_target(model, net_output, sample) - if constraint_masks is not None: - constraint_masks = 
constraint_masks[target != self.padding_idx] - lprobs = lprobs[target != self.padding_idx] - target = target[target != self.padding_idx] - loss, nll_loss, ntokens = label_smoothed_nll_loss( - lprobs, - target, - self.eps, - update_num, - reduce=reduce, - drop_worst_ratio=self.drop_worst_ratio, - drop_worst_after=self.drop_worst_after, - use_rdrop=self.use_rdrop, - reg_alpha=self.reg_alpha, - constraint_masks=constraint_masks, - constraint_start=self.constraint_start, - constraint_end=self.constraint_end - ) - return loss, nll_loss, ntokens - - def compute_accuracy(self, model, net_output, sample): - lprobs, target = self.get_lprobs_and_target(model, net_output, sample) - mask = target.ne(self.padding_idx) - n_correct = torch.sum( - lprobs.argmax(1).masked_select(mask).eq(target.masked_select(mask)) - ) - total = torch.sum(mask) - return n_correct, total - - @classmethod - def reduce_metrics(cls, logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - loss_sum_v1 = sum(log.get("loss_v1", 0) for log in logging_outputs) - loss_sum_v2 = sum(log.get("loss_v2", 0) for log in logging_outputs) - nll_loss_sum = sum(log.get("nll_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - sample_size_v1 = sum(log.get("sample_size_v1", 0) for log in logging_outputs) - sample_size_v2 = sum(log.get("sample_size_v2", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size, sample_size, round=3 - ) - metrics.log_scalar( - "loss_v1", loss_sum_v1 / max(sample_size_v1, 1), max(sample_size_v1, 1), round=3 - ) - metrics.log_scalar( - "loss_v2", loss_sum_v2 / max(sample_size_v2, 1), max(sample_size_v2, 1), round=3 - ) - metrics.log_scalar( - "nll_loss", nll_loss_sum / sample_size, ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - - metrics.log_scalar( - "ntokens", ntokens, 1, round=3 - ) - metrics.log_scalar( - "nsentences", nsentences, 1, round=3 - ) - metrics.log_scalar( - "sample_size", sample_size, 1, round=3 - ) - metrics.log_scalar( - "sample_size_v1", sample_size_v1, 1, round=3 - ) - metrics.log_scalar( - "sample_size_v2", sample_size_v2, 1, round=3 - ) - - total = utils.item(sum(log.get("total", 0) for log in logging_outputs)) - if total > 0: - metrics.log_scalar("total", total) - n_correct = utils.item( - sum(log.get("n_correct", 0) for log in logging_outputs) - ) - metrics.log_scalar("n_correct", n_correct) - metrics.log_derived( - "accuracy", - lambda meters: round( - meters["n_correct"].sum * 100.0 / meters["total"].sum, 3 - ) - if meters["total"].sum > 0 - else float("nan"), - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. 
- """ - return True diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh deleted file mode 100644 index c2edcefede2da3b6a991b9c8fbc78c96d46d27cb..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh +++ /dev/null @@ -1,35 +0,0 @@ -#!/usr/bin/env bash - -langdir="" -lmdir="" - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -arpa_lm=$1 -data=$2 - -if [ -z $langdir ]; then - langdir=$data/lang -fi -if [ -z $lmdir ]; then - lmdir=$data/lang_test -fi - -if [ ! -d $langdir ]; then - echo "$langdir not found. run local/prepare_lang.sh first" && exit 1 -fi - -mkdir -p $lmdir -cp -r $langdir/* $lmdir - -if [[ "$arpa_lm" == *.gz ]]; then - gunzip -c $arpa_lm | arpa2fst --disambig-symbol=#0 --read-symbol-table=$lmdir/words.txt - $lmdir/G.fst -else - arpa2fst --disambig-symbol=#0 --read-symbol-table=$lmdir/words.txt $arpa_lm $lmdir/G.fst -fi -fstisstochastic $lmdir/G.fst -utils/validate_lang.pl $lmdir || exit 1 - -echo "done preparing lm ($lmdir)" diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.64cd2c53.css b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.64cd2c53.css deleted file mode 100644 index 0107402b5cf065d10c4cfd756200b56b923f0cc9..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.64cd2c53.css +++ /dev/null @@ -1 +0,0 @@ -input::-webkit-outer-spin-button,input::-webkit-inner-spin-button{-webkit-appearance:none;margin:0}input{-moz-appearance:textfield}.input-number{--tw-shadow:0 1px 3px 0 rgb(0 0 0 / .1), 0 1px 2px -1px rgb(0 0 0 / .1);--tw-shadow-colored:0 1px 3px 0 var(--tw-shadow-color), 0 1px 2px -1px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow);transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,backdrop-filter;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.input-number:hover{--tw-shadow:0 4px 6px -1px rgb(0 0 0 / .1), 0 2px 4px -2px rgb(0 0 0 / .1);--tw-shadow-colored:0 4px 6px -1px var(--tw-shadow-color), 0 2px 4px -2px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.dark .input-number{--tw-bg-opacity:1;background-color:rgb(31 41 55 / var(--tw-bg-opacity))}.input-dropdown .dark .selector{background-color:#0b0f19;--tw-bg-opacity:1;background-color:rgb(11 15 25 / var(--tw-bg-opacity))}.input-dropdown .selector{--tw-bg-opacity:1;background-color:rgb(255 255 255 / var(--tw-bg-opacity));--tw-shadow:0 1px 3px 0 rgb(0 0 0 / .1), 0 1px 2px -1px rgb(0 0 0 / .1);--tw-shadow-colored:0 1px 3px 0 var(--tw-shadow-color), 0 1px 2px -1px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow);transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,backdrop-filter;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.input-dropdown .selector:hover{--tw-shadow:0 4px 6px -1px 
rgb(0 0 0 / .1), 0 2px 4px -2px rgb(0 0 0 / .1);--tw-shadow-colored:0 4px 6px -1px var(--tw-shadow-color), 0 2px 4px -2px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.input-dropdown .dark .selector{--tw-bg-opacity:1;background-color:rgb(31 41 55 / var(--tw-bg-opacity))}.input-dropdown .dropdown-menu{--tw-shadow:0 1px 3px 0 rgb(0 0 0 / .1), 0 1px 2px -1px rgb(0 0 0 / .1);--tw-shadow-colored:0 1px 3px 0 var(--tw-shadow-color), 0 1px 2px -1px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.input-dropdown .dark .dropdown-item{background-color:#0b0f19;--tw-bg-opacity:1;background-color:rgb(11 15 25 / var(--tw-bg-opacity))}.input-dropdown .dropdown-item{--tw-bg-opacity:1;background-color:rgb(255 255 255 / var(--tw-bg-opacity))}.input-dropdown .dropdown-item:hover{font-weight:600}.input-dropdown .dark .dropdown-item{--tw-bg-opacity:1;background-color:rgb(31 41 55 / var(--tw-bg-opacity))}.selected.svelte-r8ethh .check.svelte-r8ethh{opacity:1}.input-checkbox.svelte-r8ethh .dark .checkbox-item.svelte-r8ethh{background-color:#0b0f19;--tw-bg-opacity:1;background-color:rgb(11 15 25 / var(--tw-bg-opacity))}.input-checkbox.svelte-r8ethh .checkbox-item.svelte-r8ethh{--tw-bg-opacity:1;background-color:rgb(255 255 255 / var(--tw-bg-opacity));--tw-shadow:0 1px 3px 0 rgb(0 0 0 / .1), 0 1px 2px -1px rgb(0 0 0 / .1);--tw-shadow-colored:0 1px 3px 0 var(--tw-shadow-color), 0 1px 2px -1px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow);transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,backdrop-filter;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.input-checkbox.svelte-r8ethh .checkbox-item.svelte-r8ethh:hover{--tw-shadow:0 4px 6px -1px rgb(0 0 0 / .1), 0 2px 4px -2px rgb(0 0 0 / .1);--tw-shadow-colored:0 4px 6px -1px var(--tw-shadow-color), 0 2px 4px -2px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.input-checkbox.svelte-r8ethh .dark .checkbox-item.svelte-r8ethh{--tw-bg-opacity:1;background-color:rgb(31 41 55 / var(--tw-bg-opacity))}.input-checkbox.svelte-r8ethh .checkbox-item.selected.svelte-r8ethh{--tw-bg-opacity:1;background-color:rgb(245 158 11 / var(--tw-bg-opacity));--tw-text-opacity:1;color:rgb(255 255 255 / var(--tw-text-opacity))}.input-checkbox.svelte-r8ethh .dark .checkbox-item.selected.svelte-r8ethh{--tw-bg-opacity:1;background-color:rgb(220 38 38 / var(--tw-bg-opacity))}.selected.svelte-h5sk3f .check.svelte-h5sk3f{opacity:1}.input-checkbox-group.svelte-h5sk3f .dark .checkbox-item.svelte-h5sk3f{background-color:#0b0f19;--tw-bg-opacity:1;background-color:rgb(11 15 25 / var(--tw-bg-opacity))}.input-checkbox-group.svelte-h5sk3f .checkbox-item.svelte-h5sk3f{--tw-bg-opacity:1;background-color:rgb(255 255 255 / var(--tw-bg-opacity));--tw-shadow:0 1px 3px 0 rgb(0 0 0 / .1), 0 1px 2px -1px rgb(0 0 0 / .1);--tw-shadow-colored:0 1px 3px 0 var(--tw-shadow-color), 0 1px 2px -1px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 
#0000),var(--tw-shadow);transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,backdrop-filter;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.input-checkbox-group.svelte-h5sk3f .checkbox-item.svelte-h5sk3f:hover{--tw-shadow:0 4px 6px -1px rgb(0 0 0 / .1), 0 2px 4px -2px rgb(0 0 0 / .1);--tw-shadow-colored:0 4px 6px -1px var(--tw-shadow-color), 0 2px 4px -2px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.input-checkbox-group.svelte-h5sk3f .dark .checkbox-item.svelte-h5sk3f{--tw-bg-opacity:1;background-color:rgb(31 41 55 / var(--tw-bg-opacity))}.input-checkbox-group.svelte-h5sk3f .checkbox.svelte-h5sk3f{--tw-bg-opacity:1;background-color:rgb(243 244 246 / var(--tw-bg-opacity));transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,backdrop-filter;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.input-checkbox-group.svelte-h5sk3f .dark .checkbox.svelte-h5sk3f{--tw-bg-opacity:1;background-color:rgb(156 163 175 / var(--tw-bg-opacity))}.input-checkbox-group.svelte-h5sk3f .checkbox-item.selected.svelte-h5sk3f{--tw-bg-opacity:1;background-color:rgb(245 158 11 / var(--tw-bg-opacity));--tw-text-opacity:1;color:rgb(255 255 255 / var(--tw-text-opacity))}.input-checkbox-group.svelte-h5sk3f .dark .checkbox-item.selected.svelte-h5sk3f{--tw-bg-opacity:1;background-color:rgb(220 38 38 / var(--tw-bg-opacity))}.input-checkbox-group.svelte-h5sk3f .selected .checkbox.svelte-h5sk3f{--tw-bg-opacity:1;background-color:rgb(217 119 6 / var(--tw-bg-opacity))}.input-checkbox-group.svelte-h5sk3f .dark .selected .checkbox.svelte-h5sk3f{--tw-bg-opacity:1;background-color:rgb(185 28 28 / var(--tw-bg-opacity))}.range.svelte-3aijhr.svelte-3aijhr::-webkit-slider-thumb{-webkit-appearance:none;height:1.25rem;width:1.25rem;cursor:pointer;appearance:none;border-radius:.25rem}.range.svelte-3aijhr.svelte-3aijhr::-moz-range-thumb{height:1.25rem;width:1.25rem;cursor:pointer;appearance:none;border-radius:.25rem}.input-slider.svelte-3aijhr .dark .range.svelte-3aijhr{background-color:#0b0f19;--tw-bg-opacity:1;background-color:rgb(11 15 25 / var(--tw-bg-opacity))}.input-slider.svelte-3aijhr .range.svelte-3aijhr{height:.75rem;--tw-bg-opacity:1;background-color:rgb(255 255 255 / var(--tw-bg-opacity));--tw-shadow:0 1px 3px 0 rgb(0 0 0 / .1), 0 1px 2px -1px rgb(0 0 0 / .1);--tw-shadow-colored:0 1px 3px 0 var(--tw-shadow-color), 0 1px 2px -1px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow);transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,backdrop-filter;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.input-slider.svelte-3aijhr .range.svelte-3aijhr:hover{--tw-shadow:0 4px 6px -1px rgb(0 0 0 / .1), 0 2px 4px -2px rgb(0 0 0 / .1);--tw-shadow-colored:0 4px 6px -1px var(--tw-shadow-color), 0 2px 4px -2px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.input-slider.svelte-3aijhr .dark .range.svelte-3aijhr{--tw-bg-opacity:1;background-color:rgb(31 41 55 / var(--tw-bg-opacity))}.input-slider.svelte-3aijhr .range.svelte-3aijhr::-webkit-slider-thumb{background-image:linear-gradient(to 
bottom,var(--tw-gradient-stops));--tw-gradient-from:#fbbf24;--tw-gradient-to:rgb(251 191 36 / 0);--tw-gradient-stops:var(--tw-gradient-from), var(--tw-gradient-to);--tw-gradient-to:#f59e0b;--tw-shadow:0 1px 3px 0 rgb(0 0 0 / .1), 0 1px 2px -1px rgb(0 0 0 / .1);--tw-shadow-colored:0 1px 3px 0 var(--tw-shadow-color), 0 1px 2px -1px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.input-slider.svelte-3aijhr .dark .range.svelte-3aijhr::-webkit-slider-thumb{--tw-gradient-from:#ef4444;--tw-gradient-to:rgb(239 68 68 / 0);--tw-gradient-stops:var(--tw-gradient-from), var(--tw-gradient-to);--tw-gradient-to:#dc2626}.input-slider.svelte-3aijhr .range.svelte-3aijhr::-moz-range-thumb{background-image:linear-gradient(to bottom,var(--tw-gradient-stops));--tw-gradient-from:#fbbf24;--tw-gradient-to:rgb(251 191 36 / 0);--tw-gradient-stops:var(--tw-gradient-from), var(--tw-gradient-to);--tw-gradient-to:#f59e0b;--tw-shadow:0 1px 3px 0 rgb(0 0 0 / .1), 0 1px 2px -1px rgb(0 0 0 / .1);--tw-shadow-colored:0 1px 3px 0 var(--tw-shadow-color), 0 1px 2px -1px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.input-radio.svelte-145r163 .dark .radio-item.svelte-145r163{background-color:#0b0f19;--tw-bg-opacity:1;background-color:rgb(11 15 25 / var(--tw-bg-opacity))}.input-radio.svelte-145r163 .radio-item.svelte-145r163{--tw-bg-opacity:1;background-color:rgb(255 255 255 / var(--tw-bg-opacity));--tw-shadow:0 1px 3px 0 rgb(0 0 0 / .1), 0 1px 2px -1px rgb(0 0 0 / .1);--tw-shadow-colored:0 1px 3px 0 var(--tw-shadow-color), 0 1px 2px -1px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow);transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,backdrop-filter;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.input-radio.svelte-145r163 .radio-item.svelte-145r163:hover{--tw-shadow:0 4px 6px -1px rgb(0 0 0 / .1), 0 2px 4px -2px rgb(0 0 0 / .1);--tw-shadow-colored:0 4px 6px -1px var(--tw-shadow-color), 0 2px 4px -2px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.input-radio.svelte-145r163 .dark .radio-item.svelte-145r163{--tw-bg-opacity:1;background-color:rgb(31 41 55 / var(--tw-bg-opacity))}.input-radio.svelte-145r163 .radio-circle.svelte-145r163{box-sizing:border-box;height:1rem;width:1rem;border-radius:9999px}.input-radio.svelte-145r163 .radio-item.selected.svelte-145r163{--tw-bg-opacity:1;background-color:rgb(245 158 11 / var(--tw-bg-opacity));--tw-text-opacity:1;color:rgb(255 255 255 / var(--tw-text-opacity));--tw-shadow:0 1px 3px 0 rgb(0 0 0 / .1), 0 1px 2px -1px rgb(0 0 0 / .1);--tw-shadow-colored:0 1px 3px 0 var(--tw-shadow-color), 0 1px 2px -1px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.input-radio.svelte-145r163 .dark .radio-item.selected.svelte-145r163{--tw-bg-opacity:1;background-color:rgb(220 38 38 / var(--tw-bg-opacity))} diff --git a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/modules/diffusionmodules/model.py b/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/modules/diffusionmodules/model.py deleted file mode 100644 index d3a5db6aa2ef915e270f1ae135e4a9918fdd884c..0000000000000000000000000000000000000000 --- 
a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/modules/diffusionmodules/model.py +++ /dev/null @@ -1,776 +0,0 @@ -# pytorch_diffusion + derived encoder decoder -import math -import torch -import torch.nn as nn -import numpy as np - - -def get_timestep_embedding(timesteps, embedding_dim): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: - From Fairseq. - Build sinusoidal embeddings. - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". - """ - assert len(timesteps.shape) == 1 - - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb) - emb = emb.to(device=timesteps.device) - emb = timesteps.float()[:, None] * emb[None, :] - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) - if embedding_dim % 2 == 1: # zero pad - emb = torch.nn.functional.pad(emb, (0,1,0,0)) - return emb - - -def nonlinearity(x): - # swish - return x*torch.sigmoid(x) - - -def Normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -class Upsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest") - if self.with_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=2, - padding=0) - - def forward(self, x): - if self.with_conv: - pad = (0,1,0,1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - else: - x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2) - return x - - -class ResnetBlock(nn.Module): - def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False, - dropout, temb_channels=512): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - - self.norm1 = Normalize(in_channels) - self.conv1 = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if temb_channels > 0: - self.temb_proj = torch.nn.Linear(temb_channels, - out_channels) - self.norm2 = Normalize(out_channels) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d(out_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - self.conv_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - else: - self.nin_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x, temb): - h = x - h = self.norm1(h) - h = nonlinearity(h) - h = self.conv1(h) - - if temb is not None: - h = h + self.temb_proj(nonlinearity(temb))[:,:,None,None] - - h = self.norm2(h) - h = nonlinearity(h) - h = self.dropout(h) - h = self.conv2(h) - - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - x 
= self.conv_shortcut(x) - else: - x = self.nin_shortcut(x) - - return x+h - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = q.reshape(b,c,h*w) - q = q.permute(0,2,1) # b,hw,c - k = k.reshape(b,c,h*w) # b,c,hw - w_ = torch.bmm(q,k) # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j] - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b,c,h*w) - w_ = w_.permute(0,2,1) # b,hw,hw (first hw of k, second of q) - h_ = torch.bmm(v,w_) # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j] - h_ = h_.reshape(b,c,h,w) - - h_ = self.proj_out(h_) - - return x+h_ - - -class Model(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, use_timestep=True): - super().__init__() - self.ch = ch - self.temb_ch = self.ch*4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList([ - torch.nn.Linear(self.ch, - self.temb_ch), - torch.nn.Linear(self.temb_ch, - self.temb_ch), - ]) - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - skip_in = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - if i_block == self.num_res_blocks: - skip_in = ch*in_ch_mult[i_level] - block.append(ResnetBlock(in_channels=block_in+skip_in, - out_channels=block_out, - 
temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - - def forward(self, x, t=None): - #assert x.shape[2] == x.shape[3] == self.resolution - - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Encoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, double_z=True, **ignore_kwargs): - super().__init__() - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - 2*z_channels if double_z else z_channels, - kernel_size=3, - stride=1, - padding=1) - - - def forward(self, x): - 
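- # Editor's note (hedged sketch): shape walk-through for the encoder path,
- # assuming square inputs and the defaults above (ch_mult=(1,2,4,8),
- # double_z=True). The spatial size halves len(ch_mult)-1 = 3 times, so a
- # (B, in_channels, 256, 256) input reaches the middle blocks at
- # (B, ch*8, 32, 32) and conv_out returns 2*z_channels channels, e.g.:
- #     enc = Encoder(ch=64, out_ch=3, num_res_blocks=2, attn_resolutions=[32],
- #                   in_channels=3, resolution=256, z_channels=16)
- #     z = enc(torch.randn(1, 3, 256, 256))  # -> torch.Size([1, 32, 32, 32])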
#assert x.shape[2] == x.shape[3] == self.resolution, "{}, {}, {}".format(x.shape[2], x.shape[3], self.resolution) - - # timestep embedding - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Decoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, give_pre_end=False, **ignorekwargs): - super().__init__() - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - self.give_pre_end = give_pre_end - - # compute in_ch_mult, block_in and curr_res at lowest res - in_ch_mult = (1,)+tuple(ch_mult) - block_in = ch*ch_mult[self.num_resolutions-1] - curr_res = resolution // 2**(self.num_resolutions-1) - self.z_shape = (1,z_channels,curr_res,curr_res) - print("Working with z of shape {} = {} dimensions.".format( - self.z_shape, np.prod(self.z_shape))) - - # z to block_in - self.conv_in = torch.nn.Conv2d(z_channels, - block_in, - kernel_size=3, - stride=1, - padding=1) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, z): - #assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block](h, temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - if self.give_pre_end: - return h - - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class VUNet(nn.Module): - def 
__init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, - in_channels, c_channels, - resolution, z_channels, use_timestep=False, **ignore_kwargs): - super().__init__() - self.ch = ch - self.temb_ch = self.ch*4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList([ - torch.nn.Linear(self.ch, - self.temb_ch), - torch.nn.Linear(self.temb_ch, - self.temb_ch), - ]) - - # downsampling - self.conv_in = torch.nn.Conv2d(c_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - self.z_in = torch.nn.Conv2d(z_channels, - block_in, - kernel_size=1, - stride=1, - padding=0) - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=2*block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - skip_in = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - if i_block == self.num_res_blocks: - skip_in = ch*in_ch_mult[i_level] - block.append(ResnetBlock(in_channels=block_in+skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - - def forward(self, x, z): - #assert x.shape[2] == x.shape[3] == self.resolution - - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - z 
= self.z_in(z) - h = torch.cat((h,z),dim=1) - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class SimpleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, *args, **kwargs): - super().__init__() - self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1), - ResnetBlock(in_channels=in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=2 * in_channels, - out_channels=4 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=4 * in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - nn.Conv2d(2*in_channels, in_channels, 1), - Upsample(in_channels, with_conv=True)]) - # end - self.norm_out = Normalize(in_channels) - self.conv_out = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - for i, layer in enumerate(self.model): - if i in [1,2,3]: - x = layer(x, None) - else: - x = layer(x) - - h = self.norm_out(x) - h = nonlinearity(h) - x = self.conv_out(h) - return x - - -class UpsampleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution, - ch_mult=(2,2), dropout=0.0): - super().__init__() - # upsampling - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - block_in = in_channels - curr_res = resolution // 2 ** (self.num_resolutions - 1) - self.res_blocks = nn.ModuleList() - self.upsample_blocks = nn.ModuleList() - for i_level in range(self.num_resolutions): - res_block = [] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - res_block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - self.res_blocks.append(nn.ModuleList(res_block)) - if i_level != self.num_resolutions - 1: - self.upsample_blocks.append(Upsample(block_in, True)) - curr_res = curr_res * 2 - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - # upsampling - h = x - for k, i_level in enumerate(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.res_blocks[i_level][i_block](h, None) - if i_level != self.num_resolutions - 1: - h = self.upsample_blocks[k](h) - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - diff --git a/spaces/HughAA/IPQA/README.md b/spaces/HughAA/IPQA/README.md deleted file mode 100644 index 4bfcb2260840a4c3f111de9ad3c314e4d7ad944a..0000000000000000000000000000000000000000 --- a/spaces/HughAA/IPQA/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: IPQA -emoji: 🦀 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HugsVision/Skin-Cancer/app.py b/spaces/HugsVision/Skin-Cancer/app.py deleted 
file mode 100644 index 328ea4bffa9f37713f6444660df82eb26b02e51f..0000000000000000000000000000000000000000 --- a/spaces/HugsVision/Skin-Cancer/app.py +++ /dev/null @@ -1,118 +0,0 @@ -import gradio as gr - -import numpy as np -from PIL import Image - -from transformers import DeiTFeatureExtractor, DeiTForImageClassification -from hugsvision.inference.VisionClassifierInference import VisionClassifierInference -from hugsvision.inference.TorchVisionClassifierInference import TorchVisionClassifierInference - -models_name = [ - "VGG16", - "DeiT", - "ShuffleNetV2", - "MobileNetV2", - "DenseNet121", -] - -radio = gr.inputs.Radio(models_name, default="DenseNet121", type="value") - -def predict_image(image, model_name): - - image = Image.fromarray(np.uint8(image)).convert('RGB') - - model_path = "./models/" + model_name - - if model_name == "DeiT": - - model = VisionClassifierInference( - feature_extractor = DeiTFeatureExtractor.from_pretrained(model_path), - model = DeiTForImageClassification.from_pretrained(model_path), - ) - - else: - - model = TorchVisionClassifierInference( - model_path = model_path - ) - - pred = model.predict_image(img=image, return_str=False) - - for key in pred.keys(): - pred[key] = pred[key]/100 - - return pred - -id2label = ["akiec", "bcc", "bkl", "df", "mel", "nv", "vasc"] - -samples = [["images/" + p + ".jpg"] for p in id2label] -print(samples) - -image = gr.inputs.Image(shape=(224, 224), label="Upload Your Image Here") -label = gr.outputs.Label(num_top_classes=len(id2label)) - -interface = gr.Interface( - fn=predict_image, - inputs=[image,radio], - outputs=label, - capture_session=True, - allow_flagging=False, - thumbnail="ressources/thumbnail.png", - article=""" - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-      <table>
-        <tr><th>Model</th><th>Accuracy</th><th>Size</th></tr>
-        <tr><td>VGG16</td><td>38.27%</td><td>512.0 MB</td></tr>
-        <tr><td>DeiT</td><td>71.60%</td><td>327.0 MB</td></tr>
-        <tr><td>DenseNet121</td><td>77.78%</td><td>27.1 MB</td></tr>
-        <tr><td>MobileNetV2</td><td>75.31%</td><td>8.77 MB</td></tr>
-        <tr><td>ShuffleNetV2</td><td>76.54%</td><td>4.99 MB</td></tr>
-      </table>
      - - """, - theme="darkhuggingface", - title="HAM10000: Training and using a TorchVision Image Classifier in 5 min to identify skin cancer", - description="A fast and easy tutorial to train a TorchVision Image Classifier that can help dermatologist in their identification procedures Melanoma cases with HugsVision and HAM10000 dataset.", - allow_screenshot=True, - show_tips=False, - encrypt=True, - examples=samples, -) -interface.launch() \ No newline at end of file diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/train.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/train.py deleted file mode 100644 index 0fc80bf4aacf143feaf08575eb285910c0c8ce0a..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/train.py +++ /dev/null @@ -1,297 +0,0 @@ -import logging -logging.getLogger('matplotlib').setLevel(logging.WARNING) -import os -import json -import argparse -import itertools -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler - -import modules.commons as commons -import utils -from data_utils import TextAudioSpeakerLoader, TextAudioCollate -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, -) -from modules.losses import ( - kl_loss, - generator_loss, discriminator_loss, feature_loss -) - -from modules.mel_processing import mel_spectrogram_torch, spec_to_mel_torch - -torch.backends.cudnn.benchmark = True -global_step = 0 - - -# os.environ['TORCH_DISTRIBUTED_DEBUG'] = 'INFO' - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." 
- hps = utils.get_hparams() - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = hps.train.port - - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - # for pytorch on win, backend use gloo - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - collate_fn = TextAudioCollate() - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps) - train_loader = DataLoader(train_dataset, num_workers=8, shuffle=False, pin_memory=True, - batch_size=hps.train.batch_size,collate_fn=collate_fn) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps) - eval_loader = DataLoader(eval_dataset, num_workers=1, shuffle=False, - batch_size=1, pin_memory=False, - drop_last=False, collate_fn=collate_fn) - - net_g = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - net_g = DDP(net_g, device_ids=[rank]) # , find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank]) - - skip_optimizer = True - try: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer) - global_step = (epoch_str - 1) * len(train_loader) - except: - print("load old checkpoint failed...") - epoch_str = 1 - global_step = 0 - if skip_optimizer: - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, - [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, - [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d = nets - optim_g, optim_d = optims - scheduler_g, scheduler_d = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - # train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - for batch_idx, items in enumerate(train_loader): - c, f0, spec, y, spk, lengths, uv = items - g = spk.cuda(rank, 
non_blocking=True) - spec, y = spec.cuda(rank, non_blocking=True), y.cuda(rank, non_blocking=True) - c = c.cuda(rank, non_blocking=True) - f0 = f0.cuda(rank, non_blocking=True) - uv = uv.cuda(rank, non_blocking=True) - lengths = lengths.cuda(rank, non_blocking=True) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - - with autocast(enabled=hps.train.fp16_run): - y_hat, ids_slice, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), pred_lf0, norm_lf0, lf0 = net_g(c, f0, uv, spec, g=g, c_lengths=lengths, - spec_lengths=lengths) - - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - with autocast(enabled=False): - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_lf0 = F.mse_loss(pred_lf0, lf0) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl + loss_lf0 - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. 
* batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/kl": loss_kl, - "loss/g/lf0": loss_lf0}) - - # scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - # scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - # scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/lf0": utils.plot_data_to_numpy(lf0[0, 0, :].cpu().numpy(), - pred_lf0[0, 0, :].detach().cpu().numpy()), - "all/norm_lf0": utils.plot_data_to_numpy(lf0[0, 0, :].cpu().numpy(), - norm_lf0[0, 0, :].detach().cpu().numpy()) - } - - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict - ) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step)), hps.train.eval_interval, global_step) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step)), hps.train.eval_interval, global_step) - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - with torch.no_grad(): - for batch_idx, items in enumerate(eval_loader): - c, f0, spec, y, spk, _, uv = items - g = spk[:1].cuda(0) - spec, y = spec[:1].cuda(0), y[:1].cuda(0) - c = c[:1].cuda(0) - f0 = f0[:1].cuda(0) - uv= uv[:1].cuda(0) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat = generator.module.infer(c, f0, uv, g=g) - - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - audio_dict.update({ - f"gen/audio_{batch_idx}": y_hat[0], - f"gt/audio_{batch_idx}": y[0] - }) - image_dict.update({ - f"gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()), - "gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy()) - }) - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - - -if __name__ == "__main__": - main() diff --git a/spaces/Illumotion/Koboldcpp/examples/jeopardy/jeopardy.sh b/spaces/Illumotion/Koboldcpp/examples/jeopardy/jeopardy.sh deleted file mode 100644 index 9bdbc755c13a7894fc0bc48af1d41acabbe6b254..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/jeopardy/jeopardy.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash -set -e - -MODEL=./models/ggml-vicuna-13b-1.1-q4_0.bin -MODEL_NAME=Vicuna - -# exec options -prefix="Human: " # Ex. 
Vicuna uses "Human: " -opts="--temp 0 -n 80" # additional flags -nl=' -' -introduction="You will be playing a game of Jeopardy. Simply answer the question in the correct format (Ex. What is Paris, or Who is George Washington)." - -# file options -question_file=./examples/jeopardy/questions.txt -touch ./examples/jeopardy/results/$MODEL_NAME.txt -output_file=./examples/jeopardy/results/$MODEL_NAME.txt - -counter=1 - -echo 'Running' -while IFS= read -r question -do - exe_cmd="./main -p "\"$prefix$introduction$nl$prefix$question\"" "$opts" -m ""\"$MODEL\""" >> ""\"$output_file\"" - echo $counter - echo "Current Question: $question" - eval "$exe_cmd" - echo -e "\n------" >> $output_file - counter=$((counter+1)) -done < "$question_file" diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/__init__.py b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Jamel887/Rv-percobaan887/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/Jamel887/Rv-percobaan887/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/JosephusCheung/ACertainsStrategyTalk/2.html b/spaces/JosephusCheung/ACertainsStrategyTalk/2.html deleted file mode 100644 index aae771b29476c213ebcade487c9e9b38b7c85c07..0000000000000000000000000000000000000000 --- a/spaces/JosephusCheung/ACertainsStrategyTalk/2.html +++ /dev/null @@ -1,102 +0,0 @@ - - - - - - - - - -

-Summary on Certains
-CertainModel: the main inference model; prompt input has good accuracy.
-CertainThing: Anythingv3 style (improved for poorly written prompts, but less freedom of creation).
-Certainty: a balanced model, better suited for further Dreambooth training and finetuning.
      - - diff --git a/spaces/Kabriske/Multilingual_Video_Subtitler/video_to_audio_converter.py b/spaces/Kabriske/Multilingual_Video_Subtitler/video_to_audio_converter.py deleted file mode 100644 index d90098387454598ca5d50e0656332bf9fd10084b..0000000000000000000000000000000000000000 --- a/spaces/Kabriske/Multilingual_Video_Subtitler/video_to_audio_converter.py +++ /dev/null @@ -1,37 +0,0 @@ -import os -import subprocess - -import ffmpeg - -from utils import log - - -class VideoToAudioConverter: - @staticmethod - def convert(path_to_video: str, output_ext="mp3") -> str: - """Converts video to audio directly using `ffmpeg` command - with the help of subprocess module""" - log("Converts video to audio") - filename, ext = os.path.splitext(path_to_video) - subprocess.call(["ffmpeg", - "-y", - "-i", - path_to_video, - f"{filename}.{output_ext}"], - stdout=subprocess.DEVNULL, - stderr=subprocess.STDOUT) - - video_length = float(ffmpeg.probe(path_to_video)['format']['duration']) - audio_length = float(ffmpeg.probe(f"{filename}.{output_ext}")['format']['duration']) - if video_length - audio_length > 1: - raise Exception("Conversion failed") - return f"{filename}.{output_ext}" - - -if __name__ == '__main__': - video_to_audio_converter = VideoToAudioConverter() - video_to_audio_converter.convert('iPhone_14_Pro.mp4') - if os.path.exists('sample/iPhone_14_Pro.mp3'): - log("File converted successfully") - else: - log("File conversion failed") diff --git a/spaces/KarinaCardozo/PrevencionFraude/README.md b/spaces/KarinaCardozo/PrevencionFraude/README.md deleted file mode 100644 index cb386664feaac80d1f1500764e896eb7a5a5a827..0000000000000000000000000000000000000000 --- a/spaces/KarinaCardozo/PrevencionFraude/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PrevencionFraude -emoji: 📚 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kayson/InstructDiffusion/dataset/seg/coco_stuff.py b/spaces/Kayson/InstructDiffusion/dataset/seg/coco_stuff.py deleted file mode 100644 index 22a85515fe67f6b589bd8b6fc171585eb225824b..0000000000000000000000000000000000000000 --- a/spaces/Kayson/InstructDiffusion/dataset/seg/coco_stuff.py +++ /dev/null @@ -1,175 +0,0 @@ -# -------------------------------------------------------- -# InstructDiffusion -# Based on instruct-pix2pix (https://github.com/timothybrooks/instruct-pix2pix) -# Modified by Binxin Yang (tennyson@mail.ustc.edu.cn) -# -------------------------------------------------------- - -from __future__ import annotations - -import json -import math -from pathlib import Path -from typing import Any - -import numpy as np -import torch -import torchvision -from einops import rearrange -from PIL import Image -from torch.utils.data import Dataset -import cv2 -import os -import random -import copy -from glob import glob - - -class COCOStuffDataset(Dataset): - def __init__( - self, - path: str, - path_edit: str = "None", - split: str = "train", - splits: tuple[float, float, float] = (0.9, 0.05, 0.05), - crop_res: int = 256, - flip_prob: float = 0.0, - transparency: float = 0, - batch_size: int = 10, - empty_percentage: float = 0, - ): - assert split in ("train2017", "val2017") - assert sum(splits) == 1 - self.split = split - self.path = path - self.path_edit = path_edit - self.batch_size = batch_size - self.crop_res = crop_res - self.flip_prob = flip_prob - self.empty_percentage = 
empty_percentage - self.transparency = transparency - if self.split in ["train2017", "val2017"]: - file_list = sorted(glob(os.path.join(self.path, "images", self.split, "*.jpg"))) - assert len(file_list) > 0, "{} has no image".format( - os.path.join(self.path, "images", self.split) - ) - file_list = [f.split("/")[-1].replace(".jpg", "") for f in file_list] - self.files = file_list - - else: - raise ValueError("Invalid split name: {}".format(self.split)) - - seg_diverse_prompt_path = 'dataset/prompt/prompt_seg.txt' - self.seg_diverse_prompt_list=[] - with open(seg_diverse_prompt_path) as f: - line=f.readline() - while line: - line=line.strip('\n') - self.seg_diverse_prompt_list.append(line) - line=f.readline() - - color_list_file_path='dataset/prompt/color_list_train_small.txt' - self.color_list=[] - with open(color_list_file_path) as f: - line = f.readline() - while line: - line_split = line.strip('\n').split(" ") - if len(line_split)>1: - temp = [] - for i in range(4): - temp.append(line_split[i]) - self.color_list.append(temp) - line = f.readline() - - coco_label_list_path = self.path + '/labels.txt' - self.label_dict={} - with open(coco_label_list_path) as f: - line = f.readline() - while line: - line_split = line.strip('\n').split(": ") - self.label_dict[int(line_split[0])]=line_split[1] - line = f.readline() - - def __len__(self) -> int: - length=len(self.files) - return length - - def _augmentation_new(self, image, label): - - # Cropping - h, w = label.shape - if h > w: - start_h = random.randint(0, h - w) - end_h = start_h + w - image = image[start_h:end_h] - label = label[start_h:end_h] - elif h < w: - start_w = random.randint(0, w - h) - end_w = start_w + h - image = image[:, start_w:end_w] - label = label[:, start_w:end_w] - else: - pass - image = Image.fromarray(image).resize((self.crop_res, self.crop_res), resample=Image.Resampling.LANCZOS) - image = np.asarray(image, dtype=np.uint8) - label = Image.fromarray(label).resize((self.crop_res, self.crop_res), resample=Image.Resampling.NEAREST) - label = np.asarray(label, dtype=np.int64) - return image, label - - def __getitem__(self, i): - - image_id = self.files[i] - img_path = os.path.join(self.path, "images", self.split, image_id + ".jpg") - mask_path = os.path.join(self.path, "annotations", self.split, image_id + ".png") - - label = Image.open(mask_path).convert("L") - image = Image.open(img_path).convert("RGB") - label = np.asarray(label) - image = np.asarray(image) - image, label = self._augmentation_new(image,label) - - label_list = np.unique(label) - label_list = list(label_list) - label_list_rest = [i for i in range(182)] - for item in label_list_rest: - if item in label_list: - label_list_rest.remove(item) - if 255 in label_list: - label_list.remove(255) - if len(label_list)!=0: - label_idx = random.choice(label_list) - if random.uniform(0, 1) < self.empty_percentage: - label_idx = random.choice(label_list_rest) - - class_name = self.label_dict[label_idx+1] - - prompt = random.choice(self.seg_diverse_prompt_list) - color = random.choice(self.color_list) - color_name = color[0] - prompt = prompt.format(color=color_name.lower(), object=class_name.lower()) - R, G, B = color[3].split(",") - R = int(R) - G = int(G) - B = int(B) - else: - label_idx = 200 - prompt = "leave the picture as it is." 
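- # Editor's note: the code below paints the chosen class region with a
- # per-channel alpha blend, out = transparency * src + (1 - transparency) * color,
- # so transparency=0 replaces the masked pixels with the pure target color and
- # transparency=1 leaves the image unchanged.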
- mask = (label==label_idx) - image_0 = Image.fromarray(image) - image_1 = copy.deepcopy(image) - - if len(label_list)!=0: - image_1[:,:,0][mask]=self.transparency*image_1[:,:,0][mask]+(1-self.transparency)*R - image_1[:,:,1][mask]=self.transparency*image_1[:,:,1][mask]+(1-self.transparency)*G - image_1[:,:,2][mask]=self.transparency*image_1[:,:,2][mask]+(1-self.transparency)*B - - image_1 = Image.fromarray(image_1) - # return image_0, image_1, prompt - - image_0 = rearrange(2 * torch.tensor(np.array(image_0)).float() / 255 - 1, "h w c -> c h w") - image_1 = rearrange(2 * torch.tensor(np.array(image_1)).float() / 255 - 1, "h w c -> c h w") - - mask = torch.tensor(mask).float() - crop = torchvision.transforms.RandomCrop(self.crop_res) - flip = torchvision.transforms.RandomHorizontalFlip(float(self.flip_prob)) - image_0, image_1 = flip(crop(torch.cat((image_0, image_1)))).chunk(2) - return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=prompt)) \ No newline at end of file diff --git a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/sort/track.py b/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/sort/track.py deleted file mode 100644 index b81d7968bdb828ba43fd9a9968d40520f2d818b3..0000000000000000000000000000000000000000 --- a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/sort/track.py +++ /dev/null @@ -1,199 +0,0 @@ -# vim: expandtab:ts=4:sw=4 - - -class TrackState: - """ - Enumeration type for the single target track state. Newly created tracks are - classified as `tentative` until enough evidence has been collected. Then, - the track state is changed to `confirmed`. Tracks that are no longer alive - are classified as `deleted` to mark them for removal from the set of active - tracks. - """ - - Tentative = 1 - Confirmed = 2 - Deleted = 3 - - -class Track: - """ - A single target track with state space `(x, y, a, h)` and associated - velocities, where `(x, y)` is the center of the bounding box, `a` is the - aspect ratio and `h` is the height. - - Parameters - ---------- - mean : ndarray - Mean vector of the initial state distribution. - covariance : ndarray - Covariance matrix of the initial state distribution. - track_id : int - A unique track identifier. - n_init : int - Number of consecutive detections before the track is confirmed. The - track state is set to `Deleted` if a miss occurs within the first - `n_init` frames. - max_age : int - The maximum number of consecutive misses before the track state is - set to `Deleted`; this is effectively the track's lifespan. - feature : Optional[ndarray] - Feature vector of the detection this track originates from. If not None, - this feature is added to the `features` cache. - - Attributes - ---------- - mean : ndarray - Mean vector of the initial state distribution. - covariance : ndarray - Covariance matrix of the initial state distribution. - track_id : int - A unique track identifier. - hits : int - Total number of measurement updates. - age : int - Total number of frames since first occurrence. - time_since_update : int - Total number of frames since last measurement update.
- state : TrackState - The current track state. - features : List[ndarray] - A cache of features. On each measurement update, the associated feature - vector is added to this list. - - """ - - def __init__(self, mean, covariance, track_id, n_init, max_age, - feature=None): - self.mean = mean - self.covariance = covariance - self.track_id = track_id - # hits counts successful matches; it is incremented on every update() and - # once it reaches n_init the state becomes Confirmed - self.hits = 1 - self.age = 1 # duplicates the role of time_since_update - # incremented on every predict(); reset to 0 on every update() - self.time_since_update = 0 - - self.state = TrackState.Tentative # a newly created Track starts as Tentative - # each track accumulates features; every update appends the newest one - self.features = [] - if feature is not None: - self.features.append(feature) - - self._n_init = n_init - self._max_age = max_age - - def to_tlwh(self): - """Get current position in bounding box format `(top left x, top left y, - width, height)`. - - Returns - ------- - ndarray - The bounding box. - - """ - ret = self.mean[:4].copy() - ret[2] *= ret[3] - ret[:2] -= ret[2:] / 2 - return ret - - def to_tlbr(self): - """Get current position in bounding box format `(min x, min y, max x, - max y)`. - - Returns - ------- - ndarray - The bounding box. - - """ - ret = self.to_tlwh() - ret[2:] = ret[:2] + ret[2:] - return ret - - def predict(self, kf): - """Propagate the state distribution to the current time step using a - Kalman filter prediction step. - - Parameters - ---------- - kf : kalman_filter.KalmanFilter - The Kalman filter. - - """ - self.mean, self.covariance = kf.predict(self.mean, self.covariance) - self.age += 1 - self.time_since_update += 1 - - def update(self, kf, detection): - """Perform Kalman filter measurement update step and update the feature - cache. - - Parameters - ---------- - kf : kalman_filter.KalmanFilter - The Kalman filter. - detection : Detection - The associated detection. - - """ - self.mean, self.covariance = kf.update( - self.mean, self.covariance, detection.to_xyah()) - self.features.append(detection.feature) - - self.hits += 1 - self.time_since_update = 0 - # once the track has been matched on n_init consecutive frames, promote it - # from Tentative to Confirmed - if self.state == TrackState.Tentative and self.hits >= self._n_init: - self.state = TrackState.Confirmed - - def mark_missed(self): - """Mark this track as missed (no association at the current time step). - """ - # a Tentative track that fails to match any detection is deleted immediately - if self.state == TrackState.Tentative: - self.state = TrackState.Deleted - elif self.time_since_update > self._max_age: - # after more than max_age consecutive misses the track is deleted as well - self.state = TrackState.Deleted - - def is_tentative(self): - """Returns True if this track is tentative (unconfirmed).
- """ - return self.state == TrackState.Tentative - - def is_confirmed(self): - """Returns True if this track is confirmed.""" - return self.state == TrackState.Confirmed - - def is_deleted(self): - """Returns True if this track is dead and should be deleted.""" - return self.state == TrackState.Deleted diff --git a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/utils/draw.py b/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/utils/draw.py deleted file mode 100644 index bc7cb537978e86805d5d9789785a8afe67df9030..0000000000000000000000000000000000000000 --- a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/utils/draw.py +++ /dev/null @@ -1,36 +0,0 @@ -import numpy as np -import cv2 - -palette = (2 ** 11 - 1, 2 ** 15 - 1, 2 ** 20 - 1) - - -def compute_color_for_labels(label): - """ - Simple function that adds fixed color depending on the class - """ - color = [int((p * (label ** 2 - label + 1)) % 255) for p in palette] - return tuple(color) - - -def draw_boxes(img, bbox, identities=None, offset=(0,0)): - for i,box in enumerate(bbox): - x1,y1,x2,y2 = [int(i) for i in box] - x1 += offset[0] - x2 += offset[0] - y1 += offset[1] - y2 += offset[1] - # box text and bar - id = int(identities[i]) if identities is not None else 0 - color = compute_color_for_labels(id) - label = '{}{:d}'.format("", id) - t_size = cv2.getTextSize(label, cv2.FONT_HERSHEY_PLAIN, 2 , 2)[0] - cv2.rectangle(img,(x1, y1),(x2,y2),color,3) - cv2.rectangle(img,(x1, y1),(x1+t_size[0]+3,y1+t_size[1]+4), color,-1) - cv2.putText(img,label,(x1,y1+t_size[1]+4), cv2.FONT_HERSHEY_PLAIN, 2, [255,255,255], 2) - return img - - - -if __name__ == '__main__': - for i in range(82): - print(compute_color_for_labels(i)) diff --git a/spaces/Kimata/Sanskrit-TTS/utils/utils.py b/spaces/Kimata/Sanskrit-TTS/utils/utils.py deleted file mode 100644 index 07839a71a8339f90fe7eeff4dc4a6bd284330049..0000000000000000000000000000000000000000 --- a/spaces/Kimata/Sanskrit-TTS/utils/utils.py +++ /dev/null @@ -1,75 +0,0 @@ -import logging -from json import loads -from torch import load, FloatTensor -from numpy import float32 -import librosa - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() - - -def load_checkpoint(checkpoint_path, model): - checkpoint_dict = load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logging.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logging.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = 
loads(data) - - hparams = HParams(**config) - return hparams - - -def load_audio_to_torch(full_path, target_sampling_rate): - audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True) - return FloatTensor(audio.astype(float32)) diff --git a/spaces/Kirokowa/hakurei-waifu-diffusion/README.md b/spaces/Kirokowa/hakurei-waifu-diffusion/README.md deleted file mode 100644 index 57f0c312ab7e8587fc9c42626a4a3609d9008fc5..0000000000000000000000000000000000000000 --- a/spaces/Kirokowa/hakurei-waifu-diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Hakurei Waifu Diffusion -emoji: 💩 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KyanChen/RSPrompter/mmdet/evaluation/metrics/cityscapes_metric.py b/spaces/KyanChen/RSPrompter/mmdet/evaluation/metrics/cityscapes_metric.py deleted file mode 100644 index e5cdc179a3c76ef3742dd3ee6692c7deb9905459..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/evaluation/metrics/cityscapes_metric.py +++ /dev/null @@ -1,205 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import os.path as osp -import shutil -import tempfile -from collections import OrderedDict -from typing import Dict, Optional, Sequence - -import mmcv -import numpy as np -from mmengine.dist import is_main_process -from mmengine.evaluator import BaseMetric -from mmengine.logging import MMLogger - -from mmdet.registry import METRICS - -try: - import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as CSEval # noqa: E501 - import cityscapesscripts.helpers.labels as CSLabels - - from mmdet.evaluation.functional import evaluateImgLists - HAS_CITYSCAPESAPI = True -except ImportError: - HAS_CITYSCAPESAPI = False - - -@METRICS.register_module() -class CityScapesMetric(BaseMetric): - """CityScapes metric for instance segmentation. - - Args: - outfile_prefix (str): The prefix of txt and png files. The txt and - png file will be save in a directory whose path is - "outfile_prefix.results/". - seg_prefix (str, optional): Path to the directory which contains the - cityscapes instance segmentation masks. It's necessary when - training and validation. It could be None when infer on test - dataset. Defaults to None. - format_only (bool): Format the output results without perform - evaluation. It is useful when you want to format the result - to a specific format and submit it to the test server. - Defaults to False. - collect_device (str): Device name used for collecting results from - different ranks during distributed training. Must be 'cpu' or - 'gpu'. Defaults to 'cpu'. - prefix (str, optional): The prefix that will be added in the metric - names to disambiguate homonymous metrics of different evaluators. - If prefix is not provided in the argument, self.default_prefix - will be used instead. Defaults to None. - dump_matches (bool): Whether dump matches.json file during evaluating. - Defaults to False. - file_client_args (dict, optional): Arguments to instantiate the - corresponding backend in mmdet <= 3.0.0rc6. Defaults to None. - backend_args (dict, optional): Arguments to instantiate the - corresponding backend. Defaults to None. 
- """ - default_prefix: Optional[str] = 'cityscapes' - - def __init__(self, - outfile_prefix: str, - seg_prefix: Optional[str] = None, - format_only: bool = False, - collect_device: str = 'cpu', - prefix: Optional[str] = None, - dump_matches: bool = False, - file_client_args: dict = None, - backend_args: dict = None) -> None: - - if not HAS_CITYSCAPESAPI: - raise RuntimeError('Failed to import `cityscapesscripts`.' - 'Please try to install official ' - 'cityscapesscripts by ' - '"pip install cityscapesscripts"') - super().__init__(collect_device=collect_device, prefix=prefix) - - self.tmp_dir = None - self.format_only = format_only - if self.format_only: - assert outfile_prefix is not None, 'outfile_prefix must be not' - 'None when format_only is True, otherwise the result files will' - 'be saved to a temp directory which will be cleaned up at the end.' - else: - assert seg_prefix is not None, '`seg_prefix` is necessary when ' - 'computing the CityScapes metrics' - - if outfile_prefix is None: - self.tmp_dir = tempfile.TemporaryDirectory() - self.outfile_prefix = osp.join(self.tmp_dir.name, 'results') - else: - # the directory to save predicted panoptic segmentation mask - self.outfile_prefix = osp.join(outfile_prefix, 'results') # type: ignore # yapf: disable # noqa: E501 - - dir_name = osp.expanduser(self.outfile_prefix) - - if osp.exists(dir_name) and is_main_process(): - logger: MMLogger = MMLogger.get_current_instance() - logger.info('remove previous results.') - shutil.rmtree(dir_name) - os.makedirs(dir_name, exist_ok=True) - - self.backend_args = backend_args - if file_client_args is not None: - raise RuntimeError( - 'The `file_client_args` is deprecated, ' - 'please use `backend_args` instead, please refer to' - 'https://github.com/open-mmlab/mmdetection/blob/main/configs/_base_/datasets/coco_detection.py' # noqa: E501 - ) - - self.seg_prefix = seg_prefix - self.dump_matches = dump_matches - - def __del__(self) -> None: - """Clean up the results if necessary.""" - if self.tmp_dir is not None: - self.tmp_dir.cleanup() - - # TODO: data_batch is no longer needed, consider adjusting the - # parameter position - def process(self, data_batch: dict, data_samples: Sequence[dict]) -> None: - """Process one batch of data samples and predictions. The processed - results should be stored in ``self.results``, which will be used to - compute the metrics when all batches have been processed. - - Args: - data_batch (dict): A batch of data from the dataloader. - data_samples (Sequence[dict]): A batch of data samples that - contain annotations and predictions. 
- """ - for data_sample in data_samples: - # parse pred - result = dict() - pred = data_sample['pred_instances'] - filename = data_sample['img_path'] - basename = osp.splitext(osp.basename(filename))[0] - pred_txt = osp.join(self.outfile_prefix, basename + '_pred.txt') - result['pred_txt'] = pred_txt - labels = pred['labels'].cpu().numpy() - masks = pred['masks'].cpu().numpy().astype(np.uint8) - if 'mask_scores' in pred: - # some detectors use different scores for bbox and mask - mask_scores = pred['mask_scores'].cpu().numpy() - else: - mask_scores = pred['scores'].cpu().numpy() - - with open(pred_txt, 'w') as f: - for i, (label, mask, mask_score) in enumerate( - zip(labels, masks, mask_scores)): - class_name = self.dataset_meta['classes'][label] - class_id = CSLabels.name2label[class_name].id - png_filename = osp.join( - self.outfile_prefix, - basename + f'_{i}_{class_name}.png') - mmcv.imwrite(mask, png_filename) - f.write(f'{osp.basename(png_filename)} ' - f'{class_id} {mask_score}\n') - - # parse gt - gt = dict() - img_path = filename.replace('leftImg8bit.png', - 'gtFine_instanceIds.png') - gt['file_name'] = img_path.replace('leftImg8bit', 'gtFine') - - self.results.append((gt, result)) - - def compute_metrics(self, results: list) -> Dict[str, float]: - """Compute the metrics from processed results. - - Args: - results (list): The processed results of each batch. - - Returns: - Dict[str, float]: The computed metrics. The keys are the names of - the metrics, and the values are corresponding results. - """ - logger: MMLogger = MMLogger.get_current_instance() - - if self.format_only: - logger.info( - f'results are saved to {osp.dirname(self.outfile_prefix)}') - return OrderedDict() - logger.info('starts to compute metric') - - gts, preds = zip(*results) - # set global states in cityscapes evaluation API - gt_instances_file = osp.join(self.outfile_prefix, 'gtInstances.json') # type: ignore # yapf: disable # noqa: E501 - # split gt and prediction list - gts, preds = zip(*results) - CSEval.args.JSONOutput = False - CSEval.args.colorized = False - CSEval.args.gtInstancesFile = gt_instances_file - - groundTruthImgList = [gt['file_name'] for gt in gts] - predictionImgList = [pred['pred_txt'] for pred in preds] - CSEval_results = evaluateImgLists( - predictionImgList, - groundTruthImgList, - CSEval.args, - self.backend_args, - dump_matches=self.dump_matches)['averages'] - - eval_results = OrderedDict() - eval_results['mAP'] = CSEval_results['allAp'] - eval_results['AP@50'] = CSEval_results['allAp50%'] - - return eval_results diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/base_mask_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/base_mask_head.py deleted file mode 100644 index 7183d782829aa15bf12b9e2f7ade999c84d0593f..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/base_mask_head.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from abc import ABCMeta, abstractmethod -from typing import List, Tuple, Union - -from mmengine.model import BaseModule -from torch import Tensor - -from mmdet.structures import SampleList -from mmdet.utils import InstanceList, OptInstanceList, OptMultiConfig -from ..utils import unpack_gt_instances - - -class BaseMaskHead(BaseModule, metaclass=ABCMeta): - """Base class for mask heads used in One-Stage Instance Segmentation.""" - - def __init__(self, init_cfg: OptMultiConfig = None) -> None: - super().__init__(init_cfg=init_cfg) - - @abstractmethod - def loss_by_feat(self, *args, **kwargs): - """Calculate the loss based on the features extracted by the mask - head.""" - pass - - @abstractmethod - def predict_by_feat(self, *args, **kwargs): - """Transform a batch of output features extracted from the head into - mask results.""" - pass - - def loss(self, - x: Union[List[Tensor], Tuple[Tensor]], - batch_data_samples: SampleList, - positive_infos: OptInstanceList = None, - **kwargs) -> dict: - """Perform forward propagation and loss calculation of the mask head on - the features of the upstream network. - - Args: - x (list[Tensor] | tuple[Tensor]): Features from FPN. - Each has a shape (B, C, H, W). - batch_data_samples (list[:obj:`DetDataSample`]): Each item contains - the meta information of each image and corresponding - annotations. - positive_infos (list[:obj:`InstanceData`], optional): Information - of positive samples. Used when the label assignment is - done outside the MaskHead, e.g., BboxHead in - YOLACT or CondInst, etc. When the label assignment is done in - MaskHead, it would be None, like SOLO or SOLOv2. All values - in it should have shape (num_positive_samples, *). - - - Returns: - dict: A dictionary of loss components. - """ - if positive_infos is None: - outs = self(x) - else: - outs = self(x, positive_infos) - - assert isinstance(outs, tuple), 'Forward results should be a tuple, ' \ - 'even if only one item is returned' - - outputs = unpack_gt_instances(batch_data_samples) - batch_gt_instances, batch_gt_instances_ignore, batch_img_metas \ - = outputs - for gt_instances, img_metas in zip(batch_gt_instances, - batch_img_metas): - img_shape = img_metas['batch_input_shape'] - gt_masks = gt_instances.masks.pad(img_shape) - gt_instances.masks = gt_masks - - losses = self.loss_by_feat( - *outs, - batch_gt_instances=batch_gt_instances, - batch_img_metas=batch_img_metas, - positive_infos=positive_infos, - batch_gt_instances_ignore=batch_gt_instances_ignore, - **kwargs) - return losses - - def predict(self, - x: Tuple[Tensor], - batch_data_samples: SampleList, - rescale: bool = False, - results_list: OptInstanceList = None, - **kwargs) -> InstanceList: - """Test function without test-time augmentation. - - Args: - x (tuple[Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - batch_data_samples (List[:obj:`DetDataSample`]): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - results_list (list[obj:`InstanceData`], optional): Detection - results of each image after the post process. Only exist - if there is a `bbox_head`, like `YOLACT`, `CondInst`, etc. - - Returns: - list[obj:`InstanceData`]: Instance segmentation - results of each image after the post process. - Each item usually contains following keys. 
-          - scores (Tensor): Classification scores, has a shape
-            (num_instance,)
-          - labels (Tensor): Has a shape (num_instances,).
-          - masks (Tensor): Processed mask results, has a
-            shape (num_instances, h, w).
-        """
-        batch_img_metas = [
-            data_samples.metainfo for data_samples in batch_data_samples
-        ]
-        if results_list is None:
-            outs = self(x)
-        else:
-            outs = self(x, results_list)
-
-        results_list = self.predict_by_feat(
-            *outs,
-            batch_img_metas=batch_img_metas,
-            rescale=rescale,
-            results_list=results_list,
-            **kwargs)
-
-        return results_list
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/losses/kd_loss.py b/spaces/KyanChen/RSPrompter/mmdet/models/losses/kd_loss.py
deleted file mode 100644
index 0a7d5ef24a0b0d7d7390a27c7cd9cbfdbe61d823..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/losses/kd_loss.py
+++ /dev/null
@@ -1,95 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Optional
-
-import torch.nn as nn
-import torch.nn.functional as F
-from torch import Tensor
-
-from mmdet.registry import MODELS
-from .utils import weighted_loss
-
-
-@weighted_loss
-def knowledge_distillation_kl_div_loss(pred: Tensor,
-                                       soft_label: Tensor,
-                                       T: int,
-                                       detach_target: bool = True) -> Tensor:
-    r"""Loss function for knowledge distillation using KL divergence.
-
-    Args:
-        pred (Tensor): Predicted logits with shape (N, n + 1).
-        soft_label (Tensor): Target logits with shape (N, n + 1).
-        T (int): Temperature for distillation.
-        detach_target (bool): Whether to detach ``soft_label`` from automatic
-            differentiation. Defaults to True.
-
-    Returns:
-        Tensor: Loss tensor with shape (N,).
-    """
-    assert pred.size() == soft_label.size()
-    target = F.softmax(soft_label / T, dim=1)
-    if detach_target:
-        target = target.detach()
-
-    kd_loss = F.kl_div(
-        F.log_softmax(pred / T, dim=1), target, reduction='none').mean(1) * (
-            T * T)
-
-    return kd_loss
-
-
-@MODELS.register_module()
-class KnowledgeDistillationKLDivLoss(nn.Module):
-    """Loss function for knowledge distillation using KL divergence.
-
-    Args:
-        reduction (str): Options are `'none'`, `'mean'` and `'sum'`.
-        loss_weight (float): Loss weight of current loss.
-        T (int): Temperature for distillation.
-    """
-
-    def __init__(self,
-                 reduction: str = 'mean',
-                 loss_weight: float = 1.0,
-                 T: int = 10) -> None:
-        super().__init__()
-        assert T >= 1
-        self.reduction = reduction
-        self.loss_weight = loss_weight
-        self.T = T
-
-    def forward(self,
-                pred: Tensor,
-                soft_label: Tensor,
-                weight: Optional[Tensor] = None,
-                avg_factor: Optional[int] = None,
-                reduction_override: Optional[str] = None) -> Tensor:
-        """Forward function.
-
-        Args:
-            pred (Tensor): Predicted logits with shape (N, n + 1).
-            soft_label (Tensor): Target logits with shape (N, n + 1).
-            weight (Tensor, optional): The weight of loss for each
-                prediction. Defaults to None.
-            avg_factor (int, optional): Average factor that is used to average
-                the loss. Defaults to None.
-            reduction_override (str, optional): The reduction method used to
-                override the original reduction method of the loss.
-                Defaults to None.
-
-        Returns:
-            Tensor: Loss tensor.
-        """
-        assert reduction_override in (None, 'none', 'mean', 'sum')
-
-        reduction = (
-            reduction_override if reduction_override else self.reduction)
-
-        loss_kd = self.loss_weight * knowledge_distillation_kl_div_loss(
-            pred,
-            soft_label,
-            weight,
-            reduction=reduction,
-            avg_factor=avg_factor,
-            T=self.T)
-
-        return loss_kd
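Spelled out on toy tensors, the loss defined in `kd_loss.py` above is plain KL divergence between temperature-softened distributions, rescaled by T * T so that the gradient magnitude stays roughly comparable across temperatures. A self-contained sketch with made-up logits (shapes and values are illustrative only):

```python
# Worked example of the temperature-scaled KL distillation loss computed
# by `knowledge_distillation_kl_div_loss` above, on made-up logits.
import torch
import torch.nn.functional as F

T = 10                            # distillation temperature
pred = torch.randn(4, 81)         # student logits, shape (N, n + 1)
soft_label = torch.randn(4, 81)   # teacher logits, same shape

# soften the teacher distribution and stop gradients through it
target = F.softmax(soft_label / T, dim=1).detach()
kd_loss = F.kl_div(
    F.log_softmax(pred / T, dim=1), target, reduction='none').mean(1) * (T * T)
print(kd_loss.shape)  # torch.Size([4]): one loss value per sample
```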
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/point_assigner.py b/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/point_assigner.py
deleted file mode 100644
index 4da60a490b0022ac76c46db8a34f814bc9da8e2e..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/point_assigner.py
+++ /dev/null
@@ -1,155 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Optional
-
-import torch
-from mmengine.structures import InstanceData
-
-from mmdet.registry import TASK_UTILS
-from .assign_result import AssignResult
-from .base_assigner import BaseAssigner
-
-
-@TASK_UTILS.register_module()
-class PointAssigner(BaseAssigner):
-    """Assign a corresponding gt bbox or background to each point.
-
-    Each proposal will be assigned `0`, or a positive integer
-    indicating the ground truth index.
-
-    - 0: negative sample, no assigned gt
-    - positive integer: positive sample, index (1-based) of assigned gt
-    """
-
-    def __init__(self, scale: int = 4, pos_num: int = 3) -> None:
-        self.scale = scale
-        self.pos_num = pos_num
-
-    def assign(self,
-               pred_instances: InstanceData,
-               gt_instances: InstanceData,
-               gt_instances_ignore: Optional[InstanceData] = None,
-               **kwargs) -> AssignResult:
-        """Assign gt to points.
-
-        This method assigns a gt bbox to every point set; each point set
-        will be assigned the background_label (-1), or a label number.
-        -1 is background, and a non-negative number is the index (0-based)
-        of the assigned gt.
-        The assignment is done in the following steps, and the order matters.
-
-        1. assign every point to the background_label (-1)
-        2. A point is assigned to some gt bbox if
-            (i) the point is within the k closest points to the gt bbox
-            (ii) the distance between this point and the gt is smaller than
-            the distance to any other gt bbox
-
-        Args:
-            pred_instances (:obj:`InstanceData`): Instances of model
-                predictions. It includes ``priors``, and the priors can
-                be anchors or points, or the bboxes predicted by the
-                previous stage, with shape (n, 4). The bboxes predicted by
-                the current model or stage will be named ``bboxes``,
-                ``labels``, and ``scores``, the same as the ``InstanceData``
-                in other places.
-            gt_instances (:obj:`InstanceData`): Ground truth of instance
-                annotations. It usually includes ``bboxes``, with shape (k, 4),
-                and ``labels``, with shape (k, ).
-            gt_instances_ignore (:obj:`InstanceData`, optional): Instances
-                to be ignored during training. It includes ``bboxes``
-                attribute data that is ignored during training and testing.
-                Defaults to None.
-
-        Returns:
-            :obj:`AssignResult`: The assign result.
-        """
-        gt_bboxes = gt_instances.bboxes
-        gt_labels = gt_instances.labels
-        # points to be assigned, shape (n, 3) while the last
-        # dimension stands for (x, y, stride).
-        points = pred_instances.priors
-
-        num_points = points.shape[0]
-        num_gts = gt_bboxes.shape[0]
-
-        if num_gts == 0 or num_points == 0:
-            # If there is no gt, assign everything to the background
-            assigned_gt_inds = points.new_full((num_points, ),
-                                               0,
-                                               dtype=torch.long)
-            assigned_labels = points.new_full((num_points, ),
-                                              -1,
-                                              dtype=torch.long)
-            return AssignResult(
-                num_gts=num_gts,
-                gt_inds=assigned_gt_inds,
-                max_overlaps=None,
-                labels=assigned_labels)
-
-        points_xy = points[:, :2]
-        points_stride = points[:, 2]
-        points_lvl = torch.log2(
-            points_stride).int()  # [3...,4...,5...,6...,7...]
-        lvl_min, lvl_max = points_lvl.min(), points_lvl.max()
-
-        # assign gt box
-        gt_bboxes_xy = (gt_bboxes[:, :2] + gt_bboxes[:, 2:]) / 2
-        gt_bboxes_wh = (gt_bboxes[:, 2:] - gt_bboxes[:, :2]).clamp(min=1e-6)
-        scale = self.scale
-        gt_bboxes_lvl = ((torch.log2(gt_bboxes_wh[:, 0] / scale) +
-                          torch.log2(gt_bboxes_wh[:, 1] / scale)) / 2).int()
-        gt_bboxes_lvl = torch.clamp(gt_bboxes_lvl, min=lvl_min, max=lvl_max)
-
-        # stores the assigned gt index of each point
-        assigned_gt_inds = points.new_zeros((num_points, ), dtype=torch.long)
-        # stores the assigned gt dist (to this point) of each point
-        assigned_gt_dist = points.new_full((num_points, ), float('inf'))
-        points_range = torch.arange(points.shape[0])
-
-        for idx in range(num_gts):
-            gt_lvl = gt_bboxes_lvl[idx]
-            # get the index of points in this level
-            lvl_idx = gt_lvl == points_lvl
-            points_index = points_range[lvl_idx]
-            # get the points in this level
-            lvl_points = points_xy[lvl_idx, :]
-            # get the center point of gt
-            gt_point = gt_bboxes_xy[[idx], :]
-            # get width and height of gt
-            gt_wh = gt_bboxes_wh[[idx], :]
-            # compute the distance between gt center and
-            #   all points in this level
-            points_gt_dist = ((lvl_points - gt_point) / gt_wh).norm(dim=1)
-            # find the nearest k points to gt center in this level
-            min_dist, min_dist_index = torch.topk(
-                points_gt_dist, self.pos_num, largest=False)
-            # the index of nearest k points to gt center in this level
-            min_dist_points_index = points_index[min_dist_index]
-            # The less_than_recorded_index stores the index
-            #   of min_dist that is less than the assigned_gt_dist, where
-            #   assigned_gt_dist stores the dist from the previously
-            #   assigned gt (if any) to each point.
-            less_than_recorded_index = min_dist < assigned_gt_dist[
-                min_dist_points_index]
-            # The min_dist_points_index stores the index of points satisfy:
-            #   (1) it is k nearest to current gt center in this level.
-            #   (2) it is closer to current gt center than other gt center.
-            min_dist_points_index = min_dist_points_index[
-                less_than_recorded_index]
-            # assign the result
-            assigned_gt_inds[min_dist_points_index] = idx + 1
-            assigned_gt_dist[min_dist_points_index] = min_dist[
-                less_than_recorded_index]
-
-        assigned_labels = assigned_gt_inds.new_full((num_points, ), -1)
-        pos_inds = torch.nonzero(
-            assigned_gt_inds > 0, as_tuple=False).squeeze()
-        if pos_inds.numel() > 0:
-            assigned_labels[pos_inds] = gt_labels[assigned_gt_inds[pos_inds] -
-                                                  1]
-
-        return AssignResult(
-            num_gts=num_gts,
-            gt_inds=assigned_gt_inds,
-            max_overlaps=None,
-            labels=assigned_labels)
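The two `log2`-based level computations in `PointAssigner.assign` above are easiest to see on concrete numbers: a point's pyramid level comes from its stride, and a gt box's level from its size relative to `scale`. A small sketch with made-up points and one made-up box (the constants mirror the defaults above):

```python
# Toy illustration of the two level computations used by `assign` above.
# The numbers below are made up.
import torch

points = torch.tensor([[16., 16., 8.],      # (x, y, stride): level 3
                       [32., 32., 16.]])    # level 4
points_lvl = torch.log2(points[:, 2]).int()
print(points_lvl.tolist())                  # [3, 4]

gt_bboxes = torch.tensor([[10., 10., 42., 74.]])  # (x1, y1, x2, y2): 32 x 64
scale = 4
gt_wh = (gt_bboxes[:, 2:] - gt_bboxes[:, :2]).clamp(min=1e-6)
gt_lvl = ((torch.log2(gt_wh[:, 0] / scale) +
           torch.log2(gt_wh[:, 1] / scale)) / 2).int()
print(gt_lvl.tolist())  # [3]: this gt is matched against stride-8 points
```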
diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/imagenet.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/imagenet.py
deleted file mode 100644
index e309d3af7e53f9ec4072e24f7433d1f6e33d14cb..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/imagenet.py
+++ /dev/null
@@ -1,102 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Optional, Union
-
-from mmengine.logging import MMLogger
-
-from mmpretrain.registry import DATASETS
-from .categories import IMAGENET_CATEGORIES
-from .custom import CustomDataset
-
-
-@DATASETS.register_module()
-class ImageNet(CustomDataset):
-    """`ImageNet <http://www.image-net.org>`_ Dataset.
-
-    The dataset supports two kinds of annotation format. More details can be
-    found in :class:`CustomDataset`.
-
-    Args:
-        data_root (str): The root directory for ``data_prefix`` and
-            ``ann_file``. Defaults to ''.
-        data_prefix (str | dict): Prefix for training data. Defaults to ''.
-        ann_file (str): Annotation file path. Defaults to ''.
-        metainfo (dict, optional): Meta information for dataset, such as class
-            information. Defaults to None.
-        **kwargs: Other keyword arguments in :class:`CustomDataset` and
-            :class:`BaseDataset`.
-    """  # noqa: E501
-
-    IMG_EXTENSIONS = ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif')
-    METAINFO = {'classes': IMAGENET_CATEGORIES}
-
-    def __init__(self,
-                 data_root: str = '',
-                 data_prefix: Union[str, dict] = '',
-                 ann_file: str = '',
-                 metainfo: Optional[dict] = None,
-                 **kwargs):
-        kwargs = {'extensions': self.IMG_EXTENSIONS, **kwargs}
-        super().__init__(
-            data_root=data_root,
-            data_prefix=data_prefix,
-            ann_file=ann_file,
-            metainfo=metainfo,
-            **kwargs)
-
-
-@DATASETS.register_module()
-class ImageNet21k(CustomDataset):
-    """ImageNet21k Dataset.
-
-    Since the ImageNet21k dataset is extremely big and contains 21k+ classes
-    and 1.4B files, we won't provide the default categories list. Please
-    specify it from the ``classes`` argument.
-
-    Args:
-        data_root (str): The root directory for ``data_prefix`` and
-            ``ann_file``. Defaults to ''.
-        data_prefix (str | dict): Prefix for training data. Defaults to ''.
-        ann_file (str): Annotation file path. Defaults to ''.
-        metainfo (dict, optional): Meta information for dataset, such as class
-            information. Defaults to None.
-        multi_label (bool): Whether to use multi-label annotations. Not
-            implemented yet. Defaults to False.
-        **kwargs: Other keyword arguments in :class:`CustomDataset` and
-            :class:`BaseDataset`.
-    """
-
-    IMG_EXTENSIONS = ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif')
-
-    def __init__(self,
-                 data_root: str = '',
-                 data_prefix: Union[str, dict] = '',
-                 ann_file: str = '',
-                 metainfo: Optional[dict] = None,
-                 multi_label: bool = False,
-                 **kwargs):
-        if multi_label:
-            raise NotImplementedError(
-                'The `multi_label` option is not supported by now.')
-        self.multi_label = multi_label
-
-        logger = MMLogger.get_current_instance()
-
-        if not ann_file:
-            logger.warning(
-                'The ImageNet21k dataset is large, and scanning the directory '
-                'may take a long time. Consider specifying the `ann_file` to '
-                'accelerate the initialization.')
-
-        kwargs = {'extensions': self.IMG_EXTENSIONS, **kwargs}
-        super().__init__(
-            data_root=data_root,
-            data_prefix=data_prefix,
-            ann_file=ann_file,
-            metainfo=metainfo,
-            **kwargs)
-
-        if self.CLASSES is None:
-            logger.warning(
-                'The CLASSES is not stored in the `ImageNet21k` class. '
-                'Consider specifying the `classes` argument if you need '
-                'to do inference on the ImageNet-21k dataset.')
diff --git a/spaces/Kynlo/google-flan-t5-xl/app.py b/spaces/Kynlo/google-flan-t5-xl/app.py
deleted file mode 100644
index d4541bd639b01857f72279b4cad096be48ed6d11..0000000000000000000000000000000000000000
--- a/spaces/Kynlo/google-flan-t5-xl/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/google/flan-t5-xl").launch()
\ No newline at end of file
diff --git a/spaces/LUOYE-123/QQsign/README.md b/spaces/LUOYE-123/QQsign/README.md
deleted file mode 100644
index 3042be806844c4b6d92719e8afaa17d09c970d46..0000000000000000000000000000000000000000
--- a/spaces/LUOYE-123/QQsign/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: QQsign
-emoji: 🦀
-colorFrom: blue
-colorTo: purple
-sdk: docker
-pinned: false
-license: mit
-duplicated_from: CikeyQI/QQsign
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Lamai/LAMAIGPT/autogpt/workspace.py b/spaces/Lamai/LAMAIGPT/autogpt/workspace.py
deleted file mode 100644
index 451cc4d680a4c73c7a681116672a5e7964fe679b..0000000000000000000000000000000000000000
--- a/spaces/Lamai/LAMAIGPT/autogpt/workspace.py
+++ /dev/null
@@ -1,46 +0,0 @@
-from __future__ import annotations
-
-import os
-from pathlib import Path
-
-from autogpt.config import Config
-
-CFG = Config()
-
-# Set a dedicated folder for file I/O
-WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace"
-
-# Create the directory if it doesn't exist
-if not os.path.exists(WORKSPACE_PATH):
-    os.makedirs(WORKSPACE_PATH)
-
-
-def path_in_workspace(relative_path: str | Path) -> Path:
-    """Get full path for item in workspace
-
-    Parameters:
-        relative_path (str | Path): Path to translate into the workspace
-
-    Returns:
-        Path: Absolute path for the given path in the workspace
-    """
-    return safe_path_join(WORKSPACE_PATH, relative_path)
-
-
-def safe_path_join(base: Path, *paths: str | Path) -> Path:
-    """Join one or more path components, asserting the resulting path is within the workspace.
-
-    Args:
-        base (Path): The base path
-        *paths (str): The paths to join to the base path
-
-    Returns:
-        Path: The joined path
-    """
-    joined_path = base.joinpath(*paths).resolve()
-
-    if CFG.restrict_to_workspace and not joined_path.is_relative_to(base):
-        raise ValueError(
-            f"Attempted to access path '{joined_path}' outside of workspace '{base}'."
-        )
-
-    return joined_path
\ No newline at end of file
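The guard in `safe_path_join` above boils down to a single `pathlib` check: resolve the joined path, then confirm it is still under the base directory (note that `Path.is_relative_to` requires Python 3.9+). A minimal sketch with a made-up workspace root:

```python
# Minimal sketch of how `safe_path_join` above guards against path
# traversal: ordinary joins stay inside the workspace, while '..' escapes
# fail the check and would raise ValueError. The root path is hypothetical.
from pathlib import Path

base = Path('/tmp/auto_gpt_workspace')          # hypothetical workspace root
print(base.joinpath('notes.txt').resolve())     # fine: inside the workspace

escaped = base.joinpath('../etc/passwd').resolve()
print(escaped.is_relative_to(base))             # False -> would raise
```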
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/indicators/pivotpoint.py b/spaces/Lianjd/stock_dashboard/backtrader/indicators/pivotpoint.py
deleted file mode 100644
index bfc3befb6f287b340cf8e9b0a473f27fff0afcfb..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/indicators/pivotpoint.py
+++ /dev/null
@@ -1,266 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see <http://www.gnu.org/licenses/>.
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
-                        unicode_literals)
-
-from . import Indicator, CmpEx
-
-
-class PivotPoint(Indicator):
-    '''
-    Defines a level of significance by taking into account the average of price
-    bar components of the past period of a larger timeframe. For example, when
-    operating with days, the values are taken from the already "past" month's
-    fixed prices.
-
-    Example of using this indicator:
-
-      data = btfeeds.ADataFeed(dataname=x, timeframe=bt.TimeFrame.Days)
-      cerebro.adddata(data)
-      cerebro.resampledata(data, timeframe=bt.TimeFrame.Months)
-
-    In the ``__init__`` method of the strategy:
-
-      pivotindicator = btind.PivotPoint(self.data1)  # the resampled data
-
-    The indicator will try to automatically plot on the non-resampled data. To
-    disable this behavior, use the following during construction:
-
-      - _autoplot=False
-
-    Note:
-
-      The example shows *days* and *months*, but any combination of timeframes
-      can be used. See the literature for recommended combinations.
-
-    Formula:
-      - pivot = (h + l + c) / 3  # variants duplicate close or add open
-      - support1 = 2.0 * pivot - high
-      - support2 = pivot - (high - low)
-      - resistance1 = 2.0 * pivot - low
-      - resistance2 = pivot + (high - low)
-
-    See:
-      - http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:pivot_points
-      - https://en.wikipedia.org/wiki/Pivot_point_(technical_analysis)
-    '''
-    lines = ('p', 's1', 's2', 'r1', 'r2',)
-    plotinfo = dict(subplot=False)
-
-    params = (
-        ('open', False),  # add opening price to the pivot point
-        ('close', False),  # use close twice in the calcs
-        ('_autoplot', True),  # attempt to plot on real target data
-    )
-
-    def _plotinit(self):
-        # Try to plot to the actual timeframe master
-        if self.p._autoplot:
-            if hasattr(self.data, 'data'):
-                self.plotinfo.plotmaster = self.data.data
-
-    def __init__(self):
-        o = self.data.open
-        h = self.data.high  # current high
-        l = self.data.low  # current low
-        c = self.data.close  # current close
-
-        if self.p.close:
-            self.lines.p = p = (h + l + 2.0 * c) / 4.0
-        elif self.p.open:
-            self.lines.p = p = (h + l + c + o) / 4.0
-        else:
-            self.lines.p = p = (h + l + c) / 3.0
-
-        self.lines.s1 = 2.0 * p - h
-        self.lines.r1 = 2.0 * p - l
-
-        self.lines.s2 = p - (h - l)
-        self.lines.r2 = p + (h - l)
-
-        super(PivotPoint, self).__init__()  # enable cooperative inheritance
-
-        if self.p._autoplot:
-            self.plotinfo.plot = False  # disable own plotting
-            self()  # Coupler to follow real object
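The classic formulas in the `PivotPoint` docstring above are easy to verify by hand. A worked example on made-up prior-period high/low/close values, independent of any backtrader machinery:

```python
# Worked example of the classic pivot formulas documented above,
# using made-up monthly OHLC values.
h, l, c = 110.0, 90.0, 104.0   # prior-period high, low, close

p = (h + l + c) / 3.0          # pivot:        101.33...
s1 = 2.0 * p - h               # support 1:     92.67...
r1 = 2.0 * p - l               # resistance 1: 112.67...
s2 = p - (h - l)               # support 2:     81.33...
r2 = p + (h - l)               # resistance 2: 121.33...
print(round(p, 2), round(s1, 2), round(r1, 2))  # 101.33 92.67 112.67
```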
-class FibonacciPivotPoint(Indicator):
-    '''
-    Defines a level of significance by taking into account the average of price
-    bar components of the past period of a larger timeframe. For example, when
-    operating with days, the values are taken from the already "past" month's
-    fixed prices.
-
-    Fibonacci levels (configurable) are used to define the support/resistance
-    levels.
-
-    Example of using this indicator:
-
-      data = btfeeds.ADataFeed(dataname=x, timeframe=bt.TimeFrame.Days)
-      cerebro.adddata(data)
-      cerebro.resampledata(data, timeframe=bt.TimeFrame.Months)
-
-    In the ``__init__`` method of the strategy:
-
-      pivotindicator = btind.FibonacciPivotPoint(self.data1)  # the resampled data
-
-    The indicator will try to automatically plot on the non-resampled data. To
-    disable this behavior, use the following during construction:
-
-      - _autoplot=False
-
-    Note:
-
-      The example shows *days* and *months*, but any combination of timeframes
-      can be used. See the literature for recommended combinations.
-
-    Formula:
-      - pivot = (h + l + c) / 3  # variants duplicate close or add open
-      - support1 = p - level1 * (high - low)  # level1 0.382
-      - support2 = p - level2 * (high - low)  # level2 0.618
-      - support3 = p - level3 * (high - low)  # level3 1.000
-      - resistance1 = p + level1 * (high - low)  # level1 0.382
-      - resistance2 = p + level2 * (high - low)  # level2 0.618
-      - resistance3 = p + level3 * (high - low)  # level3 1.000
-
-    See:
-      - http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:pivot_points
-    '''
-    lines = ('p', 's1', 's2', 's3', 'r1', 'r2', 'r3')
-    plotinfo = dict(subplot=False)
-    params = (
-        ('open', False),  # add opening price to the pivot point
-        ('close', False),  # use close twice in the calcs
-        ('_autoplot', True),  # attempt to plot on real target data
-        ('level1', 0.382),
-        ('level2', 0.618),
-        ('level3', 1.0),
-    )
-
-    def _plotinit(self):
-        # Try to plot to the actual timeframe master
-        if self.p._autoplot:
-            if hasattr(self.data, 'data'):
-                self.plotinfo.plotmaster = self.data.data
-
-    def __init__(self):
-        o = self.data.open
-        h = self.data.high  # current high
-        l = self.data.low  # current low
-        c = self.data.close  # current close
-
-        if self.p.close:
-            self.lines.p = p = (h + l + 2.0 * c) / 4.0
-        elif self.p.open:
-            self.lines.p = p = (h + l + c + o) / 4.0
-        else:
-            self.lines.p = p = (h + l + c) / 3.0
-
-        self.lines.s1 = p - self.p.level1 * (h - l)
-        self.lines.s2 = p - self.p.level2 * (h - l)
-        self.lines.s3 = p - self.p.level3 * (h - l)
-
-        self.lines.r1 = p + self.p.level1 * (h - l)
-        self.lines.r2 = p + self.p.level2 * (h - l)
-        self.lines.r3 = p + self.p.level3 * (h - l)
-
-        super(FibonacciPivotPoint, self).__init__()
-
-        if self.p._autoplot:
-            self.plotinfo.plot = False  # disable own plotting
-            self()  # Coupler to follow real object
-
-
-class DemarkPivotPoint(Indicator):
-    '''
-    Defines a level of significance by taking into account the average of price
-    bar components of the past period of a larger timeframe. For example, when
-    operating with days, the values are taken from the already "past" month's
-    fixed prices.
-
-    Example of using this indicator:
-
-      data = btfeeds.ADataFeed(dataname=x, timeframe=bt.TimeFrame.Days)
-      cerebro.adddata(data)
-      cerebro.resampledata(data, timeframe=bt.TimeFrame.Months)
-
-    In the ``__init__`` method of the strategy:
-
-      pivotindicator = btind.DemarkPivotPoint(self.data1)  # the resampled data
-
-    The indicator will try to automatically plot on the non-resampled data. To
-    disable this behavior, use the following during construction:
-
-      - _autoplot=False
-
-    Note:
-
-      The example shows *days* and *months*, but any combination of timeframes
-      can be used.
-      See the literature for recommended combinations.
-
-    Formula:
-      - if close < open: x = high + (2 x low) + close
-      - if close > open: x = (2 x high) + low + close
-      - if close == open: x = high + low + (2 x close)
-
-      - p = x / 4
-
-      - support1 = x / 2 - high
-      - resistance1 = x / 2 - low
-
-    See:
-      - http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:pivot_points
-    '''
-    lines = ('p', 's1', 'r1',)
-    plotinfo = dict(subplot=False)
-    params = (
-        ('open', False),  # add opening price to the pivot point
-        ('close', False),  # use close twice in the calcs
-        ('_autoplot', True),  # attempt to plot on real target data
-        ('level1', 0.382),
-        ('level2', 0.618),
-        ('level3', 1.0),
-    )
-
-    def _plotinit(self):
-        # Try to plot to the actual timeframe master
-        if self.p._autoplot:
-            if hasattr(self.data, 'data'):
-                self.plotinfo.plotmaster = self.data.data
-
-    def __init__(self):
-        x1 = self.data.high + 2.0 * self.data.low + self.data.close
-        x2 = 2.0 * self.data.high + self.data.low + self.data.close
-        x3 = self.data.high + self.data.low + 2.0 * self.data.close
-
-        x = CmpEx(self.data.close, self.data.open, x1, x2, x3)
-        self.lines.p = x / 4.0
-
-        self.lines.s1 = x / 2.0 - self.data.high
-        self.lines.r1 = x / 2.0 - self.data.low
-
-        super(DemarkPivotPoint, self).__init__()
-
-        if self.p._autoplot:
-            self.plotinfo.plot = False  # disable own plotting
-            self()  # Coupler to follow real object
diff --git a/spaces/MINAMONI/img-to-music/style.css b/spaces/MINAMONI/img-to-music/style.css
deleted file mode 100644
index 8f7397fe7f0971636015170df075cd2d070344ec..0000000000000000000000000000000000000000
--- a/spaces/MINAMONI/img-to-music/style.css
+++ /dev/null
@@ -1,51 +0,0 @@
-#col-container {max-width: 510px; margin-left: auto; margin-right: auto;}
-a {text-decoration-line: underline; font-weight: 600;}
-div#music-output .h-full {
-    min-height: 5rem;
-}
-.footer {
-    margin-bottom: 45px;
-    margin-top: 10px;
-    text-align: center;
-    border-bottom: 1px solid #e5e5e5;
-}
-.footer>p {
-    font-size: .8rem;
-    display: inline-block;
-    padding: 0 10px;
-    transform: translateY(10px);
-    background: white;
-}
-.dark .footer {
-    border-color: #303030;
-}
-.dark .footer>p {
-    background: #0b0f19;
-}
-.animate-spin {
-    animation: spin 1s linear infinite;
-}
-@keyframes spin {
-    from {
-        transform: rotate(0deg);
-    }
-    to {
-        transform: rotate(360deg);
-    }
-}
-#share-btn-container {
-    display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem;
-}
-#share-btn {
-    all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0;
-}
-#share-btn * {
-    all: unset;
-}
-#share-btn-container div:nth-child(-n+2){
-    width: auto !important;
-    min-height: 0px !important;
-}
-#share-btn-container .wrap {
-    display: none !important;
-}
\ No newline at end of file
diff --git a/spaces/MMMMQZ/MQZGPT/readme/README_ja.md b/spaces/MMMMQZ/MQZGPT/readme/README_ja.md
deleted file mode 100644
index fc56eec0b81c22ff0a49e3960aa52ffd7d6dc5cb..0000000000000000000000000000000000000000
--- a/spaces/MMMMQZ/MQZGPT/readme/README_ja.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
      - - 简体中文 | English | 日本語 -
      - -

      川虎 Chat 🐯 Chuanhu Chat

      -
      - - Logo - - -

      -

A lightweight and user-friendly web UI for LLMs such as ChatGPT, ChatGLM and LLaMA

      -

      - - Tests Passing - - - GitHub Contributors - - - GitHub pull requests - -

- Streaming output / unlimited conversations / history saving / preset prompts / chat with your files
- Web search / LaTeX rendering / table rendering / code highlighting
- Auto dark mode / adaptive web interface / WeChat-like theme
- Multi-parameter tuning / multiple API-key support / multi-user support
- GPT-4 support / local deployment of LLMs.

      - 動画チュートリアル - · - 2.0 イントロダクション - · - 3.0 イントロダクション & チュートリアル - || - オンライントライアル - · - ワンクリックデプロイ -

      -

      - Animation Demo -

      -

      -
-
-## Usage Tips
-
-- Use the system prompt to control ChatGPT more effectively.
-- To use a prompt template, select a prompt template collection and then pick a specific prompt from the dropdown menu. If the answer is unsatisfactory, retry with the `🔄Regenerate` button.
-- To insert a line break in the input box, press Shift + Enter.
-- To quickly switch through the input history, press the arrow keys in the input box.
-- To deploy the program on a server, change the last line of the program to `demo.launch(server_name="0.0.0.0", server_port=)`.
-- To get a shareable link, change the last line of the program to `demo.launch(share=True)`. Note that the program must be running for the public link to be accessible.
-- When using it on Hugging Face Spaces: for faster and safer use, it is recommended to use **Duplicate Space** and run the program in your own space.
-
-## Installation
-
-```shell
-git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git
-cd ChuanhuChatGPT
-pip install -r requirements.txt
-```
-
-Next, copy `config_example.json`, rename it to `config.json`, and enter your API key and other settings in that file.
-
-```shell
-python ChuanhuChatbot.py
-```
-
-A browser window will open and you will be able to chat with ChatGPT.
-
-> **Note**
->
-> See the [wiki page](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程) for detailed instructions.
-
-## Troubleshooting
-
-If you run into a problem, it is best to first manually pull the latest changes of this project. The steps are:
-
-1. Download the latest code archive by clicking `Download ZIP` on the web page, or
-   ```shell
-   git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f
-   ```
-2. Try reinstalling the dependencies, since new ones may have been introduced.
-   ```
-   pip install -r requirements.txt
-   ```
-3. Update Gradio
-   ```
-   pip install gradio --upgrade --force-reinstall
-   ```
-
-In general, most problems can be solved by following these steps.
-
-If the problem still persists, please consult this page: [Frequently Asked Questions (FAQ)](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题)
-
-This page lists almost every conceivable problem together with its solution. Please read it carefully.
-
-## More Information
-
-For more details, see the [wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki):
-
-- [How to contribute a translation](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/Localization)
-- [How to make a contribution](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南)
-- [How to cite the project](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可#如何引用该项目)
-- [Project changelog](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/更新日志)
-- [Project license](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可)
-
-## Starchart
-
-[![Star History Chart](https://api.star-history.com/svg?repos=GaiZhenbiao/ChuanhuChatGPT&type=Date)](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date)
-
-## Contributors
-
-
-
-## Sponsor
-
-🐯 If this project has been helpful to you, feel free to buy me a cola or a coffee~
-
-Buy Me A Coffee
-
-image
diff --git a/spaces/MarcCote/TextWorldExpress/style.css b/spaces/MarcCote/TextWorldExpress/style.css
deleted file mode 100644
index a3a29cf116fac70813f6e2d01146b55764044dec..0000000000000000000000000000000000000000
--- a/spaces/MarcCote/TextWorldExpress/style.css
+++ /dev/null
@@ -1,4 +0,0 @@
-section.main[tabindex="0"] {
-    overflow-y: scroll;
-    /* height: 700px */
-}
\ No newline at end of file
diff --git a/spaces/Marshalls/testmtd/feature_extraction/madmom/features/tempo.py b/spaces/Marshalls/testmtd/feature_extraction/madmom/features/tempo.py
deleted file mode 100644
index 84ecb312ae49b9e02a090ab06d6613ebaabfeb13..0000000000000000000000000000000000000000
--- a/spaces/Marshalls/testmtd/feature_extraction/madmom/features/tempo.py
+++ /dev/null
@@ -1,882 +0,0 @@
-# encoding: utf-8
-# pylint: disable=no-member
-# pylint: disable=invalid-name
-# pylint: disable=too-many-arguments
-"""
-This module contains tempo related functionality.
- -""" - -from __future__ import absolute_import, division, print_function - -import sys - -import numpy as np - -from ..audio.signal import smooth as smooth_signal -from ..processors import BufferProcessor, OnlineProcessor - -METHOD = 'comb' -ALPHA = 0.79 -MIN_BPM = 40. -MAX_BPM = 250. -ACT_SMOOTH = 0.14 -HIST_SMOOTH = 9 -HIST_BUFFER = 10. -NO_TEMPO = np.nan - - -# helper functions -def smooth_histogram(histogram, smooth): - """ - Smooth the given histogram. - - Parameters - ---------- - histogram : tuple - Histogram (tuple of 2 numpy arrays, the first giving the strengths of - the bins and the second corresponding delay values). - smooth : int or numpy array - Smoothing kernel (size). - - Returns - ------- - histogram_bins : numpy array - Bins of the smoothed histogram. - histogram_delays : numpy array - Corresponding delays. - - Notes - ----- - If `smooth` is an integer, a Hamming window of that length will be used as - a smoothing kernel. - - """ - # smooth only the histogram bins, not the corresponding delays - return smooth_signal(histogram[0], smooth), histogram[1] - - -# interval detection -def interval_histogram_acf(activations, min_tau=1, max_tau=None): - """ - Compute the interval histogram of the given (beat) activation function via - auto-correlation as in [1]_. - - Parameters - ---------- - activations : numpy array - Beat activation function. - min_tau : int, optional - Minimal delay for the auto-correlation function [frames]. - max_tau : int, optional - Maximal delay for the auto-correlation function [frames]. - - Returns - ------- - histogram_bins : numpy array - Bins of the tempo histogram. - histogram_delays : numpy array - Corresponding delays [frames]. - - References - ---------- - .. [1] Sebastian Böck and Markus Schedl, - "Enhanced Beat Tracking with Context-Aware Neural Networks", - Proceedings of the 14th International Conference on Digital Audio - Effects (DAFx), 2011. - - """ - if activations.ndim != 1: - raise NotImplementedError('too many dimensions for autocorrelation ' - 'interval histogram calculation.') - # set the maximum delay - if max_tau is None: - max_tau = len(activations) - min_tau - # test all possible delays - taus = list(range(min_tau, max_tau + 1)) - bins = [] - # Note: this is faster than: - # corr = np.correlate(activations, activations, mode='full') - # bins = corr[len(activations) + min_tau - 1: len(activations) + max_tau] - for tau in taus: - bins.append(np.sum(np.abs(activations[tau:] * activations[0:-tau]))) - # return histogram - return np.array(bins), np.array(taus) - - -def interval_histogram_comb(activations, alpha, min_tau=1, max_tau=None): - """ - Compute the interval histogram of the given (beat) activation function via - a bank of resonating comb filters as in [1]_. - - Parameters - ---------- - activations : numpy array - Beat activation function. - alpha : float or numpy array - Scaling factor for the comb filter; if only a single value is given, - the same scaling factor for all delays is assumed. - min_tau : int, optional - Minimal delay for the comb filter [frames]. - max_tau : int, optional - Maximal delta for comb filter [frames]. - - Returns - ------- - histogram_bins : numpy array - Bins of the tempo histogram. - histogram_delays : numpy array - Corresponding delays [frames]. - - References - ---------- - .. 
[1] Sebastian Böck, Florian Krebs and Gerhard Widmer, - "Accurate Tempo Estimation based on Recurrent Neural Networks and - Resonating Comb Filters", - Proceedings of the 16th International Society for Music Information - Retrieval Conference (ISMIR), 2015. - - """ - # import comb filter - from madmom.audio.comb_filters import CombFilterbankProcessor - # set the maximum delay - if max_tau is None: - max_tau = len(activations) - min_tau - # get the range of taus - taus = np.arange(min_tau, max_tau + 1) - # create a comb filter bank instance - cfb = CombFilterbankProcessor('backward', taus, alpha) - if activations.ndim in (1, 2): - # apply a bank of comb filters - act = cfb.process(activations) - # determine the tau with the highest value for each time step - act_max = act == np.max(act, axis=-1)[..., np.newaxis] - # sum up these maxima weighted by the activation value to yield the - # histogram bin values - histogram_bins = np.sum(act * act_max, axis=0) - else: - raise NotImplementedError('too many dimensions for comb filter ' - 'interval histogram calculation.') - # return the histogram - return histogram_bins, taus - - -# helper functions -def dominant_interval(histogram, smooth=None): - """ - Extract the dominant interval of the given histogram. - - Parameters - ---------- - histogram : tuple - Histogram (tuple of 2 numpy arrays, the first giving the strengths of - the bins and the second corresponding delay values). - smooth : int or numpy array, optional - Smooth the histogram with the given kernel (size). - - Returns - ------- - interval : int - Dominant interval. - - Notes - ----- - If `smooth` is an integer, a Hamming window of that length will be used as - a smoothing kernel. - - """ - # smooth the histogram bins - if smooth: - histogram = smooth_histogram(histogram, smooth) - # return the dominant interval - return histogram[1][np.argmax(histogram[0])] - - -# extract the tempo from a histogram -def detect_tempo(histogram, fps): - """ - Extract the tempo from the given histogram. - - Parameters - ---------- - histogram : tuple - Histogram (tuple of 2 numpy arrays, the first giving the strengths of - the bins and the second corresponding delay values). - fps : float - Frames per second. - - Returns - ------- - tempi : numpy array - Numpy array with the dominant tempi [bpm] (first column) and their - relative strengths (second column). 
- - """ - from scipy.signal import argrelmax - # histogram of IBIs - bins = histogram[0] - # convert the histogram bin delays to tempi in beats per minute - tempi = 60.0 * fps / histogram[1] - # to get the two dominant tempi, just keep the peaks - # use 'wrap' mode to also get peaks at the borders - peaks = argrelmax(bins, mode='wrap')[0] - # we need more than 1 peak to report multiple tempi - if len(peaks) == 0: - # a flat histogram has no peaks, use the center bin - if len(bins): - ret = np.asarray([tempi[len(bins) // 2], 1.]) - else: - # otherwise: no peaks, no tempo - ret = np.asarray([NO_TEMPO, 0.]) - elif len(peaks) == 1: - # report only the strongest tempo - ret = np.asarray([tempi[peaks[0]], 1.]) - else: - # sort the peaks in descending order of bin heights - sorted_peaks = peaks[np.argsort(bins[peaks])[::-1]] - # normalize their strengths - strengths = bins[sorted_peaks] - strengths /= np.sum(strengths) - # return the tempi and their normalized strengths - ret = np.asarray(list(zip(tempi[sorted_peaks], strengths))) - # return the tempi - return np.atleast_2d(ret) - - -# tempo histogram processor classes -class TempoHistogramProcessor(OnlineProcessor): - """ - Tempo Histogram Processor class. - - Parameters - ---------- - min_bpm : float - Minimum tempo to detect [bpm]. - max_bpm : float - Maximum tempo to detect [bpm]. - hist_buffer : float - Aggregate the tempo histogram over `hist_buffer` seconds. - fps : float, optional - Frames per second. - - Notes - ----- - This abstract class provides the basic tempo histogram functionality. - Please use one of the following implementations: - - - :class:`CombFilterTempoHistogramProcessor`, - - :class:`ACFTempoHistogramProcessor` or - - :class:`DBNTempoHistogramProcessor`. - - """ - - def __init__(self, min_bpm, max_bpm, hist_buffer=HIST_BUFFER, fps=None, - online=False, **kwargs): - # pylint: disable=unused-argument - super(TempoHistogramProcessor, self).__init__(online=online) - self.min_bpm = min_bpm - self.max_bpm = max_bpm - self.hist_buffer = hist_buffer - self.fps = fps - if self.online: - self._hist_buffer = BufferProcessor((int(hist_buffer * self.fps), - len(self.intervals))) - - @property - def min_interval(self): - """Minimum beat interval [frames].""" - return int(np.floor(60. * self.fps / self.max_bpm)) - - @property - def max_interval(self): - """Maximum beat interval [frames].""" - return int(np.ceil(60. * self.fps / self.min_bpm)) - - @property - def intervals(self): - """Beat intervals [frames].""" - return np.arange(self.min_interval, self.max_interval + 1) - - def reset(self): - """Reset the tempo histogram aggregation buffer.""" - self._hist_buffer.reset() - - -class CombFilterTempoHistogramProcessor(TempoHistogramProcessor): - """ - Create a tempo histogram with a bank of resonating comb filters. - - Parameters - ---------- - min_bpm : float, optional - Minimum tempo to detect [bpm]. - max_bpm : float, optional - Maximum tempo to detect [bpm]. - alpha : float, optional - Scaling factor for the comb filter. - hist_buffer : float - Aggregate the tempo histogram over `hist_buffer` seconds. - fps : float, optional - Frames per second. - online : bool, optional - Operate in online (i.e. causal) mode. 
- - """ - - def __init__(self, min_bpm=MIN_BPM, max_bpm=MAX_BPM, alpha=ALPHA, - hist_buffer=HIST_BUFFER, fps=None, online=False, **kwargs): - # pylint: disable=unused-argument - super(CombFilterTempoHistogramProcessor, self).__init__( - min_bpm=min_bpm, max_bpm=max_bpm, hist_buffer=hist_buffer, fps=fps, - online=online, **kwargs) - self.alpha = alpha - if self.online: - self._comb_buffer = BufferProcessor((self.max_interval + 1, - len(self.intervals))) - - def reset(self): - """Reset to initial state.""" - super(CombFilterTempoHistogramProcessor, self).reset() - self._comb_buffer.reset() - - def process_offline(self, activations, **kwargs): - """ - Compute the histogram of the beat intervals with a bank of resonating - comb filters. - - Parameters - ---------- - activations : numpy array - Beat activation function. - - Returns - ------- - histogram_bins : numpy array - Bins of the beat interval histogram. - histogram_delays : numpy array - Corresponding delays [frames]. - - """ - return interval_histogram_comb(activations, self.alpha, - self.min_interval, self.max_interval) - - def process_online(self, activations, reset=True, **kwargs): - """ - Compute the histogram of the beat intervals with a bank of resonating - comb filters in online mode. - - Parameters - ---------- - activations : numpy float - Beat activation function. - reset : bool, optional - Reset to initial state before processing. - - Returns - ------- - histogram_bins : numpy array - Bins of the tempo histogram. - histogram_delays : numpy array - Corresponding delays [frames]. - - """ - # reset to initial state - if reset: - self.reset() - # indices at which to retrieve y[n - τ] - idx = [-self.intervals, np.arange(len(self.intervals))] - # iterate over all activations - for act in activations: - # online feed backward comb filter (y[n] = x[n] + α * y[n - τ]) - y_n = act + self.alpha * self._comb_buffer[idx] - # shift output buffer with new value - self._comb_buffer(y_n) - # determine the tau with the highest value - act_max = y_n == np.max(y_n, axis=-1)[..., np.newaxis] - # compute the max bins - bins = y_n * act_max - # use a buffer to only keep a certain number of bins - # shift buffer and put new bins at end of buffer - bins = self._hist_buffer(bins) - # build a histogram together with the intervals and return it - return np.sum(bins, axis=0), self.intervals - - -class ACFTempoHistogramProcessor(TempoHistogramProcessor): - """ - Create a tempo histogram with autocorrelation. - - Parameters - ---------- - min_bpm : float, optional - Minimum tempo to detect [bpm]. - max_bpm : float, optional - Maximum tempo to detect [bpm]. - hist_buffer : float - Aggregate the tempo histogram over `hist_buffer` seconds. - fps : float, optional - Frames per second. - online : bool, optional - Operate in online (i.e. causal) mode. - - """ - - def __init__(self, min_bpm=MIN_BPM, max_bpm=MAX_BPM, - hist_buffer=HIST_BUFFER, fps=None, online=False, **kwargs): - # pylint: disable=unused-argument - super(ACFTempoHistogramProcessor, self).__init__( - min_bpm=min_bpm, max_bpm=max_bpm, hist_buffer=hist_buffer, fps=fps, - online=online, **kwargs) - if self.online: - self._act_buffer = BufferProcessor((self.max_interval + 1, 1)) - - def reset(self): - """Reset to initial state.""" - super(ACFTempoHistogramProcessor, self).reset() - self._act_buffer.reset() - - def process_offline(self, activations, **kwargs): - """ - Compute the histogram of the beat intervals with the autocorrelation - function. 
- - Parameters - ---------- - activations : numpy array - Beat activation function. - - Returns - ------- - histogram_bins : numpy array - Bins of the beat interval histogram. - histogram_delays : numpy array - Corresponding delays [frames]. - - """ - # build the tempo (i.e. inter beat interval) histogram and return it - return interval_histogram_acf(activations, self.min_interval, - self.max_interval) - - def process_online(self, activations, reset=True, **kwargs): - """ - Compute the histogram of the beat intervals with the autocorrelation - function in online mode. - - Parameters - ---------- - activations : numpy float - Beat activation function. - reset : bool, optional - Reset to initial state before processing. - - Returns - ------- - histogram_bins : numpy array - Bins of the tempo histogram. - histogram_delays : numpy array - Corresponding delays [frames]. - - """ - # reset to initial state - if reset: - self.reset() - # iterate over all activations - # TODO: speed this up! - for act in activations: - # online ACF (y[n] = x[n] * x[n - τ]) - bins = act * self._act_buffer[-self.intervals].T - # shift activation buffer with new value - self._act_buffer(act) - # use a buffer to only keep a certain number of bins - # shift buffer and put new bins at end of buffer - bins = self._hist_buffer(bins) - # build a histogram together with the intervals and return it - return np.sum(bins, axis=0), self.intervals - - -class DBNTempoHistogramProcessor(TempoHistogramProcessor): - """ - Create a tempo histogram with a dynamic Bayesian network (DBN). - - Parameters - ---------- - min_bpm : float, optional - Minimum tempo to detect [bpm]. - max_bpm : float, optional - Maximum tempo to detect [bpm]. - hist_buffer : float - Aggregate the tempo histogram over `hist_buffer` seconds. - fps : float, optional - Frames per second. - online : bool, optional - Operate in online (i.e. causal) mode. - - """ - - def __init__(self, min_bpm=MIN_BPM, max_bpm=MAX_BPM, - hist_buffer=HIST_BUFFER, fps=None, online=False, **kwargs): - # pylint: disable=unused-argument - super(DBNTempoHistogramProcessor, self).__init__( - min_bpm=min_bpm, max_bpm=max_bpm, hist_buffer=hist_buffer, fps=fps, - online=online, **kwargs) - from .beats import DBNBeatTrackingProcessor - self.dbn = DBNBeatTrackingProcessor( - min_bpm=self.min_bpm, max_bpm=self.max_bpm, fps=self.fps, - online=online, **kwargs) - - def reset(self): - """Reset DBN to initial state.""" - super(DBNTempoHistogramProcessor, self).reset() - self.dbn.hmm.reset() - - def process_offline(self, activations, **kwargs): - """ - Compute the histogram of the beat intervals with a DBN. - - Parameters - ---------- - activations : numpy array - Beat activation function. - - Returns - ------- - histogram_bins : numpy array - Bins of the beat interval histogram. - histogram_delays : numpy array - Corresponding delays [frames]. - - """ - # get the best state path by calling the viterbi algorithm - path, _ = self.dbn.hmm.viterbi(activations.astype(np.float32)) - intervals = self.dbn.st.state_intervals[path] - # get the counts of the bins - bins = np.bincount(intervals, - minlength=self.dbn.st.intervals.max() + 1) - # truncate everything below the minimum interval of the state space - bins = bins[self.dbn.st.intervals.min():] - # build a histogram together with the intervals and return it - return bins, self.dbn.st.intervals - - def process_online(self, activations, reset=True, **kwargs): - """ - Compute the histogram of the beat intervals with a DBN using the - forward algorithm. 
- - Parameters - ---------- - activations : numpy float - Beat activation function. - reset : bool, optional - Reset DBN to initial state before processing. - - Returns - ------- - histogram_bins : numpy array - Bins of the tempo histogram. - histogram_delays : numpy array - Corresponding delays [frames]. - - """ - # reset to initial state - if reset: - self.reset() - # use forward path to get best state - fwd = self.dbn.hmm.forward(activations, reset=reset) - # choose the best state for each step - states = np.argmax(fwd, axis=1) - intervals = self.dbn.st.state_intervals[states] - # convert intervals to bins - bins = np.zeros((len(activations), len(self.intervals))) - bins[np.arange(len(activations)), intervals - self.min_interval] = 1 - # shift buffer and put new bins at end of buffer - bins = self._hist_buffer(bins) - # build a histogram together with the intervals and return it - return np.sum(bins, axis=0), self.intervals - - -class TempoEstimationProcessor(OnlineProcessor): - """ - Tempo Estimation Processor class. - - Parameters - ---------- - method : {'comb', 'acf', 'dbn'} - Method used for tempo estimation. - min_bpm : float, optional - Minimum tempo to detect [bpm]. - max_bpm : float, optional - Maximum tempo to detect [bpm]. - act_smooth : float, optional (default: 0.14) - Smooth the activation function over `act_smooth` seconds. - hist_smooth : int, optional (default: 7) - Smooth the tempo histogram over `hist_smooth` bins. - alpha : float, optional - Scaling factor for the comb filter. - fps : float, optional - Frames per second. - histogram_processor : :class:`TempoHistogramProcessor`, optional - Processor used to create a tempo histogram. If 'None', a default - combfilter histogram processor will be created and used. - kwargs : dict, optional - Keyword arguments passed to :class:`CombFilterTempoHistogramProcessor` - if no `histogram_processor` was given. - - Examples - -------- - Create a TempoEstimationProcessor. The returned array represents the - estimated tempi (given in beats per minute) and their relative strength. - - >>> proc = TempoEstimationProcessor(fps=100) - >>> proc # doctest: +ELLIPSIS - - - Call this TempoEstimationProcessor with the beat activation function - obtained by RNNBeatProcessor to estimate the tempi. - - >>> from madmom.features.beats import RNNBeatProcessor - >>> act = RNNBeatProcessor()('tests/data/audio/sample.wav') - >>> proc(act) # doctest: +NORMALIZE_WHITESPACE - array([[176.47059, 0.47469], - [117.64706, 0.17667], - [240. 
, 0.15371], - [ 68.96552, 0.09864], - [ 82.19178, 0.09629]]) - - """ - - def __init__(self, method=METHOD, min_bpm=MIN_BPM, max_bpm=MAX_BPM, - act_smooth=ACT_SMOOTH, hist_smooth=HIST_SMOOTH, fps=None, - online=False, histogram_processor=None, **kwargs): - # pylint: disable=unused-argument - super(TempoEstimationProcessor, self).__init__(online=online) - self.method = method - self.act_smooth = act_smooth - self.hist_smooth = hist_smooth - self.fps = fps - if self.online: - self.visualize = kwargs.get('verbose', False) - if histogram_processor is None: - if method == 'acf': - histogram_processor = ACFTempoHistogramProcessor - elif method == 'comb': - histogram_processor = CombFilterTempoHistogramProcessor - elif method == 'dbn': - histogram_processor = DBNTempoHistogramProcessor - # do not smooth the activations for the DBN - self.act_smooth = None - else: - raise ValueError('tempo histogram method unknown.') - # instantiate histogram processor - histogram_processor = histogram_processor( - min_bpm=min_bpm, max_bpm=max_bpm, fps=fps, online=online, - **kwargs) - self.histogram_processor = histogram_processor - - @property - def min_bpm(self): - """Minimum tempo [bpm].""" - return self.histogram_processor.min_bpm - - @property - def max_bpm(self): - """Maximum tempo [bpm].""" - return self.histogram_processor.max_bpm - - @property - def intervals(self): - """Beat intervals [frames].""" - return self.histogram_processor.intervals - - @property - def min_interval(self): - """Minimum beat interval [frames].""" - return self.histogram_processor.min_interval - - @property - def max_interval(self): - """Maximum beat interval [frames].""" - return self.histogram_processor.max_interval - - def reset(self): - """Reset to initial state.""" - self.histogram_processor.reset() - - def process_offline(self, activations, **kwargs): - """ - Detect the tempi from the (beat) activations. - - Parameters - ---------- - activations : numpy array - Beat activation function. - - Returns - ------- - tempi : numpy array - Array with the dominant tempi [bpm] (first column) and their - relative strengths (second column). - - """ - # smooth the activations if needed - if self.act_smooth is not None: - act_smooth = int(round(self.fps * self.act_smooth)) - activations = smooth_signal(activations, act_smooth) - # generate a histogram of beat intervals - histogram = self.interval_histogram(activations.astype(np.float)) - # smooth the histogram - histogram = smooth_histogram(histogram, self.hist_smooth) - # detect the tempi and return them - return detect_tempo(histogram, self.fps) - - def process_online(self, activations, reset=True, **kwargs): - """ - Detect the tempi from the (beat) activations in online mode. - - Parameters - ---------- - activations : numpy array - Beat activation function processed frame by frame. - reset : bool, optional - Reset the TempoEstimationProcessor to its initial state before - processing. - - Returns - ------- - tempi : numpy array - Array with the dominant tempi [bpm] (first column) and their - relative strengths (second column). 
- - """ - # build the tempo histogram depending on the chosen method - histogram = self.interval_histogram(activations, reset=reset) - # smooth the histogram - histogram = smooth_histogram(histogram, self.hist_smooth) - # detect the tempo and append it to the found tempi - tempo = detect_tempo(histogram, self.fps) - # visualize tempo - if self.visualize: - display = '' - # display the 3 most likely tempi and their strengths - for i, display_tempo in enumerate(tempo[:3], start=1): - # display tempo - display += '| ' + str(round(display_tempo[0], 1)) + ' ' - # display strength - display += min(int(display_tempo[1] * 50), 18) * '*' - # fill up the rest with spaces - display = display.ljust(i * 26) - # print the tempi - sys.stderr.write('\r%s' % ''.join(display) + '|') - sys.stderr.flush() - # return tempo - return tempo - - def interval_histogram(self, activations, **kwargs): - """ - Compute the histogram of the beat intervals. - - Parameters - ---------- - activations : numpy array - Beat activation function. - - Returns - ------- - histogram_bins : numpy array - Bins of the beat interval histogram. - histogram_delays : numpy array - Corresponding delays [frames]. - - """ - return self.histogram_processor(activations, **kwargs) - - def dominant_interval(self, histogram): - """ - Extract the dominant interval of the given histogram. - - Parameters - ---------- - histogram : tuple - Histogram (tuple of 2 numpy arrays, the first giving the strengths - of the bins and the second corresponding delay values). - - Returns - ------- - interval : int - Dominant interval. - - """ - # return the dominant interval - return dominant_interval(histogram, self.hist_smooth) - - @staticmethod - def add_arguments(parser, method=None, min_bpm=None, max_bpm=None, - act_smooth=None, hist_smooth=None, hist_buffer=None, - alpha=None): - """ - Add tempo estimation related arguments to an existing parser. - - Parameters - ---------- - parser : argparse parser instance - Existing argparse parser. - method : {'comb', 'acf', 'dbn'} - Method used for tempo estimation. - min_bpm : float, optional - Minimum tempo to detect [bpm]. - max_bpm : float, optional - Maximum tempo to detect [bpm]. - act_smooth : float, optional - Smooth the activation function over `act_smooth` seconds. - hist_smooth : int, optional - Smooth the tempo histogram over `hist_smooth` bins. - hist_buffer : float, optional - Aggregate the tempo histogram over `hist_buffer` seconds. - alpha : float, optional - Scaling factor for the comb filter. - - Returns - ------- - parser_group : argparse argument group - Tempo argument parser group. - - Notes - ----- - Parameters are included in the group only if they are not 'None'. 
- - """ - # add tempo estimation related options to the existing parser - g = parser.add_argument_group('tempo estimation arguments') - if method is not None: - g.add_argument('--method', action='store', type=str, - default=method, choices=['acf', 'comb', 'dbn'], - help="which method to use [default=%(default)s]") - if min_bpm is not None: - g.add_argument('--min_bpm', action='store', type=float, - default=min_bpm, - help='minimum tempo [bpm, default=%(default).2f]') - if max_bpm is not None: - g.add_argument('--max_bpm', action='store', type=float, - default=max_bpm, - help='maximum tempo [bpm, default=%(default).2f]') - if act_smooth is not None: - g.add_argument('--act_smooth', action='store', type=float, - default=act_smooth, - help='smooth the activations over N seconds ' - '[default=%(default).2f]') - if hist_smooth is not None: - g.add_argument('--hist_smooth', action='store', type=int, - default=hist_smooth, - help='smooth the tempo histogram over N bins ' - '[default=%(default)d]') - if hist_buffer is not None: - g.add_argument('--hist_buffer', action='store', type=float, - default=hist_buffer, - help='aggregate the tempo histogram over N seconds ' - '[default=%(default).2f]') - if alpha is not None: - g.add_argument('--alpha', action='store', type=float, - default=alpha, - help='alpha for comb filter tempo estimation ' - '[default=%(default).2f]') - # return the argument group so it can be modified if needed - return g diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/custom_dataset_dataloader.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/custom_dataset_dataloader.py deleted file mode 100644 index 8f8d6817704026796d2c2f457fe2624800693267..0000000000000000000000000000000000000000 --- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/custom_dataset_dataloader.py +++ /dev/null @@ -1,331 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
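Looking back at the tempo module that ends above: the heart of `interval_histogram_comb` is the resonating comb filter y[n] = x[n] + alpha * y[n - tau], which accumulates energy exactly when tau matches the spacing of the beat activations. A tiny numeric sketch on a made-up activation train (alpha matches the module default of 0.79):

```python
# Numeric sketch of the resonating comb filter behind
# `interval_histogram_comb` above: y[n] = x[n] + alpha * y[n - tau].
# An activation spiking every 4 frames resonates strongly at tau = 4.
import numpy as np

x = np.tile([1., 0., 0., 0.], 5)  # a beat every 4 frames (20 frames total)
alpha = 0.79

def comb(x, tau, alpha):
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (alpha * y[n - tau] if n >= tau else 0.)
    return y

for tau in (3, 4, 5):
    print(tau, comb(x, tau, alpha).max().round(2))
# tau = 4 peaks near 3.3, while the mismatched delays stay below 1.4
```

The full implementation runs a whole bank of such filters at once and histograms, frame by frame, which tau responds most strongly.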
-# Part of the code is from https://github.com/xingyizhou/UniDet/blob/master/projects/UniDet/unidet/data/multi_dataset_dataloader.py (Apache-2.0 License) -import copy -import logging -import numpy as np -import operator -import torch -import torch.utils.data -import json -from detectron2.utils.comm import get_world_size -from detectron2.utils.logger import _log_api_usage, log_first_n - -from detectron2.config import configurable -from detectron2.data import samplers -from torch.utils.data.sampler import BatchSampler, Sampler -from detectron2.data.common import DatasetFromList, MapDataset -from detectron2.data.dataset_mapper import DatasetMapper -from detectron2.data.build import get_detection_dataset_dicts, build_batch_data_loader -from detectron2.data.samplers import TrainingSampler, RepeatFactorTrainingSampler -from detectron2.data.build import worker_init_reset_seed, print_instances_class_histogram -from detectron2.data.build import filter_images_with_only_crowd_annotations -from detectron2.data.build import filter_images_with_few_keypoints -from detectron2.data.build import check_metadata_consistency -from detectron2.data.catalog import MetadataCatalog, DatasetCatalog -from detectron2.utils import comm -import itertools -import math -from collections import defaultdict -from typing import Optional - - -def _custom_train_loader_from_config(cfg, mapper=None, *, dataset=None, sampler=None): - sampler_name = cfg.DATALOADER.SAMPLER_TRAIN - if 'MultiDataset' in sampler_name: - dataset_dicts = get_detection_dataset_dicts_with_source( - cfg.DATASETS.TRAIN, - filter_empty=cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS, - min_keypoints=cfg.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE - if cfg.MODEL.KEYPOINT_ON else 0, - proposal_files=cfg.DATASETS.PROPOSAL_FILES_TRAIN if cfg.MODEL.LOAD_PROPOSALS else None, - ) - else: - dataset_dicts = get_detection_dataset_dicts( - cfg.DATASETS.TRAIN, - filter_empty=cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS, - min_keypoints=cfg.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE - if cfg.MODEL.KEYPOINT_ON else 0, - proposal_files=cfg.DATASETS.PROPOSAL_FILES_TRAIN if cfg.MODEL.LOAD_PROPOSALS else None, - ) - - if mapper is None: - mapper = DatasetMapper(cfg, True) - - if sampler is not None: - pass - elif sampler_name == "TrainingSampler": - sampler = TrainingSampler(len(dataset)) - elif sampler_name == "MultiDatasetSampler": - sampler = MultiDatasetSampler( - dataset_dicts, - dataset_ratio = cfg.DATALOADER.DATASET_RATIO, - use_rfs = cfg.DATALOADER.USE_RFS, - dataset_ann = cfg.DATALOADER.DATASET_ANN, - repeat_threshold = cfg.DATALOADER.REPEAT_THRESHOLD, - ) - elif sampler_name == "RepeatFactorTrainingSampler": - repeat_factors = RepeatFactorTrainingSampler.repeat_factors_from_category_frequency( - dataset_dicts, cfg.DATALOADER.REPEAT_THRESHOLD - ) - sampler = RepeatFactorTrainingSampler(repeat_factors) - else: - raise ValueError("Unknown training sampler: {}".format(sampler_name)) - - return { - "dataset": dataset_dicts, - "sampler": sampler, - "mapper": mapper, - "total_batch_size": cfg.SOLVER.IMS_PER_BATCH, - "aspect_ratio_grouping": cfg.DATALOADER.ASPECT_RATIO_GROUPING, - "num_workers": cfg.DATALOADER.NUM_WORKERS, - 'multi_dataset_grouping': cfg.DATALOADER.MULTI_DATASET_GROUPING, - 'use_diff_bs_size': cfg.DATALOADER.USE_DIFF_BS_SIZE, - 'dataset_bs': cfg.DATALOADER.DATASET_BS, - 'num_datasets': len(cfg.DATASETS.TRAIN) - } - - -@configurable(from_config=_custom_train_loader_from_config) -def build_custom_train_loader( - dataset, *, mapper, sampler, - total_batch_size=16, 
- aspect_ratio_grouping=True, - num_workers=0, - num_datasets=1, - multi_dataset_grouping=False, - use_diff_bs_size=False, - dataset_bs=[] - ): - """ - Modified from detectron2.data.build.build_custom_train_loader, but supports - different samplers - """ - if isinstance(dataset, list): - dataset = DatasetFromList(dataset, copy=False) - if mapper is not None: - dataset = MapDataset(dataset, mapper) - if sampler is None: - sampler = TrainingSampler(len(dataset)) - assert isinstance(sampler, torch.utils.data.sampler.Sampler) - if multi_dataset_grouping: - return build_multi_dataset_batch_data_loader( - use_diff_bs_size, - dataset_bs, - dataset, - sampler, - total_batch_size, - num_datasets=num_datasets, - num_workers=num_workers, - ) - else: - return build_batch_data_loader( - dataset, - sampler, - total_batch_size, - aspect_ratio_grouping=aspect_ratio_grouping, - num_workers=num_workers, - ) - - -def build_multi_dataset_batch_data_loader( - use_diff_bs_size, dataset_bs, - dataset, sampler, total_batch_size, num_datasets, num_workers=0 -): - """ - """ - world_size = get_world_size() - assert ( - total_batch_size > 0 and total_batch_size % world_size == 0 - ), "Total batch size ({}) must be divisible by the number of gpus ({}).".format( - total_batch_size, world_size - ) - - batch_size = total_batch_size // world_size - data_loader = torch.utils.data.DataLoader( - dataset, - sampler=sampler, - num_workers=num_workers, - batch_sampler=None, - collate_fn=operator.itemgetter(0), # don't batch, but yield individual elements - worker_init_fn=worker_init_reset_seed, - ) # yield individual mapped dict - if use_diff_bs_size: - return DIFFMDAspectRatioGroupedDataset( - data_loader, dataset_bs, num_datasets) - else: - return MDAspectRatioGroupedDataset( - data_loader, batch_size, num_datasets) - - -def get_detection_dataset_dicts_with_source( - dataset_names, filter_empty=True, min_keypoints=0, proposal_files=None -): - assert len(dataset_names) - dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in dataset_names] - for dataset_name, dicts in zip(dataset_names, dataset_dicts): - assert len(dicts), "Dataset '{}' is empty!".format(dataset_name) - - for source_id, (dataset_name, dicts) in \ - enumerate(zip(dataset_names, dataset_dicts)): - assert len(dicts), "Dataset '{}' is empty!".format(dataset_name) - for d in dicts: - d['dataset_source'] = source_id - - if "annotations" in dicts[0]: - try: - class_names = MetadataCatalog.get(dataset_name).thing_classes - check_metadata_consistency("thing_classes", dataset_name) - print_instances_class_histogram(dicts, class_names) - except AttributeError: # class names are not available for this dataset - pass - - assert proposal_files is None - - dataset_dicts = list(itertools.chain.from_iterable(dataset_dicts)) - - has_instances = "annotations" in dataset_dicts[0] - if filter_empty and has_instances: - dataset_dicts = filter_images_with_only_crowd_annotations(dataset_dicts) - if min_keypoints > 0 and has_instances: - dataset_dicts = filter_images_with_few_keypoints(dataset_dicts, min_keypoints) - - return dataset_dicts - - -class MultiDatasetSampler(Sampler): - def __init__( - self, - dataset_dicts, - dataset_ratio, - use_rfs, - dataset_ann, - repeat_threshold=0.001, - seed: Optional[int] = None, - ): - """ - """ - sizes = [0 for _ in range(len(dataset_ratio))] - for d in dataset_dicts: - sizes[d['dataset_source']] += 1 - print('dataset sizes', sizes) - self.sizes = sizes - assert len(dataset_ratio) == len(sizes), \ - 'length of dataset ratio {} 
should be equal to the number of datasets {}'.format(
-                len(dataset_ratio), len(sizes)
-            )
-        if seed is None:
-            seed = comm.shared_random_seed()
-        self._seed = int(seed)
-        self._rank = comm.get_rank()
-        self._world_size = comm.get_world_size()
-
-        self.dataset_ids = torch.tensor(
-            [d['dataset_source'] for d in dataset_dicts], dtype=torch.long)
-
-        dataset_weight = [torch.ones(s) * max(sizes) / s * r / sum(dataset_ratio) \
-            for i, (r, s) in enumerate(zip(dataset_ratio, sizes))]
-        dataset_weight = torch.cat(dataset_weight)
-
-        rfs_factors = []
-        st = 0
-        for i, s in enumerate(sizes):
-            if use_rfs[i]:
-                if dataset_ann[i] == 'box':
-                    rfs_func = RepeatFactorTrainingSampler.repeat_factors_from_category_frequency
-                else:
-                    rfs_func = repeat_factors_from_tag_frequency
-                rfs_factor = rfs_func(
-                    dataset_dicts[st: st + s],
-                    repeat_thresh=repeat_threshold)
-                rfs_factor = rfs_factor * (s / rfs_factor.sum())
-            else:
-                rfs_factor = torch.ones(s)
-            rfs_factors.append(rfs_factor)
-            st = st + s
-        rfs_factors = torch.cat(rfs_factors)
-
-        self.weights = dataset_weight * rfs_factors
-        self.sample_epoch_size = len(self.weights)
-
-    def __iter__(self):
-        start = self._rank
-        yield from itertools.islice(
-            self._infinite_indices(), start, None, self._world_size)
-
-    def _infinite_indices(self):
-        g = torch.Generator()
-        g.manual_seed(self._seed)
-        while True:
-            ids = torch.multinomial(
-                self.weights, self.sample_epoch_size, generator=g,
-                replacement=True)
-            # Per-dataset sample counts; kept for debugging only.
-            nums = [(self.dataset_ids[ids] == i).sum().int().item() \
-                for i in range(len(self.sizes))]
-            yield from ids
-
-
-class MDAspectRatioGroupedDataset(torch.utils.data.IterableDataset):
-    def __init__(self, dataset, batch_size, num_datasets):
-        """Group images by (dataset source, aspect ratio) and batch per group."""
-        self.dataset = dataset
-        self.batch_size = batch_size
-        self._buckets = [[] for _ in range(2 * num_datasets)]
-
-    def __iter__(self):
-        for d in self.dataset:
-            w, h = d["width"], d["height"]
-            aspect_ratio_bucket_id = 0 if w > h else 1
-            bucket_id = d['dataset_source'] * 2 + aspect_ratio_bucket_id
-            bucket = self._buckets[bucket_id]
-            bucket.append(d)
-            if len(bucket) == self.batch_size:
-                yield bucket[:]
-                del bucket[:]
-
-
-class DIFFMDAspectRatioGroupedDataset(torch.utils.data.IterableDataset):
-    def __init__(self, dataset, batch_sizes, num_datasets):
-        """Same grouping as above, but with a per-dataset batch size."""
-        self.dataset = dataset
-        self.batch_sizes = batch_sizes
-        self._buckets = [[] for _ in range(2 * num_datasets)]
-
-    def __iter__(self):
-        for d in self.dataset:
-            w, h = d["width"], d["height"]
-            aspect_ratio_bucket_id = 0 if w > h else 1
-            bucket_id = d['dataset_source'] * 2 + aspect_ratio_bucket_id
-            bucket = self._buckets[bucket_id]
-            bucket.append(d)
-            if len(bucket) == self.batch_sizes[d['dataset_source']]:
-                yield bucket[:]
-                del bucket[:]
-
-
-def repeat_factors_from_tag_frequency(dataset_dicts, repeat_thresh):
-    """Compute per-image repeat factors from image-level tag frequencies."""
-    category_freq = defaultdict(int)
-    for dataset_dict in dataset_dicts:
-        cat_ids = dataset_dict['pos_category_ids']
-        for cat_id in cat_ids:
-            category_freq[cat_id] += 1
-    num_images = len(dataset_dicts)
-    for k, v in category_freq.items():
-        category_freq[k] = v / num_images
-
-    category_rep = {
-        cat_id: max(1.0, math.sqrt(repeat_thresh / cat_freq))
-        for cat_id, cat_freq in category_freq.items()
-    }
-
-    rep_factors = []
-    for dataset_dict in dataset_dicts:
-        cat_ids = dataset_dict['pos_category_ids']
-        rep_factor = max({category_rep[cat_id] for cat_id in cat_ids}, default=1.0)
-        rep_factors.append(rep_factor)
-
-    return torch.tensor(rep_factors, dtype=torch.float32)
\ No newline at end of file
diff --git a/spaces/Mediocreatmybest/PipelineImageCaption/app.py b/spaces/Mediocreatmybest/PipelineImageCaption/app.py
deleted file mode 100644
index 8637fe35de29cce47b5c5a5fb8c3b394f77aac44..0000000000000000000000000000000000000000
--- a/spaces/Mediocreatmybest/PipelineImageCaption/app.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import torch
-import gradio as gr
-from transformers import pipeline
-import ast
-
-CAPTION_MODELS = {
-    'blip-base': 'Salesforce/blip-image-captioning-base',
-    'blip-large': 'Salesforce/blip-image-captioning-large',
-    'vit-gpt2-coco-en': 'ydshieh/vit-gpt2-coco-en',
-    'blip2-2.7b_8bit': 'Mediocreatmybest/blip2-opt-2.7b_8bit',
-    'blip2-2.7b-fp16': 'Mediocreatmybest/blip2-opt-2.7b-fp16-sharded',
-}
-
-# Create a dictionary to store loaded models
-loaded_models = {}
-
-# Simple caption creation
-def caption_image(model_choice, image_input, url_inputs, load_in_8bit, device):
-    if image_input is not None:
-        input_data = [image_input]
-    else:
-        input_data = ast.literal_eval(url_inputs)  # interpret the input string as a list
-
-    captions = []
-    model_key = (model_choice, load_in_8bit)  # Create a tuple to represent the unique combination of model and 8bit loading
-
-    # Check if the model is already loaded
-    if model_key in loaded_models:
-        captioner = loaded_models[model_key]
-    else:
-        model_kwargs = {"load_in_8bit": load_in_8bit} if load_in_8bit else {}
-        dtype = torch.float16 if load_in_8bit else torch.float32  # Set dtype based on the value of load_in_8bit
-        captioner = pipeline(task="image-to-text",
-                             model=CAPTION_MODELS[model_choice],
-                             max_new_tokens=30,
-                             device=device,  # Use selected device
-                             model_kwargs=model_kwargs,
-                             torch_dtype=dtype,  # Set the floating point
-                             use_fast=True
-                             )
-        # Store the loaded model
-        loaded_models[model_key] = captioner
-
-    for input_item in input_data:
-        caption = captioner(input_item)[0]['generated_text']
-        captions.append(str(caption).strip())
-    # A single output Textbox expects one value, so join multiple captions
-    return '\n'.join(captions)
-
-def launch(model_choice, image_input, url_inputs, load_in_8bit, device):
-    return caption_image(model_choice, image_input, url_inputs, load_in_8bit, device)
-
-model_dropdown = gr.Dropdown(choices=list(CAPTION_MODELS.keys()), label='Select Caption Model')
-image_input = gr.Image(type="pil", label="Input Image")  # gr.Image takes a single image; use the URL box for batches
-url_inputs = gr.Textbox(label="Input URLs (Python list format)", placeholder="['url1', 'url2', 'url3']")
-load_in_8bit = gr.Checkbox(label="Load model in 8bit")
-device = gr.Radio(['cpu', 'cuda'], label='Select device', value='cpu')
-
-iface = gr.Interface(launch, inputs=[model_dropdown, image_input, url_inputs, load_in_8bit, device],
-                     outputs=gr.Textbox(label="Caption"))
-iface.launch()
\ No newline at end of file
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/drop.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/drop.py
deleted file mode 100644
index b7b4fccd457a0d51fb10c789df3c8537fe7b67c1..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/drop.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.nn as nn
-
-from annotator.uniformer.mmcv import build_from_cfg
-from .registry import DROPOUT_LAYERS
-
-
-def drop_path(x, drop_prob=0., training=False):
-    """Drop paths (Stochastic Depth) per sample (when applied in main path of
-    residual blocks).
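-    Each sample's residual branch is zeroed with probability ``drop_prob`` and
-    surviving samples are rescaled by ``1 / keep_prob``, so the output equals
-    the input in expectation.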
- - We follow the implementation - https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py # noqa: E501 - """ - if drop_prob == 0. or not training: - return x - keep_prob = 1 - drop_prob - # handle tensors with different dimensions, not just 4D tensors. - shape = (x.shape[0], ) + (1, ) * (x.ndim - 1) - random_tensor = keep_prob + torch.rand( - shape, dtype=x.dtype, device=x.device) - output = x.div(keep_prob) * random_tensor.floor() - return output - - -@DROPOUT_LAYERS.register_module() -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). - - We follow the implementation - https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py # noqa: E501 - - Args: - drop_prob (float): Probability of the path to be zeroed. Default: 0.1 - """ - - def __init__(self, drop_prob=0.1): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - -@DROPOUT_LAYERS.register_module() -class Dropout(nn.Dropout): - """A wrapper for ``torch.nn.Dropout``, We rename the ``p`` of - ``torch.nn.Dropout`` to ``drop_prob`` so as to be consistent with - ``DropPath`` - - Args: - drop_prob (float): Probability of the elements to be - zeroed. Default: 0.5. - inplace (bool): Do the operation inplace or not. Default: False. - """ - - def __init__(self, drop_prob=0.5, inplace=False): - super().__init__(p=drop_prob, inplace=inplace) - - -def build_dropout(cfg, default_args=None): - """Builder for drop out layers.""" - return build_from_cfg(cfg, DROPOUT_LAYERS, default_args) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/models/diffusion/ddim.py b/spaces/Mellow-ai/PhotoAI_Mellow/ldm/models/diffusion/ddim.py deleted file mode 100644 index 27ead0ea914c64c747b64e690662899fb3801144..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/models/diffusion/ddim.py +++ /dev/null @@ -1,336 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, extract_into_tensor - - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', 
to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... - dynamic_threshold=None, - ucg_schedule=None, - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - ctmp = conditioning[list(conditioning.keys())[0]] - while isinstance(ctmp, list): ctmp = ctmp[0] - cbs = ctmp.shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - - elif isinstance(conditioning, list): - for ctmp in conditioning: - if ctmp.shape[0] != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold, - ucg_schedule=ucg_schedule - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, dynamic_threshold=None, - 
ucg_schedule=None): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. - mask) * img - - if ucg_schedule is not None: - assert len(ucg_schedule) == len(time_range) - unconditional_guidance_scale = ucg_schedule[i] - - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, - dynamic_threshold=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - model_output = self.model.apply_model(x, t, c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - if isinstance(c, dict): - assert isinstance(unconditional_conditioning, dict) - c_in = dict() - for k in c: - if isinstance(c[k], list): - c_in[k] = [torch.cat([ - unconditional_conditioning[k][i], - c[k][i]]) for i in range(len(c[k]))] - else: - c_in[k] = torch.cat([ - unconditional_conditioning[k], - c[k]]) - elif isinstance(c, list): - c_in = list() - assert isinstance(unconditional_conditioning, list) - for i in range(len(c)): - c_in.append(torch.cat([unconditional_conditioning[i], c[i]])) - else: - c_in = torch.cat([unconditional_conditioning, c]) - model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in).chunk(2) - model_output = model_uncond + unconditional_guidance_scale * (model_t - model_uncond) - - if self.model.parameterization == "v": - e_t = self.model.predict_eps_from_z_and_v(x, t, model_output) - else: - e_t = model_output - - if score_corrector is not None: - assert self.model.parameterization == "eps", 'not implemented' - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - 
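-        # The remainder of this method is the standard DDIM update: gather
-        # a_t, a_{t-1} and sigma_t for the current index, recover
-        # pred_x0 = (x - sqrt(1 - a_t) * e_t) / sqrt(a_t), then form
-        # x_{t-1} = sqrt(a_{t-1}) * pred_x0
-        #           + sqrt(1 - a_{t-1} - sigma_t^2) * e_t + sigma_t * noise.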
alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - if self.model.parameterization != "v": - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - else: - pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output) - - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - - if dynamic_threshold is not None: - raise NotImplementedError() - - # direction pointing to x_t - dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - @torch.no_grad() - def encode(self, x0, c, t_enc, use_original_steps=False, return_intermediates=None, - unconditional_guidance_scale=1.0, unconditional_conditioning=None, callback=None): - num_reference_steps = self.ddpm_num_timesteps if use_original_steps else self.ddim_timesteps.shape[0] - - assert t_enc <= num_reference_steps - num_steps = t_enc - - if use_original_steps: - alphas_next = self.alphas_cumprod[:num_steps] - alphas = self.alphas_cumprod_prev[:num_steps] - else: - alphas_next = self.ddim_alphas[:num_steps] - alphas = torch.tensor(self.ddim_alphas_prev[:num_steps]) - - x_next = x0 - intermediates = [] - inter_steps = [] - for i in tqdm(range(num_steps), desc='Encoding Image'): - t = torch.full((x0.shape[0],), i, device=self.model.device, dtype=torch.long) - if unconditional_guidance_scale == 1.: - noise_pred = self.model.apply_model(x_next, t, c) - else: - assert unconditional_conditioning is not None - e_t_uncond, noise_pred = torch.chunk( - self.model.apply_model(torch.cat((x_next, x_next)), torch.cat((t, t)), - torch.cat((unconditional_conditioning, c))), 2) - noise_pred = e_t_uncond + unconditional_guidance_scale * (noise_pred - e_t_uncond) - - xt_weighted = (alphas_next[i] / alphas[i]).sqrt() * x_next - weighted_noise_pred = alphas_next[i].sqrt() * ( - (1 / alphas_next[i] - 1).sqrt() - (1 / alphas[i] - 1).sqrt()) * noise_pred - x_next = xt_weighted + weighted_noise_pred - if return_intermediates and i % ( - num_steps // return_intermediates) == 0 and i < num_steps - 1: - intermediates.append(x_next) - inter_steps.append(i) - elif return_intermediates and i >= num_steps - 2: - intermediates.append(x_next) - inter_steps.append(i) - if callback: callback(i) - - out = {'x_encoded': x_next, 'intermediate_steps': inter_steps} - if return_intermediates: - out.update({'intermediates': intermediates}) - return x_next, out - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if 
use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise) - - @torch.no_grad() - def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False, callback=None): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long) - x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - if callback: callback(i) - return x_dec \ No newline at end of file diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/memory/redismem.py b/spaces/MetaWabbit/Auto-GPT/autogpt/memory/redismem.py deleted file mode 100644 index 082a812c5362cc9f19e35bf1bb10269b558f7724..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/autogpt/memory/redismem.py +++ /dev/null @@ -1,156 +0,0 @@ -"""Redis memory provider.""" -from __future__ import annotations - -from typing import Any - -import numpy as np -import redis -from colorama import Fore, Style -from redis.commands.search.field import TextField, VectorField -from redis.commands.search.indexDefinition import IndexDefinition, IndexType -from redis.commands.search.query import Query - -from autogpt.llm_utils import create_embedding_with_ada -from autogpt.logs import logger -from autogpt.memory.base import MemoryProviderSingleton - -SCHEMA = [ - TextField("data"), - VectorField( - "embedding", - "HNSW", - {"TYPE": "FLOAT32", "DIM": 1536, "DISTANCE_METRIC": "COSINE"}, - ), -] - - -class RedisMemory(MemoryProviderSingleton): - def __init__(self, cfg): - """ - Initializes the Redis memory provider. - - Args: - cfg: The config object. - - Returns: None - """ - redis_host = cfg.redis_host - redis_port = cfg.redis_port - redis_password = cfg.redis_password - self.dimension = 1536 - self.redis = redis.Redis( - host=redis_host, - port=redis_port, - password=redis_password, - db=0, # Cannot be changed - ) - self.cfg = cfg - - # Check redis connection - try: - self.redis.ping() - except redis.ConnectionError as e: - logger.typewriter_log( - "FAILED TO CONNECT TO REDIS", - Fore.RED, - Style.BRIGHT + str(e) + Style.RESET_ALL, - ) - logger.double_check( - "Please ensure you have setup and configured Redis properly for use. " - + f"You can check out {Fore.CYAN + Style.BRIGHT}" - f"https://github.com/Torantulino/Auto-GPT#redis-setup{Style.RESET_ALL}" - " to ensure you've set up everything correctly." 
- ) - exit(1) - - if cfg.wipe_redis_on_start: - self.redis.flushall() - try: - self.redis.ft(f"{cfg.memory_index}").create_index( - fields=SCHEMA, - definition=IndexDefinition( - prefix=[f"{cfg.memory_index}:"], index_type=IndexType.HASH - ), - ) - except Exception as e: - print("Error creating Redis search index: ", e) - existing_vec_num = self.redis.get(f"{cfg.memory_index}-vec_num") - self.vec_num = int(existing_vec_num.decode("utf-8")) if existing_vec_num else 0 - - def add(self, data: str) -> str: - """ - Adds a data point to the memory. - - Args: - data: The data to add. - - Returns: Message indicating that the data has been added. - """ - if "Command Error:" in data: - return "" - vector = create_embedding_with_ada(data) - vector = np.array(vector).astype(np.float32).tobytes() - data_dict = {b"data": data, "embedding": vector} - pipe = self.redis.pipeline() - pipe.hset(f"{self.cfg.memory_index}:{self.vec_num}", mapping=data_dict) - _text = ( - f"Inserting data into memory at index: {self.vec_num}:\n" f"data: {data}" - ) - self.vec_num += 1 - pipe.set(f"{self.cfg.memory_index}-vec_num", self.vec_num) - pipe.execute() - return _text - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - - Args: - data: The data to compare to. - - Returns: The most relevant data. - """ - return self.get_relevant(data, 1) - - def clear(self) -> str: - """ - Clears the redis server. - - Returns: A message indicating that the memory has been cleared. - """ - self.redis.flushall() - return "Obliviated" - - def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None: - """ - Returns all the data in the memory that is relevant to the given data. - Args: - data: The data to compare to. - num_relevant: The number of relevant data to return. - - Returns: A list of the most relevant data. - """ - query_embedding = create_embedding_with_ada(data) - base_query = f"*=>[KNN {num_relevant} @embedding $vector AS vector_score]" - query = ( - Query(base_query) - .return_fields("data", "vector_score") - .sort_by("vector_score") - .dialect(2) - ) - query_vector = np.array(query_embedding).astype(np.float32).tobytes() - - try: - results = self.redis.ft(f"{self.cfg.memory_index}").search( - query, query_params={"vector": query_vector} - ) - except Exception as e: - print("Error calling Redis search: ", e) - return None - return [result.data for result in results.docs] - - def get_stats(self): - """ - Returns: The stats of the memory index. 
- """ - return self.redis.ft(f"{self.cfg.memory_index}").info() diff --git a/spaces/MohitGupta/Eng2Indic_Translitration/transliteration/transformer/base_engine.py b/spaces/MohitGupta/Eng2Indic_Translitration/transliteration/transformer/base_engine.py deleted file mode 100644 index 0436127b5e5bb31c57f238a5dc9e4d2588ab2615..0000000000000000000000000000000000000000 --- a/spaces/MohitGupta/Eng2Indic_Translitration/transliteration/transformer/base_engine.py +++ /dev/null @@ -1,371 +0,0 @@ -import os -import re -import tqdm -import ujson -from pydload import dload -import zipfile -from abc import ABC, abstractmethod, abstractproperty -from indicnlp.normalize.indic_normalize import IndicNormalizerFactory -from urduhack import normalize as shahmukhi_normalize - -from ..utils import * -LANG_WORD_REGEXES = { - lang_name: re.compile(f"[{SCRIPT_CODE_TO_UNICODE_CHARS_RANGE_STR[script_name]}]+") - for lang_name, script_name in LANG_CODE_TO_SCRIPT_CODE.items() -} - -MODEL_FILE = 'transformer/pytorch_model.pt' -DICTS_FOLDER = 'word_prob_dicts' -CHARS_FOLDER = 'corpus-bin' -DICT_FILE_FORMAT = '%s_word_prob_dict.json' -LANG_LIST_FILE = './lang_list.txt' - -normalizer_factory = IndicNormalizerFactory() - -class BaseEngineTransformer(ABC): - - @abstractproperty - def all_supported_langs(self): - pass - - @abstractproperty - def tgt_langs(self): - pass - - def __init__(self, models_path, beam_width, rescore): - # added by yash - - print("Initializing Multilingual model for transliteration") - if 'en' in self.tgt_langs: - lang_pairs_csv = ','.join([lang+"-en" for lang in self.all_supported_langs]) - else: - lang_pairs_csv = ','.join(["en-"+lang for lang in self.all_supported_langs]) - - # initialize the model - from .custom_interactive import Transliterator - self.transliterator = Transliterator( - os.path.join(models_path, CHARS_FOLDER), - os.path.join(models_path, MODEL_FILE), - lang_pairs_csv = lang_pairs_csv, - lang_list_file = os.path.join(models_path, LANG_LIST_FILE), - beam = beam_width, batch_size = 32, - ) - - self.beam_width = beam_width - self._rescore = rescore - if self._rescore: - # loading the word_prob_dict for rescoring module - dicts_folder = os.path.join(models_path, DICTS_FOLDER) - self.word_prob_dicts = {} - for la in tqdm.tqdm(self.tgt_langs, desc="Loading dicts into RAM"): - self.word_prob_dicts[la] = ujson.load(open( - os.path.join(dicts_folder, DICT_FILE_FORMAT%la) - )) - - def download_models(self, models_path, download_url): - ''' - Download models from bucket - ''' - # added by yash - model_file_path = os.path.join(models_path, MODEL_FILE) - if not os.path.isfile(model_file_path): - print('Downloading Multilingual model for transliteration') - remote_url = download_url - downloaded_zip_path = os.path.join(models_path, 'model.zip') - - dload(url=remote_url, save_to_path=downloaded_zip_path, max_time=None) - - if not os.path.isfile(downloaded_zip_path): - exit(f'ERROR: Unable to download model from {remote_url} into {models_path}') - - with zipfile.ZipFile(downloaded_zip_path, 'r') as zip_ref: - zip_ref.extractall(models_path) - - if os.path.isfile(model_file_path): - os.remove(downloaded_zip_path) - else: - exit(f'ERROR: Unable to find models in {models_path} after download') - - print("Models downloaded to:", models_path) - print("NOTE: When uninstalling this library, REMEMBER to delete the models manually") - return model_file_path - - def download_dicts(self, models_path, download_url): - ''' - Download language model probablitites dictionaries - ''' - dicts_folder = 
os.path.join(models_path, DICTS_FOLDER) - if not os.path.isdir(dicts_folder): - # added by yash - print('Downloading language model probablitites dictionaries for rescoring module') - remote_url = download_url - downloaded_zip_path = os.path.join(models_path, 'dicts.zip') - - dload(url=remote_url, save_to_path=downloaded_zip_path, max_time=None) - - if not os.path.isfile(downloaded_zip_path): - exit(f'ERROR: Unable to download model from {remote_url} into {models_path}') - - with zipfile.ZipFile(downloaded_zip_path, 'r') as zip_ref: - zip_ref.extractall(models_path) - - if os.path.isdir(dicts_folder): - os.remove(downloaded_zip_path) - else: - exit(f'ERROR: Unable to find models in {models_path} after download') - return dicts_folder - - def indic_normalize(self, words, lang_code): - normalizer = normalizer_factory.get_normalizer('hi') - words = [ normalizer.normalize(word) for word in words ] - return words - - def pre_process(self, words, src_lang, tgt_lang): - # convert the word into sentence which contains space separated chars - words = [' '.join(list(word.lower())) for word in words] - - lang_code = tgt_lang if src_lang == 'en' else src_lang - # adding language token - words = ['__'+ lang_code +'__ ' + word for word in words] - - return words - - def rescore(self, res_dict, result_dict, tgt_lang, alpha ): - - alpha = alpha - # word_prob_dict = {} - word_prob_dict = self.word_prob_dicts[tgt_lang] - - candidate_word_prob_norm_dict = {} - candidate_word_result_norm_dict = {} - - input_data = {} - for i in res_dict.keys(): - input_data[res_dict[i]['S']] = [] - for j in range(len(res_dict[i]['H'])): - input_data[res_dict[i]['S']].append( res_dict[i]['H'][j][0] ) - - for src_word in input_data.keys(): - candidates = input_data[src_word] - - candidates = [' '.join(word.split(' ')) for word in candidates] - - total_score = 0 - - if src_word.lower() in result_dict.keys(): - for candidate_word in candidates: - if candidate_word in result_dict[src_word.lower()].keys(): - total_score += result_dict[src_word.lower()][candidate_word] - - candidate_word_result_norm_dict[src_word.lower()] = {} - - for candidate_word in candidates: - candidate_word_result_norm_dict[src_word.lower()][candidate_word] = (result_dict[src_word.lower()][candidate_word]/total_score) - - candidates = [''.join(word.split(' ')) for word in candidates ] - - total_prob = 0 - - for candidate_word in candidates: - if candidate_word in word_prob_dict.keys(): - total_prob += word_prob_dict[candidate_word] - - candidate_word_prob_norm_dict[src_word.lower()] = {} - for candidate_word in candidates: - if candidate_word in word_prob_dict.keys(): - candidate_word_prob_norm_dict[src_word.lower()][candidate_word] = (word_prob_dict[candidate_word]/total_prob) - - output_data = {} - for src_word in input_data.keys(): - - temp_candidates_tuple_list = [] - candidates = input_data[src_word] - candidates = [ ''.join(word.split(' ')) for word in candidates] - - - for candidate_word in candidates: - if candidate_word in word_prob_dict.keys(): - temp_candidates_tuple_list.append((candidate_word, alpha*candidate_word_result_norm_dict[src_word.lower()][' '.join(list(candidate_word))] + (1-alpha)*candidate_word_prob_norm_dict[src_word.lower()][candidate_word] )) - else: - temp_candidates_tuple_list.append((candidate_word, 0 )) - - temp_candidates_tuple_list.sort(key = lambda x: x[1], reverse = True ) - - temp_candidates_list = [] - for cadidate_tuple in temp_candidates_tuple_list: - temp_candidates_list.append(' '.join(list(cadidate_tuple[0]))) - - 
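-            # Net effect: candidates are ranked by
-            #   alpha * P_translit(candidate | src_word) + (1 - alpha) * P_LM(candidate),
-            # with both terms renormalized over this word's n-best list;
-            # candidates missing from the LM dictionary get a score of 0.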
output_data[src_word] = temp_candidates_list - - return output_data - - def post_process(self, translation_str, tgt_lang): - lines = translation_str.split('\n') - - list_s = [line for line in lines if 'S-' in line] - # list_t = [line for line in lines if 'T-' in line] - list_h = [line for line in lines if 'H-' in line] - # list_d = [line for line in lines if 'D-' in line] - - list_s.sort(key = lambda x: int(x.split('\t')[0].split('-')[1]) ) - # list_t.sort(key = lambda x: int(x.split('\t')[0].split('-')[1]) ) - list_h.sort(key = lambda x: int(x.split('\t')[0].split('-')[1]) ) - # list_d.sort(key = lambda x: int(x.split('\t')[0].split('-')[1]) ) - - res_dict = {} - for s in list_s: - s_id = int(s.split('\t')[0].split('-')[1]) - - res_dict[s_id] = { 'S' : s.split('\t')[1] } - - # for t in list_t: - # t_id = int(t.split('\t')[0].split('-')[1]) - # if s_id == t_id: - # res_dict[s_id]['T'] = t.split('\t')[1] - - res_dict[s_id]['H'] = [] - # res_dict[s_id]['D'] = [] - - for h in list_h: - h_id = int(h.split('\t')[0].split('-')[1]) - - if s_id == h_id: - res_dict[s_id]['H'].append( ( h.split('\t')[2], pow(2,float(h.split('\t')[1])) ) ) - - # for d in list_d: - # d_id = int(d.split('\t')[0].split('-')[1]) - - # if s_id == d_id: - # res_dict[s_id]['D'].append( ( d.split('\t')[2], pow(2,float(d.split('\t')[1])) ) ) - - for r in res_dict.keys(): - res_dict[r]['H'].sort(key = lambda x : float(x[1]) ,reverse =True) - # res_dict[r]['D'].sort(key = lambda x : float(x[1]) ,reverse =True) - - - # for rescoring - result_dict = {} - for i in res_dict.keys(): - result_dict[res_dict[i]['S']] = {} - for j in range(len(res_dict[i]['H'])): - result_dict[res_dict[i]['S']][res_dict[i]['H'][j][0]] = res_dict[i]['H'][j][1] - - - transliterated_word_list = [] - if self._rescore: - output_dir = self.rescore(res_dict, result_dict, tgt_lang, alpha = 0.9) - for src_word in output_dir.keys(): - for j in range(len(output_dir[src_word])): - transliterated_word_list.append( output_dir[src_word][j] ) - - else: - for i in res_dict.keys(): - # transliterated_word_list.append( res_dict[i]['S'] + ' : ' + res_dict[i]['H'][0][0] ) - for j in range(len(res_dict[i]['H'])): - transliterated_word_list.append( res_dict[i]['H'][j][0] ) - - # remove extra spaces - # transliterated_word_list = [''.join(pair.split(':')[0].split(' ')[1:]) + ' : ' + ''.join(pair.split(':')[1].split(' ')) for pair in transliterated_word_list] - - transliterated_word_list = [''.join(word.split(' ')) for word in transliterated_word_list] - - return transliterated_word_list - - def _transliterate_word(self, text, src_lang, tgt_lang, topk=4, nativize_punctuations=True, nativize_numerals=False): - if not text: - return text - text = text.lower().strip() - - if src_lang != 'en': - # Our model does not transliterate native punctuations or numerals - # So process them first so that they are not considered for transliteration - text = text.translate(INDIC_TO_LATIN_PUNCT_TRANSLATOR) - text = text.translate(INDIC_TO_STANDARD_NUMERALS_TRANSLATOR) - else: - # Transliterate punctuations & numerals if tgt_lang is Indic - if nativize_punctuations: - if tgt_lang in RTL_LANG_CODES: - text = text.translate(LATIN_TO_PERSOARABIC_PUNC_TRANSLATOR) - text = nativize_latin_fullstop(text, tgt_lang) - if nativize_numerals: - text = text.translate(LATIN_TO_NATIVE_NUMERALS_TRANSLATORS[tgt_lang]) - - matches = LANG_WORD_REGEXES[src_lang].findall(text) - - if not matches: - return [text] - - src_word = matches[-1] - - transliteration_list = self.batch_transliterate_words([src_word], 
src_lang, tgt_lang, topk=topk)[0]
-
-        if tgt_lang != 'en' and tgt_lang != 'sa':
-            # If users want to avoid yuktAkshara, this is facilitated by allowing them to type subwords in order to construct a word
-            # For example, "ଜନ୍‍ସନ୍‍ଙ୍କୁ" can be written by "ଜନ୍‍" + "ସନ୍‍" + "କୁ"
-            # Not enabled for Sanskrit, as sandhi compounds are generally written word-by-word
-            for i in range(len(transliteration_list)):
-                transliteration_list[i] = hardfix_wordfinal_virama(transliteration_list[i])
-
-        if src_word == text:
-            return transliteration_list
-
-        return [
-            rreplace(text, src_word, tgt_word)
-            for tgt_word in transliteration_list
-        ]
-
-    def batch_transliterate_words(self, words, src_lang, tgt_lang, topk=4):
-        preprocessed_words = self.pre_process(words, src_lang, tgt_lang)
-        translation_str = self.transliterator.translate(preprocessed_words, nbest=topk)
-
-        # FIXME: Handle properly in `post_process()` to return results for all words
-        transliteration_list = self.post_process(translation_str, tgt_lang)
-
-        # Lang-specific patches. TODO: Move to indic-nlp-library
-        if tgt_lang == 'mr':
-            for i in range(len(transliteration_list)):
-                transliteration_list[i] = transliteration_list[i].replace("अॅ", 'ॲ')
-
-        if tgt_lang == 'or':
-            for i in range(len(transliteration_list)):
-                transliteration_list[i] = fix_odia_confusing_ambiguous_yuktakshara(transliteration_list[i])
-
-        if tgt_lang == 'sa':
-            for i in range(len(transliteration_list)):
-                transliteration_list[i] = explicit_devanagari_wordfinal_schwa_delete(words[0], transliteration_list[i])
-        # Retain only unique, preserving order
-        transliteration_list = list(dict.fromkeys(transliteration_list))
-
-        return [transliteration_list]
-
-    def _transliterate_sentence(self, text, src_lang, tgt_lang, nativize_punctuations=True, nativize_numerals=False):
-        # TODO: Minimize code redundancy with `_transliterate_word()`
-
-        if not text:
-            return text
-        text = text.lower().strip()
-
-        if src_lang != 'en':
-            # Our model does not transliterate native punctuations or numerals
-            # So process them first so that they are not considered for transliteration
-            text = text.translate(INDIC_TO_LATIN_PUNCT_TRANSLATOR)
-            text = text.translate(INDIC_TO_STANDARD_NUMERALS_TRANSLATOR)
-        else:
-            # Transliterate punctuations & numerals if tgt_lang is Indic
-            if nativize_punctuations:
-                if tgt_lang in RTL_LANG_CODES:
-                    text = text.translate(LATIN_TO_PERSOARABIC_PUNC_TRANSLATOR)
-                text = nativize_latin_fullstop(text, tgt_lang)
-            if nativize_numerals:
-                text = text.translate(LATIN_TO_NATIVE_NUMERALS_TRANSLATORS[tgt_lang])
-
-        matches = LANG_WORD_REGEXES[src_lang].findall(text)
-
-        if not matches:
-            return text
-
-        out_str = text
-        for match in matches:
-            result = self.batch_transliterate_words([match], src_lang, tgt_lang)[0][0]
-            out_str = re.sub(match, result, out_str, 1)
-        return out_str
diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/data/create_finetuning_data.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/data/create_finetuning_data.py
deleted file mode 100644
index 8fae97e127680d8828d23442ecd7592abb39b584..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/nlp/data/create_finetuning_data.py
+++ /dev/null
@@ -1,316 +0,0 @@
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""BERT finetuning task dataset generator.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import functools -import json -import os - -from absl import app -from absl import flags -import tensorflow as tf -from official.nlp.bert import tokenization -from official.nlp.data import classifier_data_lib -from official.nlp.data import sentence_retrieval_lib -# word-piece tokenizer based squad_lib -from official.nlp.data import squad_lib as squad_lib_wp -# sentence-piece tokenizer based squad_lib -from official.nlp.data import squad_lib_sp - -FLAGS = flags.FLAGS - -flags.DEFINE_enum( - "fine_tuning_task_type", "classification", - ["classification", "regression", "squad", "retrieval"], - "The name of the BERT fine tuning task for which data " - "will be generated..") - -# BERT classification specific flags. -flags.DEFINE_string( - "input_data_dir", None, - "The input data dir. Should contain the .tsv files (or other data files) " - "for the task.") - -flags.DEFINE_enum("classification_task_name", "MNLI", - ["COLA", "MNLI", "MRPC", "QNLI", "QQP", "SST-2", "XNLI", - "PAWS-X", "XTREME-XNLI", "XTREME-PAWS-X"], - "The name of the task to train BERT classifier. The " - "difference between XTREME-XNLI and XNLI is: 1. the format " - "of input tsv files; 2. the dev set for XTREME is english " - "only and for XNLI is all languages combined. Same for " - "PAWS-X.") - -flags.DEFINE_enum("retrieval_task_name", "bucc", ["bucc", "tatoeba"], - "The name of sentence retrieval task for scoring") - -# XNLI task specific flag. -flags.DEFINE_string( - "xnli_language", "en", - "Language of training data for XNIL task. If the value is 'all', the data " - "of all languages will be used for training.") - -# PAWS-X task specific flag. -flags.DEFINE_string( - "pawsx_language", "en", - "Language of trainig data for PAWS-X task. If the value is 'all', the data " - "of all languages will be used for training.") - -# BERT Squad task specific flags. -flags.DEFINE_string( - "squad_data_file", None, - "The input data file in for generating training data for BERT squad task.") - -flags.DEFINE_integer( - "doc_stride", 128, - "When splitting up a long document into chunks, how much stride to " - "take between chunks.") - -flags.DEFINE_integer( - "max_query_length", 64, - "The maximum number of tokens for the question. Questions longer than " - "this will be truncated to this length.") - -flags.DEFINE_bool( - "version_2_with_negative", False, - "If true, the SQuAD examples contain some that do not have an answer.") - -# Shared flags across BERT fine-tuning tasks. 
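-# Illustrative invocation for MNLI classification (all paths below are
-# placeholders, not files shipped with this script):
-#
-#   python create_finetuning_data.py \
-#     --fine_tuning_task_type=classification \
-#     --classification_task_name=MNLI \
-#     --input_data_dir=/tmp/glue/MNLI \
-#     --vocab_file=/tmp/bert/vocab.txt \
-#     --train_data_output_path=/tmp/mnli_train.tf_record \
-#     --eval_data_output_path=/tmp/mnli_eval.tf_record \
-#     --meta_data_file_path=/tmp/mnli_meta_data \
-#     --max_seq_length=128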
-flags.DEFINE_string("vocab_file", None, - "The vocabulary file that the BERT model was trained on.") - -flags.DEFINE_string( - "train_data_output_path", None, - "The path in which generated training input data will be written as tf" - " records.") - -flags.DEFINE_string( - "eval_data_output_path", None, - "The path in which generated evaluation input data will be written as tf" - " records.") - -flags.DEFINE_string( - "test_data_output_path", None, - "The path in which generated test input data will be written as tf" - " records. If None, do not generate test data. Must be a pattern template" - " as test_{}.tfrecords if processor has language specific test data.") - -flags.DEFINE_string("meta_data_file_path", None, - "The path in which input meta data will be written.") - -flags.DEFINE_bool( - "do_lower_case", True, - "Whether to lower case the input text. Should be True for uncased " - "models and False for cased models.") - -flags.DEFINE_integer( - "max_seq_length", 128, - "The maximum total input sequence length after WordPiece tokenization. " - "Sequences longer than this will be truncated, and sequences shorter " - "than this will be padded.") - -flags.DEFINE_string("sp_model_file", "", - "The path to the model used by sentence piece tokenizer.") - -flags.DEFINE_enum( - "tokenizer_impl", "word_piece", ["word_piece", "sentence_piece"], - "Specifies the tokenizer implementation, i.e., whehter to use word_piece " - "or sentence_piece tokenizer. Canonical BERT uses word_piece tokenizer, " - "while ALBERT uses sentence_piece tokenizer.") - -flags.DEFINE_string("tfds_params", "", - "Comma-separated list of TFDS parameter assigments for " - "generic classfication data import (for more details " - "see the TfdsProcessor class documentation).") - - -def generate_classifier_dataset(): - """Generates classifier dataset and returns input meta data.""" - assert (FLAGS.input_data_dir and FLAGS.classification_task_name - or FLAGS.tfds_params) - - if FLAGS.tokenizer_impl == "word_piece": - tokenizer = tokenization.FullTokenizer( - vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case) - processor_text_fn = tokenization.convert_to_unicode - else: - assert FLAGS.tokenizer_impl == "sentence_piece" - tokenizer = tokenization.FullSentencePieceTokenizer(FLAGS.sp_model_file) - processor_text_fn = functools.partial( - tokenization.preprocess_text, lower=FLAGS.do_lower_case) - - if FLAGS.tfds_params: - processor = classifier_data_lib.TfdsProcessor( - tfds_params=FLAGS.tfds_params, - process_text_fn=processor_text_fn) - return classifier_data_lib.generate_tf_record_from_data_file( - processor, - None, - tokenizer, - train_data_output_path=FLAGS.train_data_output_path, - eval_data_output_path=FLAGS.eval_data_output_path, - test_data_output_path=FLAGS.test_data_output_path, - max_seq_length=FLAGS.max_seq_length) - else: - processors = { - "cola": - classifier_data_lib.ColaProcessor, - "mnli": - classifier_data_lib.MnliProcessor, - "mrpc": - classifier_data_lib.MrpcProcessor, - "qnli": - classifier_data_lib.QnliProcessor, - "qqp": classifier_data_lib.QqpProcessor, - "rte": classifier_data_lib.RteProcessor, - "sst-2": - classifier_data_lib.SstProcessor, - "xnli": - functools.partial(classifier_data_lib.XnliProcessor, - language=FLAGS.xnli_language), - "paws-x": - functools.partial(classifier_data_lib.PawsxProcessor, - language=FLAGS.pawsx_language), - "xtreme-xnli": - functools.partial(classifier_data_lib.XtremeXnliProcessor), - "xtreme-paws-x": - 
functools.partial(classifier_data_lib.XtremePawsxProcessor) - } - task_name = FLAGS.classification_task_name.lower() - if task_name not in processors: - raise ValueError("Task not found: %s" % (task_name)) - - processor = processors[task_name](process_text_fn=processor_text_fn) - return classifier_data_lib.generate_tf_record_from_data_file( - processor, - FLAGS.input_data_dir, - tokenizer, - train_data_output_path=FLAGS.train_data_output_path, - eval_data_output_path=FLAGS.eval_data_output_path, - test_data_output_path=FLAGS.test_data_output_path, - max_seq_length=FLAGS.max_seq_length) - - -def generate_regression_dataset(): - """Generates regression dataset and returns input meta data.""" - if FLAGS.tokenizer_impl == "word_piece": - tokenizer = tokenization.FullTokenizer( - vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case) - processor_text_fn = tokenization.convert_to_unicode - else: - assert FLAGS.tokenizer_impl == "sentence_piece" - tokenizer = tokenization.FullSentencePieceTokenizer(FLAGS.sp_model_file) - processor_text_fn = functools.partial( - tokenization.preprocess_text, lower=FLAGS.do_lower_case) - - if FLAGS.tfds_params: - processor = classifier_data_lib.TfdsProcessor( - tfds_params=FLAGS.tfds_params, - process_text_fn=processor_text_fn) - return classifier_data_lib.generate_tf_record_from_data_file( - processor, - None, - tokenizer, - train_data_output_path=FLAGS.train_data_output_path, - eval_data_output_path=FLAGS.eval_data_output_path, - test_data_output_path=FLAGS.test_data_output_path, - max_seq_length=FLAGS.max_seq_length) - else: - raise ValueError("No data processor found for the given regression task.") - - -def generate_squad_dataset(): - """Generates squad training dataset and returns input meta data.""" - assert FLAGS.squad_data_file - if FLAGS.tokenizer_impl == "word_piece": - return squad_lib_wp.generate_tf_record_from_json_file( - FLAGS.squad_data_file, FLAGS.vocab_file, FLAGS.train_data_output_path, - FLAGS.max_seq_length, FLAGS.do_lower_case, FLAGS.max_query_length, - FLAGS.doc_stride, FLAGS.version_2_with_negative) - else: - assert FLAGS.tokenizer_impl == "sentence_piece" - return squad_lib_sp.generate_tf_record_from_json_file( - FLAGS.squad_data_file, FLAGS.sp_model_file, - FLAGS.train_data_output_path, FLAGS.max_seq_length, FLAGS.do_lower_case, - FLAGS.max_query_length, FLAGS.doc_stride, FLAGS.version_2_with_negative) - - -def generate_retrieval_dataset(): - """Generate retrieval test and dev dataset and returns input meta data.""" - assert (FLAGS.input_data_dir and FLAGS.retrieval_task_name) - if FLAGS.tokenizer_impl == "word_piece": - tokenizer = tokenization.FullTokenizer( - vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case) - processor_text_fn = tokenization.convert_to_unicode - else: - assert FLAGS.tokenizer_impl == "sentence_piece" - tokenizer = tokenization.FullSentencePieceTokenizer(FLAGS.sp_model_file) - processor_text_fn = functools.partial( - tokenization.preprocess_text, lower=FLAGS.do_lower_case) - - processors = { - "bucc": sentence_retrieval_lib.BuccProcessor, - "tatoeba": sentence_retrieval_lib.TatoebaProcessor, - } - - task_name = FLAGS.retrieval_task_name.lower() - if task_name not in processors: - raise ValueError("Task not found: %s" % task_name) - - processor = processors[task_name](process_text_fn=processor_text_fn) - - return sentence_retrieval_lib.generate_sentence_retrevial_tf_record( - processor, - FLAGS.input_data_dir, - tokenizer, - FLAGS.eval_data_output_path, - FLAGS.test_data_output_path, - 
FLAGS.max_seq_length) - - -def main(_): - if FLAGS.tokenizer_impl == "word_piece": - if not FLAGS.vocab_file: - raise ValueError( - "FLAG vocab_file for word-piece tokenizer is not specified.") - else: - assert FLAGS.tokenizer_impl == "sentence_piece" - if not FLAGS.sp_model_file: - raise ValueError( - "FLAG sp_model_file for sentence-piece tokenizer is not specified.") - - if FLAGS.fine_tuning_task_type != "retrieval": - flags.mark_flag_as_required("train_data_output_path") - - if FLAGS.fine_tuning_task_type == "classification": - input_meta_data = generate_classifier_dataset() - elif FLAGS.fine_tuning_task_type == "regression": - input_meta_data = generate_regression_dataset() - elif FLAGS.fine_tuning_task_type == "retrieval": - input_meta_data = generate_retrieval_dataset() - else: - input_meta_data = generate_squad_dataset() - - tf.io.gfile.makedirs(os.path.dirname(FLAGS.meta_data_file_path)) - with tf.io.gfile.GFile(FLAGS.meta_data_file_path, "w") as writer: - writer.write(json.dumps(input_meta_data, indent=4) + "\n") - - -if __name__ == "__main__": - flags.mark_flag_as_required("meta_data_file_path") - app.run(main) diff --git a/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/inputs.py b/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/inputs.py deleted file mode 100644 index 48a523d8d489ec03a10f68847fd263cc1641e678..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/inputs.py +++ /dev/null @@ -1,342 +0,0 @@ -# Copyright 2017 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Input utils for virtual adversarial text classification.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import os - -# Dependency imports - -import tensorflow as tf - -from data import data_utils - - -class VatxtInput(object): - """Wrapper around NextQueuedSequenceBatch.""" - - def __init__(self, - batch, - state_name=None, - tokens=None, - num_states=0, - eos_id=None): - """Construct VatxtInput. - - Args: - batch: NextQueuedSequenceBatch. - state_name: str, name of state to fetch and save. - tokens: int Tensor, tokens. Defaults to batch's F_TOKEN_ID sequence. - num_states: int The number of states to store. - eos_id: int Id of end of Sequence. 
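-      Saved LSTM tuple states are exposed via `state` and written back with
-      `save_state`, so hidden state carries across unrolled segments during
-      truncated backpropagation through time.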
- """ - self._batch = batch - self._state_name = state_name - self._tokens = (tokens if tokens is not None else - batch.sequences[data_utils.SequenceWrapper.F_TOKEN_ID]) - self._num_states = num_states - - w = batch.sequences[data_utils.SequenceWrapper.F_WEIGHT] - self._weights = w - - l = batch.sequences[data_utils.SequenceWrapper.F_LABEL] - self._labels = l - - # eos weights - self._eos_weights = None - if eos_id: - ew = tf.cast(tf.equal(self._tokens, eos_id), tf.float32) - self._eos_weights = ew - - @property - def tokens(self): - return self._tokens - - @property - def weights(self): - return self._weights - - @property - def eos_weights(self): - return self._eos_weights - - @property - def labels(self): - return self._labels - - @property - def length(self): - return self._batch.length - - @property - def state_name(self): - return self._state_name - - @property - def state(self): - # LSTM tuple states - state_names = _get_tuple_state_names(self._num_states, self._state_name) - return tuple([ - tf.contrib.rnn.LSTMStateTuple( - self._batch.state(c_name), self._batch.state(h_name)) - for c_name, h_name in state_names - ]) - - def save_state(self, value): - # LSTM tuple states - state_names = _get_tuple_state_names(self._num_states, self._state_name) - save_ops = [] - for (c_state, h_state), (c_name, h_name) in zip(value, state_names): - save_ops.append(self._batch.save_state(c_name, c_state)) - save_ops.append(self._batch.save_state(h_name, h_state)) - return tf.group(*save_ops) - - -def _get_tuple_state_names(num_states, base_name): - """Returns state names for use with LSTM tuple state.""" - state_names = [('{}_{}_c'.format(i, base_name), '{}_{}_h'.format( - i, base_name)) for i in range(num_states)] - return state_names - - -def _split_bidir_tokens(batch): - tokens = batch.sequences[data_utils.SequenceWrapper.F_TOKEN_ID] - # Tokens have shape [batch, time, 2] - # forward and reverse have shape [batch, time]. - forward, reverse = [ - tf.squeeze(t, axis=[2]) for t in tf.split(tokens, 2, axis=2) - ] - return forward, reverse - - -def _filenames_for_data_spec(phase, bidir, pretrain, use_seq2seq): - """Returns input filenames for configuration. - - Args: - phase: str, 'train', 'test', or 'valid'. - bidir: bool, bidirectional model. - pretrain: bool, pretraining or classification. - use_seq2seq: bool, seq2seq data, only valid if pretrain=True. - - Returns: - Tuple of filenames. - - Raises: - ValueError: if an invalid combination of arguments is provided that does not - map to any data files (e.g. pretrain=False, use_seq2seq=True). 
- """ - data_spec = (phase, bidir, pretrain, use_seq2seq) - data_specs = { - ('train', True, True, False): (data_utils.TRAIN_LM, - data_utils.TRAIN_REV_LM), - ('train', True, False, False): (data_utils.TRAIN_BD_CLASS,), - ('train', False, True, False): (data_utils.TRAIN_LM,), - ('train', False, True, True): (data_utils.TRAIN_SA,), - ('train', False, False, False): (data_utils.TRAIN_CLASS,), - ('test', True, True, False): (data_utils.TEST_LM, - data_utils.TRAIN_REV_LM), - ('test', True, False, False): (data_utils.TEST_BD_CLASS,), - ('test', False, True, False): (data_utils.TEST_LM,), - ('test', False, True, True): (data_utils.TEST_SA,), - ('test', False, False, False): (data_utils.TEST_CLASS,), - ('valid', True, False, False): (data_utils.VALID_BD_CLASS,), - ('valid', False, False, False): (data_utils.VALID_CLASS,), - } - if data_spec not in data_specs: - raise ValueError( - 'Data specification (phase, bidir, pretrain, use_seq2seq) %s not ' - 'supported' % str(data_spec)) - - return data_specs[data_spec] - - -def _read_single_sequence_example(file_list, tokens_shape=None): - """Reads and parses SequenceExamples from TFRecord-encoded file_list.""" - tf.logging.info('Constructing TFRecordReader from files: %s', file_list) - file_queue = tf.train.string_input_producer(file_list) - reader = tf.TFRecordReader() - seq_key, serialized_record = reader.read(file_queue) - ctx, sequence = tf.parse_single_sequence_example( - serialized_record, - sequence_features={ - data_utils.SequenceWrapper.F_TOKEN_ID: - tf.FixedLenSequenceFeature(tokens_shape or [], dtype=tf.int64), - data_utils.SequenceWrapper.F_LABEL: - tf.FixedLenSequenceFeature([], dtype=tf.int64), - data_utils.SequenceWrapper.F_WEIGHT: - tf.FixedLenSequenceFeature([], dtype=tf.float32), - }) - return seq_key, ctx, sequence - - -def _read_and_batch(data_dir, - fname, - state_name, - state_size, - num_layers, - unroll_steps, - batch_size, - bidir_input=False): - """Inputs for text model. - - Args: - data_dir: str, directory containing TFRecord files of SequenceExample. - fname: str, input file name. - state_name: string, key for saved state of LSTM. - state_size: int, size of LSTM state. - num_layers: int, the number of layers in the LSTM. - unroll_steps: int, number of timesteps to unroll for TBTT. - batch_size: int, batch size. - bidir_input: bool, whether the input is bidirectional. If True, creates 2 - states, state_name and state_name + '_reverse'. - - Returns: - Instance of NextQueuedSequenceBatch - - Raises: - ValueError: if file for input specification is not found. - """ - data_path = os.path.join(data_dir, fname) - if not tf.gfile.Exists(data_path): - raise ValueError('Failed to find file: %s' % data_path) - - tokens_shape = [2] if bidir_input else [] - seq_key, ctx, sequence = _read_single_sequence_example( - [data_path], tokens_shape=tokens_shape) - # Set up stateful queue reader. 
- state_names = _get_tuple_state_names(num_layers, state_name) - initial_states = {} - for c_state, h_state in state_names: - initial_states[c_state] = tf.zeros(state_size) - initial_states[h_state] = tf.zeros(state_size) - if bidir_input: - rev_state_names = _get_tuple_state_names(num_layers, - '{}_reverse'.format(state_name)) - for rev_c_state, rev_h_state in rev_state_names: - initial_states[rev_c_state] = tf.zeros(state_size) - initial_states[rev_h_state] = tf.zeros(state_size) - batch = tf.contrib.training.batch_sequences_with_states( - input_key=seq_key, - input_sequences=sequence, - input_context=ctx, - input_length=tf.shape(sequence['token_id'])[0], - initial_states=initial_states, - num_unroll=unroll_steps, - batch_size=batch_size, - allow_small_batch=False, - num_threads=4, - capacity=batch_size * 10, - make_keys_unique=True, - make_keys_unique_seed=29392) - return batch - - -def inputs(data_dir=None, - phase='train', - bidir=False, - pretrain=False, - use_seq2seq=False, - state_name='lstm', - state_size=None, - num_layers=0, - batch_size=32, - unroll_steps=100, - eos_id=None): - """Inputs for text model. - - Args: - data_dir: str, directory containing TFRecord files of SequenceExample. - phase: str, dataset for evaluation {'train', 'valid', 'test'}. - bidir: bool, bidirectional LSTM. - pretrain: bool, whether to read pretraining data or classification data. - use_seq2seq: bool, whether to read seq2seq data or the language model data. - state_name: string, key for saved state of LSTM. - state_size: int, size of LSTM state. - num_layers: int, the number of LSTM layers. - batch_size: int, batch size. - unroll_steps: int, number of timesteps to unroll for TBTT. - eos_id: int, id of end of sequence. used for the kl weights on vat - Returns: - Instance of VatxtInput (x2 if bidir=True and pretrain=True, i.e. forward and - reverse). - """ - with tf.name_scope('inputs'): - filenames = _filenames_for_data_spec(phase, bidir, pretrain, use_seq2seq) - - if bidir and pretrain: - # Bidirectional pretraining - # Requires separate forward and reverse language model data. 
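-      # The forward and reverse streams read two different TFRecord files and
-      # keep independent saved LSTM states; the reverse stream is keyed under
-      # state_name + '_reverse' so the two never collide.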
- forward_fname, reverse_fname = filenames - forward_batch = _read_and_batch(data_dir, forward_fname, state_name, - state_size, num_layers, unroll_steps, - batch_size) - state_name_rev = state_name + '_reverse' - reverse_batch = _read_and_batch(data_dir, reverse_fname, state_name_rev, - state_size, num_layers, unroll_steps, - batch_size) - forward_input = VatxtInput( - forward_batch, - state_name=state_name, - num_states=num_layers, - eos_id=eos_id) - reverse_input = VatxtInput( - reverse_batch, - state_name=state_name_rev, - num_states=num_layers, - eos_id=eos_id) - return forward_input, reverse_input - - elif bidir: - # Classifier bidirectional LSTM - # Shared data source, but separate token/state streams - fname, = filenames - batch = _read_and_batch( - data_dir, - fname, - state_name, - state_size, - num_layers, - unroll_steps, - batch_size, - bidir_input=True) - forward_tokens, reverse_tokens = _split_bidir_tokens(batch) - forward_input = VatxtInput( - batch, - state_name=state_name, - tokens=forward_tokens, - num_states=num_layers) - reverse_input = VatxtInput( - batch, - state_name=state_name + '_reverse', - tokens=reverse_tokens, - num_states=num_layers) - return forward_input, reverse_input - else: - # Unidirectional LM or classifier - fname, = filenames - batch = _read_and_batch( - data_dir, - fname, - state_name, - state_size, - num_layers, - unroll_steps, - batch_size, - bidir_input=False) - return VatxtInput( - batch, state_name=state_name, num_states=num_layers, eos_id=eos_id) diff --git a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/misc.py b/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/misc.py deleted file mode 100644 index 07061d81c8aaafd4d97efc11ecca451528c6e9dd..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/misc.py +++ /dev/null @@ -1,149 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -"""Utilities specific to this project.""" - -from collections import namedtuple -from six import string_types - - -##################### -# BF-lang utilities # -##################### - - -BF_EOS_INT = 0 # Also used as SOS (start of sequence). -BF_EOS_CHAR = TEXT_EOS_CHAR = '_' -BF_LANG_INTS = range(1, 9) -BF_INT_TO_CHAR = [BF_EOS_CHAR, '>', '<', '+', '-', '[', ']', '.', ','] -BF_CHAR_TO_INT = dict([(c, i) for i, c in enumerate(BF_INT_TO_CHAR)]) - - -RewardInfo = namedtuple('RewardInfo', ['episode_rewards', 'input_case', - 'correct_output', - 'code_output', 'reason', 'input_type', - 'output_type']) - - -class IOType(object): - string = 'string' - integer = 'integer' - boolean = 'boolean' - - -class IOTuple(tuple): - pass - - -def flatten(lst): - return [item for row in lst for item in row] - - -def bf_num_tokens(): - # BF tokens plus EOS. - return len(BF_INT_TO_CHAR) - - -def bf_char2int(bf_char): - """Convert BF code char to int token.""" - return BF_CHAR_TO_INT[bf_char] - - -def bf_int2char(bf_int): - """Convert BF int token to code char.""" - return BF_INT_TO_CHAR[bf_int] - - -def bf_tokens_to_string(bf_tokens, truncate=True): - """Convert token list to code string. Will truncate at EOS token. - - Args: - bf_tokens: Python list of ints representing the code string. - truncate: If true, the output string will end at the first EOS token. - If false, the entire token list is converted to string. - - Returns: - String representation of the tokens. - - Raises: - ValueError: If bf_tokens is not a python list. 
- """ - if not isinstance(bf_tokens, list): - raise ValueError('Only python list supported here.') - if truncate: - try: - eos_index = bf_tokens.index(BF_EOS_INT) - except ValueError: - eos_index = len(bf_tokens) - else: - eos_index = len(bf_tokens) - return ''.join([BF_INT_TO_CHAR[t] for t in bf_tokens[:eos_index]]) - - -def bf_string_to_tokens(bf_string): - """Convert string to token list. Will strip and append EOS token.""" - tokens = [BF_CHAR_TO_INT[char] for char in bf_string.strip()] - tokens.append(BF_EOS_INT) - return tokens - - -def tokens_to_text(tokens): - """Convert token list to human readable text.""" - return ''.join( - [TEXT_EOS_CHAR if t == 0 else chr(t - 1 + ord('A')) for t in tokens]) - - -################################### -# Number representation utilities # -################################### - - -# https://en.wikipedia.org/wiki/Metric_prefix -si_magnitudes = { - 'k': 1e3, - 'm': 1e6, - 'g': 1e9} - - -def si_to_int(s): - """Convert string ending with SI magnitude to int. - - Examples: 5K ==> 5000, 12M ==> 12000000. - - Args: - s: String in the form 'xx..xP' where x is a digit and P is an SI prefix. - - Returns: - Integer equivalent to the string. - """ - if isinstance(s, string_types) and s[-1].lower() in si_magnitudes.keys(): - return int(int(s[:-1]) * si_magnitudes[s[-1].lower()]) - return int(s) - - -def int_to_si(n): - """Convert integer to string with SI magnitude. - - `n` will be truncated. - - Examples: 5432 ==> 5k, 12345678 ==> 12M - - Args: - n: Integer to represent as a string. - - Returns: - String representation of `n` containing SI magnitude. - """ - m = abs(n) - sign = -1 if n < 0 else 1 - if m < 1e3: - return str(n) - if m < 1e6: - return '{0}K'.format(sign*int(m / 1e3)) - if m < 1e9: - return '{0}M'.format(sign*int(m / 1e6)) - if m < 1e12: - return '{0}G'.format(sign*int(m / 1e9)) - return str(m) - diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/cfgs/config_distill.py b/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/cfgs/config_distill.py deleted file mode 100644 index 53be2f8a5f12ee701a53c1c354079659da6958d4..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/cfgs/config_distill.py +++ /dev/null @@ -1,114 +0,0 @@ -# Copyright 2016 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== - -import pprint -import copy -import os -from tensorflow.python.platform import app -from tensorflow.python.platform import flags -import logging -import src.utils as utils -import cfgs.config_common as cc - - -import tensorflow as tf - -rgb_resnet_v2_50_path = 'cache/resnet_v2_50_inception_preprocessed/model.ckpt-5136169' - -def get_default_args(): - robot = utils.Foo(radius=15, base=10, height=140, sensor_height=120, - camera_elevation_degree=-15) - - camera_param = utils.Foo(width=225, height=225, z_near=0.05, z_far=20.0, - fov=60., modalities=['rgb', 'depth']) - - env = utils.Foo(padding=10, resolution=5, num_point_threshold=2, - valid_min=-10, valid_max=200, n_samples_per_face=200) - - data_augment = utils.Foo(lr_flip=0, delta_angle=1, delta_xy=4, relight=False, - relight_fast=False, structured=False) - - task_params = utils.Foo(num_actions=4, step_size=4, num_steps=0, - batch_size=32, room_seed=0, base_class='Building', - task='mapping', n_ori=6, data_augment=data_augment, - output_transform_to_global_map=False, - output_canonical_map=False, - output_incremental_transform=False, - output_free_space=False, move_type='shortest_path', - toy_problem=0) - - buildinger_args = utils.Foo(building_names=['area1_gates_wingA_floor1_westpart'], - env_class=None, robot=robot, - task_params=task_params, env=env, - camera_param=camera_param) - - solver_args = utils.Foo(seed=0, learning_rate_decay=0.1, - clip_gradient_norm=0, max_steps=120000, - initial_learning_rate=0.001, momentum=0.99, - steps_per_decay=40000, logdir=None, sync=False, - adjust_lr_sync=True, wt_decay=0.0001, - data_loss_wt=1.0, reg_loss_wt=1.0, - num_workers=1, task=0, ps_tasks=0, master='local') - - summary_args = utils.Foo(display_interval=1, test_iters=100) - - control_args = utils.Foo(train=False, test=False, - force_batchnorm_is_training_at_test=False) - - arch_args = utils.Foo(rgb_encoder='resnet_v2_50', d_encoder='resnet_v2_50') - - return utils.Foo(solver=solver_args, - summary=summary_args, control=control_args, arch=arch_args, - buildinger=buildinger_args) - -def get_vars(config_name): - vars = config_name.split('_') - if len(vars) == 1: # All data or not. 
- vars.append('noall') - if len(vars) == 2: # n_ori - vars.append('4') - logging.error('vars: %s', vars) - return vars - -def get_args_for_config(config_name): - args = get_default_args() - config_name, mode = config_name.split('+') - vars = get_vars(config_name) - - logging.info('config_name: %s, mode: %s', config_name, mode) - - args.buildinger.task_params.n_ori = int(vars[2]) - args.solver.freeze_conv = True - args.solver.pretrained_path = rgb_resnet_v2_50_path - args.buildinger.task_params.img_channels = 5 - args.solver.data_loss_wt = 0.00001 - - if vars[0] == 'v0': - None - else: - logging.error('config_name: %s undefined', config_name) - - args.buildinger.task_params.height = args.buildinger.camera_param.height - args.buildinger.task_params.width = args.buildinger.camera_param.width - args.buildinger.task_params.modalities = args.buildinger.camera_param.modalities - - if vars[1] == 'all': - args = cc.get_args_for_mode_building_all(args, mode) - elif vars[1] == 'noall': - args = cc.get_args_for_mode_building(args, mode) - - # Log the arguments - logging.error('%s', args) - return args diff --git a/spaces/NN520/AI/src/app/loading.css b/spaces/NN520/AI/src/app/loading.css deleted file mode 100644 index eaaab6a86a228334c4eca3c5368ae6f0f593d405..0000000000000000000000000000000000000000 --- a/spaces/NN520/AI/src/app/loading.css +++ /dev/null @@ -1,68 +0,0 @@ -::-webkit-scrollbar { - width: 10px; - height: 10px; - display: none; -} - -::-webkit-scrollbar-button:start:decrement, -::-webkit-scrollbar-button:end:increment { - height: 30px; - background-color: transparent; -} - -::-webkit-scrollbar-track-piece { - background-color: #3b3b3b; - -webkit-border-radius: 16px; -} - -::-webkit-scrollbar-thumb:vertical { - height: 50px; - background-color: #666; - border: 1px solid #eee; - -webkit-border-radius: 6px; -} - -/* loading start */ -.loading-spinner { - display: flex; - justify-content: center; - align-items: center; - height: 100vh; - opacity: 1; - transition: opacity .8s ease-out; -} - -.loading-spinner.hidden { - opacity: 0; -} - -.loading-spinner>div { - width: 30px; - height: 30px; - background: linear-gradient(90deg, #2870EA 10.79%, #1B4AEF 87.08%); - - border-radius: 100%; - display: inline-block; - animation: sk-bouncedelay 1.4s infinite ease-in-out both; -} - -.loading-spinner .bounce1 { - animation-delay: -0.32s; -} - -.loading-spinner .bounce2 { - animation-delay: -0.16s; -} - -@keyframes sk-bouncedelay { - - 0%, - 80%, - 100% { - transform: scale(0); - } - - 40% { - transform: scale(1.0); - } -} diff --git a/spaces/NoCrypt/DeepDanbooru_string/app.py b/spaces/NoCrypt/DeepDanbooru_string/app.py deleted file mode 100644 index f29cdc77acb0ec46bcc3c5d5454730fa1c765e8a..0000000000000000000000000000000000000000 --- a/spaces/NoCrypt/DeepDanbooru_string/app.py +++ /dev/null @@ -1,185 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import argparse -import functools -import os -import html -import pathlib -import tarfile - -import deepdanbooru as dd -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image -import tensorflow as tf -import piexif -import piexif.helper - -TITLE = 'DeepDanbooru String' - -TOKEN = os.environ['TOKEN'] -MODEL_REPO = 'NoCrypt/DeepDanbooru_string' -MODEL_FILENAME = 'model-resnet_custom_v3.h5' -LABEL_FILENAME = 'tags.txt' - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--score-slider-step', type=float, default=0.05) - parser.add_argument('--score-threshold', 
type=float, default=0.5) - parser.add_argument('--theme', type=str, default='dark-grass') - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - return parser.parse_args() - - -def load_sample_image_paths() -> list[pathlib.Path]: - image_dir = pathlib.Path('images') - if not image_dir.exists(): - dataset_repo = 'hysts/sample-images-TADNE' - path = huggingface_hub.hf_hub_download(dataset_repo, - 'images.tar.gz', - repo_type='dataset', - use_auth_token=TOKEN) - with tarfile.open(path) as f: - f.extractall() - return sorted(image_dir.glob('*')) - - -def load_model() -> tf.keras.Model: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - MODEL_FILENAME, - use_auth_token=TOKEN) - model = tf.keras.models.load_model(path) - return model - - -def load_labels() -> list[str]: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - LABEL_FILENAME, - use_auth_token=TOKEN) - with open(path) as f: - labels = [line.strip() for line in f.readlines()] - return labels - -def plaintext_to_html(text): - text = "
<p>" + "<br>\n".join([f"{html.escape(x)}" for x in text.split('\n')]) + "</p>
      " - return text - -def predict(image: PIL.Image.Image, score_threshold: float, - model: tf.keras.Model, labels: list[str]) -> dict[str, float]: - rawimage = image - _, height, width, _ = model.input_shape - image = np.asarray(image) - image = tf.image.resize(image, - size=(height, width), - method=tf.image.ResizeMethod.AREA, - preserve_aspect_ratio=True) - image = image.numpy() - image = dd.image.transform_and_pad_image(image, width, height) - image = image / 255. - probs = model.predict(image[None, ...])[0] - probs = probs.astype(float) - res = dict() - for prob, label in zip(probs.tolist(), labels): - if prob < score_threshold: - continue - res[label] = prob - b = dict(sorted(res.items(),key=lambda item:item[1], reverse=True)) - a = ', '.join(list(b.keys())).replace('_',' ').replace('(','\(').replace(')','\)') - c = ', '.join(list(b.keys())) - - items = rawimage.info - geninfo = '' - - if "exif" in rawimage.info: - exif = piexif.load(rawimage.info["exif"]) - exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b'') - try: - exif_comment = piexif.helper.UserComment.load(exif_comment) - except ValueError: - exif_comment = exif_comment.decode('utf8', errors="ignore") - - items['exif comment'] = exif_comment - geninfo = exif_comment - - for field in ['jfif', 'jfif_version', 'jfif_unit', 'jfif_density', 'dpi', 'exif', - 'loop', 'background', 'timestamp', 'duration']: - items.pop(field, None) - - geninfo = items.get('parameters', geninfo) - - info = f""" -
<p><b>PNG Info</b></p>
-"""
-    for key, text in items.items():
-        info += f"""
-<div>
-<p><b>{plaintext_to_html(str(key))}</b></p>
-<p>{plaintext_to_html(str(text))}</p>
-</div>
      -""".strip()+"\n" - - if len(info) == 0: - message = "Nothing found in the image." - info = f"
<div><p>{message}</p></div>
      " - - return (a,c,res,info) - - -def main(): - args = parse_args() - model = load_model() - labels = load_labels() - - func = functools.partial(predict, model=model, labels=labels) - func = functools.update_wrapper(func, predict) - - gr.Interface( - func, - [ - gr.inputs.Image(type='pil', label='Input'), - gr.inputs.Slider(0, - 1, - step=args.score_slider_step, - default=args.score_threshold, - label='Score Threshold'), - ], - [ - gr.outputs.Textbox(label='Output (string)'), - gr.outputs.Textbox(label='Output (raw string)'), - gr.outputs.Label(label='Output (label)'), - gr.outputs.HTML() - ], - examples=[ - ['miku.jpg',0.5], - ['miku2.jpg',0.5] - ], - title=TITLE, - description=''' -Demo for [KichangKim/DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) with "ready to copy" prompt and a prompt analyzer. - -Modified from [hysts/DeepDanbooru](https://huggingface.co/spaces/hysts/DeepDanbooru) - -PNG Info code forked from [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - ''', - theme=args.theme, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/average_checkpoints.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/average_checkpoints.py deleted file mode 100644 index c512f802bce6b3395cc42a0e4eb39181e9f8c873..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/average_checkpoints.py +++ /dev/null @@ -1,158 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import collections -import os -import re - -import torch -from fairseq.file_io import PathManager - - -def average_checkpoints(inputs): - """Loads checkpoints from inputs and returns a model with averaged weights. - - Args: - inputs: An iterable of string paths of checkpoints to load from. - - Returns: - A dict of string keys mapping to various values. The 'model' key - from the returned dict should correspond to an OrderedDict mapping - string parameter names to torch Tensors. 
- """ - params_dict = collections.OrderedDict() - params_keys = None - new_state = None - num_models = len(inputs) - - for fpath in inputs: - with PathManager.open(fpath, "rb") as f: - state = torch.load( - f, - map_location=( - lambda s, _: torch.serialization.default_restore_location(s, "cpu") - ), - ) - # Copies over the settings from the first checkpoint - if new_state is None: - new_state = state - - model_params = state["model"] - - model_params_keys = list(model_params.keys()) - if params_keys is None: - params_keys = model_params_keys - elif params_keys != model_params_keys: - raise KeyError( - "For checkpoint {}, expected list of params: {}, " - "but found: {}".format(f, params_keys, model_params_keys) - ) - - for k in params_keys: - p = model_params[k] - if isinstance(p, torch.HalfTensor): - p = p.float() - if k not in params_dict: - params_dict[k] = p.clone() - # NOTE: clone() is needed in case of p is a shared parameter - else: - params_dict[k] += p - - averaged_params = collections.OrderedDict() - for k, v in params_dict.items(): - averaged_params[k] = v - if averaged_params[k].is_floating_point(): - averaged_params[k].div_(num_models) - else: - averaged_params[k] //= num_models - new_state["model"] = averaged_params - return new_state - - -def last_n_checkpoints(paths, n, update_based, upper_bound=None): - assert len(paths) == 1 - path = paths[0] - if update_based: - pt_regexp = re.compile(r"checkpoint_\d+_(\d+)\.pt") - else: - pt_regexp = re.compile(r"checkpoint(\d+)\.pt") - files = PathManager.ls(path) - - entries = [] - for f in files: - m = pt_regexp.fullmatch(f) - if m is not None: - sort_key = int(m.group(1)) - if upper_bound is None or sort_key <= upper_bound: - entries.append((sort_key, m.group(0))) - if len(entries) < n: - raise Exception( - "Found {} checkpoint files but need at least {}", len(entries), n - ) - return [os.path.join(path, x[1]) for x in sorted(entries, reverse=True)[:n]] - - -def main(): - parser = argparse.ArgumentParser( - description="Tool to average the params of input checkpoints to " - "produce a new checkpoint", - ) - # fmt: off - parser.add_argument('--inputs', required=True, nargs='+', - help='Input checkpoint file paths.') - parser.add_argument('--output', required=True, metavar='FILE', - help='Write the new checkpoint containing the averaged weights to this path.') - num_group = parser.add_mutually_exclusive_group() - num_group.add_argument('--num-epoch-checkpoints', type=int, - help='if set, will try to find checkpoints with names checkpoint_xx.pt in the path specified by input, ' - 'and average last this many of them.') - num_group.add_argument('--num-update-checkpoints', type=int, - help='if set, will try to find checkpoints with names checkpoint_ee_xx.pt in the path specified by input, ' - 'and average last this many of them.') - parser.add_argument('--checkpoint-upper-bound', type=int, - help='when using --num-epoch-checkpoints, this will set an upper bound on which epoch to use, ' - 'when using --num-update-checkpoints, this will set an upper bound on which update to use' - 'e.g., with --num-epoch-checkpoints=10 --checkpoint-upper-bound=50, checkpoints 41-50 would be averaged.' 
- 'e.g., with --num-update-checkpoints=10 --checkpoint-upper-bound=50000, checkpoints 40500-50000 would be averaged assuming --save-interval-updates 500' - ) - # fmt: on - args = parser.parse_args() - print(args) - - num = None - is_update_based = False - if args.num_update_checkpoints is not None: - num = args.num_update_checkpoints - is_update_based = True - elif args.num_epoch_checkpoints is not None: - num = args.num_epoch_checkpoints - - assert args.checkpoint_upper_bound is None or ( - args.num_epoch_checkpoints is not None - or args.num_update_checkpoints is not None - ), "--checkpoint-upper-bound requires --num-epoch-checkpoints or --num-update-checkpoints" - assert ( - args.num_epoch_checkpoints is None or args.num_update_checkpoints is None - ), "Cannot combine --num-epoch-checkpoints and --num-update-checkpoints" - - if num is not None: - args.inputs = last_n_checkpoints( - args.inputs, - num, - is_update_based, - upper_bound=args.checkpoint_upper_bound, - ) - print("averaging checkpoints: ", args.inputs) - - new_state = average_checkpoints(args.inputs) - with PathManager.open(args.output, "wb") as f: - torch.save(new_state, f) - print("Finished writing averaged checkpoint to {}".format(args.output)) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_character_token_embedder.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_character_token_embedder.py deleted file mode 100644 index 24940ebd21a0e4465ca6052409353a3179e9cf6d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_character_token_embedder.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
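A hedged usage sketch for the average_checkpoints.py script above: average_checkpoints takes explicit checkpoint paths and returns the merged state dict, which main() then writes to disk. The import path and file names below are illustrative assumptions, not part of the original script:

    import torch
    # assumption: the script above is importable as a module on the Python path
    from average_checkpoints import average_checkpoints

    # average the last three epoch checkpoints of a run
    inputs = [f"/path/to/run/checkpoint{i}.pt" for i in (48, 49, 50)]
    new_state = average_checkpoints(inputs)  # params averaged element-wise
    torch.save(new_state, "/path/to/run/averaged.pt")

Equivalently, the CLI in main() mirrors its own help text: passing --num-epoch-checkpoints 10 with --checkpoint-upper-bound 50 averages checkpoints 41-50 found under the directory given to --inputs.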
- -import unittest - -import torch -from fairseq.data import Dictionary -from fairseq.modules import CharacterTokenEmbedder - - -class TestCharacterTokenEmbedder(unittest.TestCase): - def test_character_token_embedder(self): - vocab = Dictionary() - vocab.add_symbol("hello") - vocab.add_symbol("there") - - embedder = CharacterTokenEmbedder( - vocab, [(2, 16), (4, 32), (8, 64), (16, 2)], 64, 5, 2 - ) - - test_sents = [["hello", "unk", "there"], ["there"], ["hello", "there"]] - max_len = max(len(s) for s in test_sents) - input = torch.LongTensor(len(test_sents), max_len + 2).fill_(vocab.pad()) - for i in range(len(test_sents)): - input[i][0] = vocab.eos() - for j in range(len(test_sents[i])): - input[i][j + 1] = vocab.index(test_sents[i][j]) - input[i][j + 2] = vocab.eos() - embs = embedder(input) - - assert embs.size() == (len(test_sents), max_len + 2, 5) - self.assertAlmostEqual(embs[0][0], embs[1][0]) - self.assertAlmostEqual(embs[0][0], embs[0][-1]) - self.assertAlmostEqual(embs[0][1], embs[2][1]) - self.assertAlmostEqual(embs[0][3], embs[1][1]) - - embs.sum().backward() - assert embedder.char_embeddings.weight.grad is not None - - def assertAlmostEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertLess((t1 - t2).abs().max(), 1e-6) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_metrics.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_metrics.py deleted file mode 100644 index 2de6969cf4445bc6cda44dacf6de765ea30d5f5b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_metrics.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import unittest -import uuid - -from fairseq import metrics - - -class TestMetrics(unittest.TestCase): - def test_nesting(self): - with metrics.aggregate() as a: - metrics.log_scalar("loss", 1) - with metrics.aggregate() as b: - metrics.log_scalar("loss", 2) - - self.assertEqual(a.get_smoothed_values()["loss"], 1.5) - self.assertEqual(b.get_smoothed_values()["loss"], 2) - - def test_new_root(self): - with metrics.aggregate() as a: - metrics.log_scalar("loss", 1) - with metrics.aggregate(new_root=True) as b: - metrics.log_scalar("loss", 2) - - self.assertEqual(a.get_smoothed_values()["loss"], 1) - self.assertEqual(b.get_smoothed_values()["loss"], 2) - - def test_nested_new_root(self): - with metrics.aggregate() as layer1: - metrics.log_scalar("loss", 1) - with metrics.aggregate(new_root=True) as layer2: - metrics.log_scalar("loss", 2) - with metrics.aggregate() as layer3: - metrics.log_scalar("loss", 3) - with metrics.aggregate(new_root=True) as layer4: - metrics.log_scalar("loss", 4) - metrics.log_scalar("loss", 1.5) - - self.assertEqual(layer4.get_smoothed_values()["loss"], 4) - self.assertEqual(layer3.get_smoothed_values()["loss"], 3) - self.assertEqual(layer2.get_smoothed_values()["loss"], 2.5) - self.assertEqual(layer1.get_smoothed_values()["loss"], 1.25) - - def test_named(self): - name = str(uuid.uuid4()) - metrics.reset_meters(name) - - with metrics.aggregate(name): - metrics.log_scalar("loss", 1) - - metrics.log_scalar("loss", 3) - - with metrics.aggregate(name): - metrics.log_scalar("loss", 2) - - self.assertEqual(metrics.get_smoothed_values(name)["loss"], 1.5) - - def test_nested_duplicate_names(self): - name = str(uuid.uuid4()) - metrics.reset_meters(name) - - with metrics.aggregate(name): - metrics.log_scalar("loss", 1) - with metrics.aggregate() as other: - with metrics.aggregate(name): - metrics.log_scalar("loss", 2) - metrics.log_scalar("loss", 6) - - self.assertEqual(metrics.get_smoothed_values(name)["loss"], 3) - self.assertEqual(other.get_smoothed_values()["loss"], 2) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/layer_norm.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/layer_norm.py deleted file mode 100644 index 234609d9e213a650e0032aaa0ca0462a818bfead..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/layer_norm.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch -import torch.nn as nn -import torch.nn.functional as F - - -try: - from apex.normalization import FusedLayerNorm as _FusedLayerNorm - - has_fused_layernorm = True - - class FusedLayerNorm(_FusedLayerNorm): - @torch.jit.unused - def forward(self, x): - if not x.is_cuda: - return super().forward(x) - else: - with torch.cuda.device(x.device): - return super().forward(x) - - -except ImportError: - has_fused_layernorm = False - - -def LayerNorm(normalized_shape, eps=1e-5, elementwise_affine=True, export=False): - if torch.jit.is_scripting(): - export = True - if not export and torch.cuda.is_available() and has_fused_layernorm: - return FusedLayerNorm(normalized_shape, eps, elementwise_affine) - return torch.nn.LayerNorm(normalized_shape, eps, elementwise_affine) - - -class Fp32LayerNorm(nn.LayerNorm): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def forward(self, input): - output = F.layer_norm( - input.float(), - self.normalized_shape, - self.weight.float() if self.weight is not None else None, - self.bias.float() if self.bias is not None else None, - self.eps, - ) - return output.type_as(input) diff --git a/spaces/PaddlePaddle/ERNIE-Layout/app.py b/spaces/PaddlePaddle/ERNIE-Layout/app.py deleted file mode 100644 index 0bf5271b7eb60ffdb5f6a59a1dd09ea38a7984fe..0000000000000000000000000000000000000000 --- a/spaces/PaddlePaddle/ERNIE-Layout/app.py +++ /dev/null @@ -1,522 +0,0 @@ -#-*- coding: UTF-8 -*- -# Copyright 2022 The Impira Team and the HuggingFace Team. -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
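A minimal usage sketch for the helpers in the layer_norm.py diff above, assuming the standard fairseq import path and illustrative shapes:

    import torch
    from fairseq.modules import LayerNorm, Fp32LayerNorm

    x = torch.randn(8, 128, 512)  # (batch, time, channels)

    # The factory returns apex FusedLayerNorm only when export mode is off,
    # CUDA is available, and apex is installed; otherwise torch.nn.LayerNorm.
    ln = LayerNorm(512)
    y = ln(x)

    # Fp32LayerNorm normalizes in float32 and casts the result back to the
    # input dtype, a common trick for mixed-precision stability.
    y2 = Fp32LayerNorm(512)(x)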
- -import os -import json -import base64 -from io import BytesIO -from PIL import Image -import traceback - -import requests -import numpy as np -import gradio as gr -import pdf2image -import fitz -import cv2 - -fitz_tools = fitz.Tools() - - -def pdf2img(stream, pagenos, dpi=300, thread_count=3, height=1600): - images = [] - cimages = pdf2image.convert_from_bytes( - stream, dpi=dpi, thread_count=thread_count, first_page=pagenos[0] + 1, last_page=pagenos[-1] + 1, - size=height) - for _image in cimages: - image = np.array(_image) - image = image[..., ::-1] - images.append(image) - return images - - -class PdfReader(object): - """pdf reader""" - def __init__(self, - stream: bytes, - image_height: int = 1600): - self.stream = stream - self._image_height = image_height - self._dpi = 200 - self._inpdf = self.load_file(stream) - - @staticmethod - def load_file(stream): - """load document""" - try: - inpdf = fitz.Document(stream=stream, filetype="pdf") - except Exception as e: - print(f"[PDF_READER]-[Failed to load the file]-[{repr(e)}]") - return inpdf - - @staticmethod - def _convert_page_obj_to_image(page_obj, image_height: int = None): - """fitz convert pdf to image - - Args: - page_obj ([type]): [description] - ratio ([type]): [description] - - Returns: - [type]: [description] - """ - if image_height: - _, page_height = page_obj.rect.x1 - \ - page_obj.rect.x0, page_obj.rect.y1 - page_obj.rect.y0 - ratio = image_height / page_height - else: - ratio = 1.0 - trans = fitz.Matrix(ratio, ratio) - pixmap = page_obj.get_pixmap(matrix=trans, alpha=False) - image = cv2.imdecode(np.frombuffer(pixmap.tobytes(), np.uint8), -1) - fitz_tools.store_shrink(100) - return image - - def get_page_image(self, - pageno): - """get page image - - Args: - pageno ([type]): [description] - - Returns: - [type]: [description] - """ - try: - page_obj = self._inpdf[pageno] - return self._convert_page_obj_to_image(page_obj, self._image_height) - except Exception as e: - print(f"[Failed to convert the PDF to images]-[{repr(e)}]") - try: - return pdf2img(stream=self.stream, - pagenos=[pageno], - height=self._image_height, - dpi=self._dpi)[0] - except Exception as e: - print(f"[Failed to convert the PDF to images]-[{repr(e)}]") - return None - - -examples = [ - [ - "budget_form.png", - "What is the total actual and/or obligated expenses of ECG Center?" - ], - [ - "poster.png", - "Which gift idea needs a printer?" - ], - [ - "receipt.png", - "เบอร์โทรร้านอะไรคะ?" - ], - [ - "medical_bill_2.jpg", - "患者さんは何でお金を払いますか。" - ], - [ - "resume.png", - "五百丁本次想要担任的是什么职位?", - ], - [ - "custom_declaration_form.png", - "在哪个口岸进口?" 
- ], - [ - "invoice.jpg", - "发票号码是多少?", - ], -] - -prompt_files = { - "发票号码是多少?": "invoice.jpg", - "五百丁本次想要担任的是什么职位?": "resume.png", - "在哪个口岸进口?": "custom_declaration_form.png", - "What is the total actual and/or obligated expenses of ECG Center?": "budget_form.png", - "Which gift idea needs a printer?": "poster.png", - "患者さんは何でお金を払いますか。": "medical_bill_2.jpg", - "เบอร์โทรร้านอะไรคะ?": "receipt.png", -} - -lang_map = { - "invoice.jpg": "ch", - "resume.png": "ch", - "custom_declaration_form.png": "ch", - "medical_bill_1.png": "ch", - "budget_form.png": "en", - "website_design_guide.jpeg": "en", - "poster.png": "en", - "medical_bill_2.jpg": "ch", - "receipt.png": "en" -} - - -def load_document(path): - if path.startswith("http://") or path.startswith("https://"): - resp = requests.get(path, allow_redirects=True, stream=True) - b = resp.raw - else: - b = open(path, "rb") - - if path.endswith(".pdf"): - images_list = [] - pdfreader = PdfReader(stream=b.read()) - for p_no in range(0, pdfreader._inpdf.page_count): - img_np = pdfreader.get_page_image(pageno=p_no) - images_list.append(img_np) - else: - image = Image.open(b) - images_list = [np.array(image.convert("RGB"))] - return images_list - -def process_path(path): - error = None - if path: - try: - images_list = load_document(path) - return ( - path, - gr.update(visible=True, value=images_list), - gr.update(visible=True), - gr.update(visible=False, value=None), - gr.update(visible=False, value=None), - None, - ) - except Exception as e: - traceback.print_exc() - error = str(e) - return ( - None, - gr.update(visible=False, value=None), - gr.update(visible=False), - gr.update(visible=False, value=None), - gr.update(visible=False, value=None), - gr.update(visible=True, value=error) if error is not None else None, - None, - ) - - -def process_upload(file): - if file: - return process_path(file.name) - else: - return ( - None, - gr.update(visible=False, value=None), - gr.update(visible=False), - gr.update(visible=False, value=None), - gr.update(visible=False, value=None), - None, - ) - - -def np2base64(image_np): - image = cv2.imencode('.jpg', image_np)[1] - base64_str = str(base64.b64encode(image))[2:-1] - return base64_str - - -def get_base64(path): - if path.startswith("http://") or path.startswith("https://"): - resp = requests.get(path, allow_redirects=True, stream=True) - b = resp.raw - else: - b = open(path, "rb") - - if path.endswith(".pdf"): - images_list = [] - pdfreader = PdfReader(stream=b.read()) - for p_no in range(0, min(pdfreader._inpdf.page_count, 1)): - img_np = pdfreader.get_page_image(pageno=p_no) - images_list.append(img_np) - base64_str = np2base64(images_list[0]) - else: - base64_str = base64.b64encode(b.read()).decode() - return base64_str - - -def process_prompt(prompt, document, lang="ch", model="docprompt_v1"): - if not prompt: - prompt = "What is the total actual and/or obligated expenses of ECG Center?" 
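-    # The request below posts the base64-encoded document and the prompt to
-    # the remote DocPrompt endpoint; the access token is read from the
-    # environment rather than hard-coded.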
- if document is None: - return None, None, None - - access_token = os.environ['token'] - url = f"https://aip.baidubce.com/rpc/2.0/nlp-itec/poc/docprompt?access_token={access_token}" - - base64_str = get_base64(document) - - r = requests.post(url, json={"doc": base64_str, "prompt": [prompt], "lang": lang, "model": model}) - response = r.json() - predictions = response['result'] - img_list = response['image'] - pages = [Image.open(BytesIO(base64.b64decode(img))) for img in img_list] - - text_value = predictions[0]['result'][0]['value'] - - return ( - gr.update(visible=True, value=pages), - gr.update(visible=True, value=predictions), - gr.update( - visible=True, - value=text_value, - ), - ) - - -def load_example_document(img, prompt): - if img is not None: - document = prompt_files[prompt] - lang = lang_map[document] - preview, answer, answer_text = process_prompt(prompt, document, lang, "docprompt_v1") - return document, prompt, preview, gr.update(visible=True), answer, answer_text - else: - return None, None, None, gr.update(visible=False), None, None - - -def read_content(file_path: str) -> str: - """read the content of target file - """ - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - - return content - - -CSS = """ -#prompt input { - font-size: 16px; -} -#url-textbox { - padding: 0 !important; -} -#short-upload-box .w-full { - min-height: 10rem !important; -} -/* I think something like this can be used to re-shape - * the table - */ -/* -.gr-samples-table tr { - display: inline; -} -.gr-samples-table .p-2 { - width: 100px; -} -*/ -#select-a-file { - width: 100%; -} -#file-clear { - padding-top: 2px !important; - padding-bottom: 2px !important; - padding-left: 8px !important; - padding-right: 8px !important; - margin-top: 10px; -} -.gradio-container .gr-button-primary { - background: linear-gradient(180deg, #CDF9BE 0%, #AFF497 100%); - border: 1px solid #B0DCCC; - border-radius: 8px; - color: #1B8700; -} -.gradio-container.dark button#submit-button { - background: linear-gradient(180deg, #CDF9BE 0%, #AFF497 100%); - border: 1px solid #B0DCCC; - border-radius: 8px; - color: #1B8700 -} -table.gr-samples-table tr td { - border: none; - outline: none; -} -table.gr-samples-table tr td:first-of-type { - width: 0%; -} -div#short-upload-box div.absolute { - display: none !important; -} -gradio-app > div > div > div > div.w-full > div, .gradio-app > div > div > div > div.w-full > div { - gap: 0px 2%; -} -gradio-app div div div div.w-full, .gradio-app div div div div.w-full { - gap: 0px; -} -gradio-app h2, .gradio-app h2 { - padding-top: 10px; -} -#answer { - overflow-y: scroll; - color: white; - background: #666; - border-color: #666; - font-size: 20px; - font-weight: bold; -} -#answer span { - color: white; -} -#answer textarea { - color:white; - background: #777; - border-color: #777; - font-size: 18px; -} -#url-error input { - color: red; -} -""" - -with gr.Blocks(css=CSS) as demo: - gr.HTML(read_content("header.html")) - gr.Markdown( - "DocPrompt🔖 is a Document Prompt Engine using ERNIE-Layout as the backbone model." - "The engine is powered by BAIDU WenXin Document Intelligence Team " - "and has the ability for multilingual documents information extraction and question ansering. " - "For more details, please visit the [Github](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/model_zoo/ernie-layout)." 
- "ERNIE-Layout paper please refer to [ERNIE-Layout](https://paperswithcode.com/paper/ernie-layout-layout-knowledge-enhanced-pre)" - ) - - document = gr.Variable() - example_prompt = gr.Textbox(visible=False) - example_image = gr.Image(visible=False) - with gr.Row(equal_height=True): - with gr.Column(): - with gr.Row(): - gr.Markdown("## 1. Select a file", elem_id="select-a-file") - img_clear_button = gr.Button( - "Clear", variant="secondary", elem_id="file-clear", visible=False - ) - image = gr.Gallery(visible=False) - with gr.Row(equal_height=True): - with gr.Column(): - with gr.Row(): - url = gr.Textbox( - show_label=False, - placeholder="URL", - lines=1, - max_lines=1, - elem_id="url-textbox", - ) - submit = gr.Button("Get") - url_error = gr.Textbox( - visible=False, - elem_id="url-error", - max_lines=1, - interactive=False, - label="Error", - ) - gr.Markdown("— or —") - upload = gr.File(label=None, interactive=True, elem_id="short-upload-box") - gr.Examples( - examples=examples, - inputs=[example_image, example_prompt], - ) - - with gr.Column() as col: - gr.Markdown("## 2. Make a request") - prompt = gr.Textbox( - label="Prompt (No restrictions on the setting of prompt. You can type any prompt.)", - placeholder="e.g. What is the total actual and/or obligated expenses of ECG Center?", - lines=1, - max_lines=1, - ) - ocr_lang = gr.Radio( - choices=["ch", "en"], - value="en", - label="Select OCR Language (Please choose ch for Chinese images.)", - ) - model = gr.Radio( - choices=["docprompt_v1", "docprompt_v2"], - value="docprompt_v1", - label="Select Inference Model.", - ) - - with gr.Row(): - clear_button = gr.Button("Clear", variant="secondary") - submit_button = gr.Button( - "Submit", variant="primary", elem_id="submit-button" - ) - with gr.Column(): - output_text = gr.Textbox( - label="Top Answer", visible=False, elem_id="answer" - ) - output = gr.JSON(label="Output", visible=False) - - for cb in [img_clear_button, clear_button]: - cb.click( - lambda _: ( - gr.update(visible=False, value=None), - None, - gr.update(visible=False, value=None), - gr.update(visible=False, value=None), - gr.update(visible=False), - None, - None, - None, - gr.update(visible=False, value=None), - None, - ), - inputs=clear_button, - outputs=[ - image, - document, - output, - output_text, - img_clear_button, - example_image, - upload, - url, - url_error, - prompt, - ], - ) - - upload.change( - fn=process_upload, - inputs=[upload], - outputs=[document, image, img_clear_button, output, output_text, url_error], - ) - submit.click( - fn=process_path, - inputs=[url], - outputs=[document, image, img_clear_button, output, output_text, url_error], - ) - - prompt.submit( - fn=process_prompt, - inputs=[prompt, document, ocr_lang, model], - outputs=[image, output, output_text], - ) - - submit_button.click( - fn=process_prompt, - inputs=[prompt, document, ocr_lang, model], - outputs=[image, output, output_text], - ) - - example_image.change( - fn=load_example_document, - inputs=[example_image, example_prompt], - outputs=[document, prompt, image, img_clear_button, output, output_text], - ) - - gr.Markdown("[![Stargazers repo roster for @PaddlePaddle/PaddleNLP](https://reporoster.com/stars/PaddlePaddle/PaddleNLP)](https://github.com/PaddlePaddle/PaddleNLP)") - gr.HTML(read_content("footer.html")) - - -if __name__ == "__main__": - demo.launch(enable_queue=False) \ No newline at end of file diff --git a/spaces/PeepDaSlan9/Bark-Voice-Cloning/training/training_prepare.py 
b/spaces/PeepDaSlan9/Bark-Voice-Cloning/training/training_prepare.py deleted file mode 100644 index da4b30622d096fe636a0db358c43336eeef4d959..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/Bark-Voice-Cloning/training/training_prepare.py +++ /dev/null @@ -1,73 +0,0 @@ -import random -import uuid -import numpy -import os -import random -import fnmatch - -from tqdm.auto import tqdm -from scipy.io import wavfile - -from bark.generation import load_model, SAMPLE_RATE -from bark.api import semantic_to_waveform - -from bark import text_to_semantic -from bark.generation import load_model - -from training.data import load_books, random_split_chunk - -output = 'training/data/output' -output_wav = 'training/data/output_wav' - - -def prepare_semantics_from_text(num_generations): - loaded_data = load_books(True) - - print('Loading semantics model') - load_model(use_gpu=True, use_small=False, force_reload=False, model_type='text') - - if not os.path.isdir(output): - os.mkdir(output) - - loop = 1 - while 1: - filename = uuid.uuid4().hex + '.npy' - file_name = os.path.join(output, filename) - text = '' - while not len(text) > 0: - text = random_split_chunk(loaded_data) # Obtain a short chunk of text - text = text.strip() - print(f'{loop} Generating semantics for text:', text) - loop+=1 - semantics = text_to_semantic(text, temp=round(random.uniform(0.6, 0.8), ndigits=2)) - numpy.save(file_name, semantics) - - -def prepare_wavs_from_semantics(): - if not os.path.isdir(output): - raise Exception('No \'output\' folder, make sure you run create_data.py first!') - if not os.path.isdir(output_wav): - os.mkdir(output_wav) - - print('Loading coarse model') - load_model(use_gpu=True, use_small=False, force_reload=False, model_type='coarse') - print('Loading fine model') - load_model(use_gpu=True, use_small=False, force_reload=False, model_type='fine') - - files = fnmatch.filter(os.listdir(output), '*.npy') - current = 1 - total = len(files) - - for i, f in tqdm(enumerate(files), total=len(files)): - real_name = '.'.join(f.split('.')[:-1]) # Cut off the extension - file_name = os.path.join(output, f) - out_file = os.path.join(output_wav, f'{real_name}.wav') - if not os.path.isfile(out_file) and os.path.isfile(file_name): # Don't process files that have already been processed, to be able to continue previous generations - print(f'Processing ({i+1}/{total}) -> {f}') - wav = semantic_to_waveform(numpy.load(file_name), temp=round(random.uniform(0.6, 0.8), ndigits=2)) - # Change to PCM16 - # wav = (wav * 32767).astype(np.int16) - wavfile.write(out_file, SAMPLE_RATE, wav) - - print('Done!') - diff --git a/spaces/Pie31415/control-animation/annotator/midas/midas/midas_net_custom.py b/spaces/Pie31415/control-animation/annotator/midas/midas/midas_net_custom.py deleted file mode 100644 index 50e4acb5e53d5fabefe3dde16ab49c33c2b7797c..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/midas/midas/midas_net_custom.py +++ /dev/null @@ -1,128 +0,0 @@ -"""MidashNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import FeatureFusionBlock, FeatureFusionBlock_custom, Interpolate, _make_encoder - - -class MidasNet_small(BaseModel): - """Network for monocular depth estimation. 
- """ - - def __init__(self, path=None, features=64, backbone="efficientnet_lite3", non_negative=True, exportable=True, channels_last=False, align_corners=True, - blocks={'expand': True}): - """Init. - - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. - backbone (str, optional): Backbone network for encoder. Defaults to resnet50 - """ - print("Loading weights: ", path) - - super(MidasNet_small, self).__init__() - - use_pretrained = False if path else True - - self.channels_last = channels_last - self.blocks = blocks - self.backbone = backbone - - self.groups = 1 - - features1=features - features2=features - features3=features - features4=features - self.expand = False - if "expand" in self.blocks and self.blocks['expand'] == True: - self.expand = True - features1=features - features2=features*2 - features3=features*4 - features4=features*8 - - self.pretrained, self.scratch = _make_encoder(self.backbone, features, use_pretrained, groups=self.groups, expand=self.expand, exportable=exportable) - - self.scratch.activation = nn.ReLU(False) - - self.scratch.refinenet4 = FeatureFusionBlock_custom(features4, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet3 = FeatureFusionBlock_custom(features3, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet2 = FeatureFusionBlock_custom(features2, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet1 = FeatureFusionBlock_custom(features1, self.scratch.activation, deconv=False, bn=False, align_corners=align_corners) - - - self.scratch.output_conv = nn.Sequential( - nn.Conv2d(features, features//2, kernel_size=3, stride=1, padding=1, groups=self.groups), - Interpolate(scale_factor=2, mode="bilinear"), - nn.Conv2d(features//2, 32, kernel_size=3, stride=1, padding=1), - self.scratch.activation, - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - if path: - self.load(path) - - - def forward(self, x): - """Forward pass. 
- - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - if self.channels_last==True: - print("self.channels_last = ", self.channels_last) - x.contiguous(memory_format=torch.channels_last) - - - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return torch.squeeze(out, dim=1) - - - -def fuse_model(m): - prev_previous_type = nn.Identity() - prev_previous_name = '' - previous_type = nn.Identity() - previous_name = '' - for name, module in m.named_modules(): - if prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d and type(module) == nn.ReLU: - # print("FUSED ", prev_previous_name, previous_name, name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name, name], inplace=True) - elif prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d: - # print("FUSED ", prev_previous_name, previous_name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name], inplace=True) - # elif previous_type == nn.Conv2d and type(module) == nn.ReLU: - # print("FUSED ", previous_name, name) - # torch.quantization.fuse_modules(m, [previous_name, name], inplace=True) - - prev_previous_type = previous_type - prev_previous_name = previous_name - previous_type = type(module) - previous_name = name \ No newline at end of file diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/image/photometric.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/image/photometric.py deleted file mode 100644 index 5085d012019c0cbf56f66f421a378278c1a058ae..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/image/photometric.py +++ /dev/null @@ -1,428 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import cv2 -import numpy as np - -from ..utils import is_tuple_of -from .colorspace import bgr2gray, gray2bgr - - -def imnormalize(img, mean, std, to_rgb=True): - """Normalize an image with mean and std. - - Args: - img (ndarray): Image to be normalized. - mean (ndarray): The mean to be used for normalize. - std (ndarray): The std to be used for normalize. - to_rgb (bool): Whether to convert to rgb. - - Returns: - ndarray: The normalized image. - """ - img = img.copy().astype(np.float32) - return imnormalize_(img, mean, std, to_rgb) - - -def imnormalize_(img, mean, std, to_rgb=True): - """Inplace normalize an image with mean and std. - - Args: - img (ndarray): Image to be normalized. - mean (ndarray): The mean to be used for normalize. - std (ndarray): The std to be used for normalize. - to_rgb (bool): Whether to convert to rgb. - - Returns: - ndarray: The normalized image. 
- """ - # cv2 inplace normalization does not accept uint8 - assert img.dtype != np.uint8 - mean = np.float64(mean.reshape(1, -1)) - stdinv = 1 / np.float64(std.reshape(1, -1)) - if to_rgb: - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) # inplace - cv2.subtract(img, mean, img) # inplace - cv2.multiply(img, stdinv, img) # inplace - return img - - -def imdenormalize(img, mean, std, to_bgr=True): - assert img.dtype != np.uint8 - mean = mean.reshape(1, -1).astype(np.float64) - std = std.reshape(1, -1).astype(np.float64) - img = cv2.multiply(img, std) # make a copy - cv2.add(img, mean, img) # inplace - if to_bgr: - cv2.cvtColor(img, cv2.COLOR_RGB2BGR, img) # inplace - return img - - -def iminvert(img): - """Invert (negate) an image. - - Args: - img (ndarray): Image to be inverted. - - Returns: - ndarray: The inverted image. - """ - return np.full_like(img, 255) - img - - -def solarize(img, thr=128): - """Solarize an image (invert all pixel values above a threshold) - - Args: - img (ndarray): Image to be solarized. - thr (int): Threshold for solarizing (0 - 255). - - Returns: - ndarray: The solarized image. - """ - img = np.where(img < thr, img, 255 - img) - return img - - -def posterize(img, bits): - """Posterize an image (reduce the number of bits for each color channel) - - Args: - img (ndarray): Image to be posterized. - bits (int): Number of bits (1 to 8) to use for posterizing. - - Returns: - ndarray: The posterized image. - """ - shift = 8 - bits - img = np.left_shift(np.right_shift(img, shift), shift) - return img - - -def adjust_color(img, alpha=1, beta=None, gamma=0): - r"""It blends the source image and its gray image: - - .. math:: - output = img * alpha + gray\_img * beta + gamma - - Args: - img (ndarray): The input source image. - alpha (int | float): Weight for the source image. Default 1. - beta (int | float): Weight for the converted gray image. - If None, it's assigned the value (1 - `alpha`). - gamma (int | float): Scalar added to each sum. - Same as :func:`cv2.addWeighted`. Default 0. - - Returns: - ndarray: Colored image which has the same size and dtype as input. - """ - gray_img = bgr2gray(img) - gray_img = np.tile(gray_img[..., None], [1, 1, 3]) - if beta is None: - beta = 1 - alpha - colored_img = cv2.addWeighted(img, alpha, gray_img, beta, gamma) - if not colored_img.dtype == np.uint8: - # Note when the dtype of `img` is not the default `np.uint8` - # (e.g. np.float32), the value in `colored_img` got from cv2 - # is not guaranteed to be in range [0, 255], so here clip - # is needed. - colored_img = np.clip(colored_img, 0, 255) - return colored_img - - -def imequalize(img): - """Equalize the image histogram. - - This function applies a non-linear mapping to the input image, - in order to create a uniform distribution of grayscale values - in the output image. - - Args: - img (ndarray): Image to be equalized. - - Returns: - ndarray: The equalized image. - """ - - def _scale_channel(im, c): - """Scale the data in the corresponding channel.""" - im = im[:, :, c] - # Compute the histogram of the image channel. - histo = np.histogram(im, 256, (0, 255))[0] - # For computing the step, filter out the nonzeros. - nonzero_histo = histo[histo > 0] - step = (np.sum(nonzero_histo) - nonzero_histo[-1]) // 255 - if not step: - lut = np.array(range(256)) - else: - # Compute the cumulative sum, shifted by step // 2 - # and then normalized by step. - lut = (np.cumsum(histo) + (step // 2)) // step - # Shift lut, prepending with 0. 
- lut = np.concatenate([[0], lut[:-1]], 0) - # handle potential integer overflow - lut[lut > 255] = 255 - # If step is zero, return the original image. - # Otherwise, index from lut. - return np.where(np.equal(step, 0), im, lut[im]) - - # Scales each channel independently and then stacks - # the result. - s1 = _scale_channel(img, 0) - s2 = _scale_channel(img, 1) - s3 = _scale_channel(img, 2) - equalized_img = np.stack([s1, s2, s3], axis=-1) - return equalized_img.astype(img.dtype) - - -def adjust_brightness(img, factor=1.): - """Adjust image brightness. - - This function controls the brightness of an image. An - enhancement factor of 0.0 gives a black image. - A factor of 1.0 gives the original image. This function - blends the source image and the degenerated black image: - - .. math:: - output = img * factor + degenerated * (1 - factor) - - Args: - img (ndarray): Image to be brightened. - factor (float): A value controls the enhancement. - Factor 1.0 returns the original image, lower - factors mean less color (brightness, contrast, - etc), and higher values more. Default 1. - - Returns: - ndarray: The brightened image. - """ - degenerated = np.zeros_like(img) - # Note manually convert the dtype to np.float32, to - # achieve as close results as PIL.ImageEnhance.Brightness. - # Set beta=1-factor, and gamma=0 - brightened_img = cv2.addWeighted( - img.astype(np.float32), factor, degenerated.astype(np.float32), - 1 - factor, 0) - brightened_img = np.clip(brightened_img, 0, 255) - return brightened_img.astype(img.dtype) - - -def adjust_contrast(img, factor=1.): - """Adjust image contrast. - - This function controls the contrast of an image. An - enhancement factor of 0.0 gives a solid grey - image. A factor of 1.0 gives the original image. It - blends the source image and the degenerated mean image: - - .. math:: - output = img * factor + degenerated * (1 - factor) - - Args: - img (ndarray): Image to be contrasted. BGR order. - factor (float): Same as :func:`mmcv.adjust_brightness`. - - Returns: - ndarray: The contrasted image. - """ - gray_img = bgr2gray(img) - hist = np.histogram(gray_img, 256, (0, 255))[0] - mean = round(np.sum(gray_img) / np.sum(hist)) - degenerated = (np.ones_like(img[..., 0]) * mean).astype(img.dtype) - degenerated = gray2bgr(degenerated) - contrasted_img = cv2.addWeighted( - img.astype(np.float32), factor, degenerated.astype(np.float32), - 1 - factor, 0) - contrasted_img = np.clip(contrasted_img, 0, 255) - return contrasted_img.astype(img.dtype) - - -def auto_contrast(img, cutoff=0): - """Auto adjust image contrast. - - This function maximize (normalize) image contrast by first removing cutoff - percent of the lightest and darkest pixels from the histogram and remapping - the image so that the darkest pixel becomes black (0), and the lightest - becomes white (255). - - Args: - img (ndarray): Image to be contrasted. BGR order. - cutoff (int | float | tuple): The cutoff percent of the lightest and - darkest pixels to be removed. If given as tuple, it shall be - (low, high). Otherwise, the single value will be used for both. - Defaults to 0. - - Returns: - ndarray: The contrasted image. - """ - - def _auto_contrast_channel(im, c, cutoff): - im = im[:, :, c] - # Compute the histogram of the image channel. 
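-        # (The cutoff is applied on the cumulative histogram: clipping the
-        # running sum at the low/high cut points and differencing it back
-        # recovers a histogram with the darkest and lightest `cutoff`
-        # percent of pixels removed.)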
-        histo = np.histogram(im, 256, (0, 255))[0]
-        # Remove cut-off percent pixels from histo
-        histo_sum = np.cumsum(histo)
-        cut_low = histo_sum[-1] * cutoff[0] // 100
-        cut_high = histo_sum[-1] - histo_sum[-1] * cutoff[1] // 100
-        histo_sum = np.clip(histo_sum, cut_low, cut_high) - cut_low
-        histo = np.concatenate([[histo_sum[0]], np.diff(histo_sum)], 0)
-
-        # Compute mapping
-        low, high = np.nonzero(histo)[0][0], np.nonzero(histo)[0][-1]
-        # If all the values have been cut off, return the original image
-        if low >= high:
-            return im
-        scale = 255.0 / (high - low)
-        offset = -low * scale
-        lut = np.array(range(256))
-        lut = lut * scale + offset
-        lut = np.clip(lut, 0, 255)
-        return lut[im]
-
-    if isinstance(cutoff, (int, float)):
-        cutoff = (cutoff, cutoff)
-    else:
-        assert isinstance(cutoff, tuple), 'cutoff must be of type int, ' \
-            f'float or tuple, but got {type(cutoff)} instead.'
-    # Auto adjusts contrast for each channel independently and then stacks
-    # the result.
-    s1 = _auto_contrast_channel(img, 0, cutoff)
-    s2 = _auto_contrast_channel(img, 1, cutoff)
-    s3 = _auto_contrast_channel(img, 2, cutoff)
-    contrasted_img = np.stack([s1, s2, s3], axis=-1)
-    return contrasted_img.astype(img.dtype)
-
-
-def adjust_sharpness(img, factor=1., kernel=None):
-    """Adjust image sharpness.
-
-    This function controls the sharpness of an image. An
-    enhancement factor of 0.0 gives a blurred image, a
-    factor of 1.0 gives the original image, and a factor
-    of 2.0 gives a sharpened image. It blends the source
-    image and the degenerated mean image:
-
-    .. math::
-        output = img * factor + degenerated * (1 - factor)
-
-    Args:
-        img (ndarray): Image to be sharpened. BGR order.
-        factor (float): Same as :func:`mmcv.adjust_brightness`.
-        kernel (np.ndarray, optional): Filter kernel to be applied on the img
-            to obtain the degenerated img. Defaults to None.
-
-    Note:
-        No value sanity check is enforced on the kernel set by users. With an
-        inappropriate kernel, ``adjust_sharpness`` may fail to perform the
-        function its name indicates, and instead apply whatever transform the
-        kernel determines.
-
-    Returns:
-        ndarray: The sharpened image.
-    """
-
-    if kernel is None:
-        # adopted from PIL.ImageFilter.SMOOTH
-        kernel = np.array([[1., 1., 1.], [1., 5., 1.], [1., 1., 1.]]) / 13
-    assert isinstance(kernel, np.ndarray), \
-        f'kernel must be of type np.ndarray, but got {type(kernel)} instead.'
-    assert kernel.ndim == 2, \
-        f'kernel must have a dimension of 2, but got {kernel.ndim} instead.'
-
-    degenerated = cv2.filter2D(img, -1, kernel)
-    sharpened_img = cv2.addWeighted(
-        img.astype(np.float32), factor, degenerated.astype(np.float32),
-        1 - factor, 0)
-    sharpened_img = np.clip(sharpened_img, 0, 255)
-    return sharpened_img.astype(img.dtype)
-
-
-def adjust_lighting(img, eigval, eigvec, alphastd=0.1, to_rgb=True):
-    """AlexNet-style PCA jitter.
-
-    This data augmentation is proposed in `ImageNet Classification with Deep
-    Convolutional Neural Networks
-    `_.
-
-    Args:
-        img (ndarray): Image whose lighting is to be adjusted. BGR order.
-        eigval (ndarray): The eigenvalues of the covariance matrix of pixel
-            values.
-        eigvec (ndarray): The eigenvectors of the covariance matrix of pixel
-            values.
-        alphastd (float): The standard deviation for the distribution of alpha.
-            Defaults to 0.1.
-        to_rgb (bool): Whether to convert the image to RGB.
-
-    Returns:
-        ndarray: The adjusted image.
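-
-    Example (a minimal sketch; the eigval/eigvec numbers are the widely used
-    AlexNet-style ImageNet PCA statistics, shown here only as plausible
-    inputs):
-
-    >>> img = (np.random.rand(8, 8, 3) * 255).astype(np.float32)
-    >>> eigval = np.array([55.4625, 4.7940, 1.1475])
-    >>> eigvec = np.array([[-0.5675, 0.7192, 0.4009],
-    ...                    [-0.5808, -0.0045, -0.8140],
-    ...                    [-0.5836, -0.6948, 0.4203]])
-    >>> out = adjust_lighting(img, eigval, eigvec, alphastd=0.1)
-    >>> out.shape
-    (8, 8, 3)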
- """ - assert isinstance(eigval, np.ndarray) and isinstance(eigvec, np.ndarray), \ - f'eigval and eigvec should both be of type np.ndarray, got ' \ - f'{type(eigval)} and {type(eigvec)} instead.' - - assert eigval.ndim == 1 and eigvec.ndim == 2 - assert eigvec.shape == (3, eigval.shape[0]) - n_eigval = eigval.shape[0] - assert isinstance(alphastd, float), 'alphastd should be of type float, ' \ - f'got {type(alphastd)} instead.' - - img = img.copy().astype(np.float32) - if to_rgb: - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) # inplace - - alpha = np.random.normal(0, alphastd, n_eigval) - alter = eigvec \ - * np.broadcast_to(alpha.reshape(1, n_eigval), (3, n_eigval)) \ - * np.broadcast_to(eigval.reshape(1, n_eigval), (3, n_eigval)) - alter = np.broadcast_to(alter.sum(axis=1).reshape(1, 1, 3), img.shape) - img_adjusted = img + alter - return img_adjusted - - -def lut_transform(img, lut_table): - """Transform array by look-up table. - - The function lut_transform fills the output array with values from the - look-up table. Indices of the entries are taken from the input array. - - Args: - img (ndarray): Image to be transformed. - lut_table (ndarray): look-up table of 256 elements; in case of - multi-channel input array, the table should either have a single - channel (in this case the same table is used for all channels) or - the same number of channels as in the input array. - - Returns: - ndarray: The transformed image. - """ - assert isinstance(img, np.ndarray) - assert 0 <= np.min(img) and np.max(img) <= 255 - assert isinstance(lut_table, np.ndarray) - assert lut_table.shape == (256, ) - - return cv2.LUT(np.array(img, dtype=np.uint8), lut_table) - - -def clahe(img, clip_limit=40.0, tile_grid_size=(8, 8)): - """Use CLAHE method to process the image. - - See `ZUIDERVELD,K. Contrast Limited Adaptive Histogram Equalization[J]. - Graphics Gems, 1994:474-485.` for more information. - - Args: - img (ndarray): Image to be processed. - clip_limit (float): Threshold for contrast limiting. Default: 40.0. - tile_grid_size (tuple[int]): Size of grid for histogram equalization. - Input image will be divided into equally sized rectangular tiles. - It defines the number of tiles in row and column. Default: (8, 8). - - Returns: - ndarray: The processed image. - """ - assert isinstance(img, np.ndarray) - assert img.ndim == 2 - assert isinstance(clip_limit, (float, int)) - assert is_tuple_of(tile_grid_size, int) - assert len(tile_grid_size) == 2 - - clahe = cv2.createCLAHE(clip_limit, tile_grid_size) - return clahe.apply(np.array(img, dtype=np.uint8)) diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/multi_scale_deform_attn.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/multi_scale_deform_attn.py deleted file mode 100644 index c52dda18b41705705b47dd0e995b124048c16fba..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/multi_scale_deform_attn.py +++ /dev/null @@ -1,358 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math
-import warnings
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.autograd.function import Function, once_differentiable
-
-from annotator.uniformer.mmcv import deprecated_api_warning
-from annotator.uniformer.mmcv.cnn import constant_init, xavier_init
-from annotator.uniformer.mmcv.cnn.bricks.registry import ATTENTION
-from annotator.uniformer.mmcv.runner import BaseModule
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
-    '_ext', ['ms_deform_attn_backward', 'ms_deform_attn_forward'])
-
-
-class MultiScaleDeformableAttnFunction(Function):
-
-    @staticmethod
-    def forward(ctx, value, value_spatial_shapes, value_level_start_index,
-                sampling_locations, attention_weights, im2col_step):
-        """GPU version of multi-scale deformable attention.
-
-        Args:
-            value (Tensor): The value has shape
-                (bs, num_keys, num_heads, embed_dims//num_heads)
-            value_spatial_shapes (Tensor): Spatial shape of
-                each feature map, has shape (num_levels, 2),
-                last dimension 2 represents (h, w)
-            value_level_start_index (Tensor): The start index of each level,
-                has shape (num_levels, )
-            sampling_locations (Tensor): The location of sampling points,
-                has shape
-                (bs, num_queries, num_heads, num_levels, num_points, 2),
-                the last dimension 2 represents (x, y).
-            attention_weights (Tensor): The weight of sampling points used
-                when calculating the attention, has shape
-                (bs, num_queries, num_heads, num_levels, num_points)
-            im2col_step (int): The step used in image to column.
-
-        Returns:
-            Tensor: has shape (bs, num_queries, embed_dims)
-        """
-
-        ctx.im2col_step = im2col_step
-        output = ext_module.ms_deform_attn_forward(
-            value,
-            value_spatial_shapes,
-            value_level_start_index,
-            sampling_locations,
-            attention_weights,
-            im2col_step=ctx.im2col_step)
-        ctx.save_for_backward(value, value_spatial_shapes,
-                              value_level_start_index, sampling_locations,
-                              attention_weights)
-        return output
-
-    @staticmethod
-    @once_differentiable
-    def backward(ctx, grad_output):
-        """GPU version of the backward function.
-
-        Args:
-            grad_output (Tensor): Gradient of the output tensor from the
-                forward pass.
-
-        Returns:
-            Tuple[Tensor]: Gradients of the input tensors from the
-                forward pass.
-        """
-        value, value_spatial_shapes, value_level_start_index,\
-            sampling_locations, attention_weights = ctx.saved_tensors
-        grad_value = torch.zeros_like(value)
-        grad_sampling_loc = torch.zeros_like(sampling_locations)
-        grad_attn_weight = torch.zeros_like(attention_weights)
-
-        ext_module.ms_deform_attn_backward(
-            value,
-            value_spatial_shapes,
-            value_level_start_index,
-            sampling_locations,
-            attention_weights,
-            grad_output.contiguous(),
-            grad_value,
-            grad_sampling_loc,
-            grad_attn_weight,
-            im2col_step=ctx.im2col_step)
-
-        return grad_value, None, None, \
-            grad_sampling_loc, grad_attn_weight, None
-
-
-def multi_scale_deformable_attn_pytorch(value, value_spatial_shapes,
-                                        sampling_locations, attention_weights):
-    """CPU version of multi-scale deformable attention.
-
-    Args:
-        value (Tensor): The value has shape
-            (bs, num_keys, num_heads, embed_dims//num_heads)
-        value_spatial_shapes (Tensor): Spatial shape of
-            each feature map, has shape (num_levels, 2),
-            last dimension 2 represents (h, w)
-        sampling_locations (Tensor): The location of sampling points,
-            has shape
-            (bs, num_queries, num_heads, num_levels, num_points, 2),
-            the last dimension 2 represents (x, y).
-        attention_weights (Tensor): The weight of sampling points used
-            when calculating the attention, has shape
-            (bs, num_queries, num_heads, num_levels, num_points)
-
-    Returns:
-        Tensor: has shape (bs, num_queries, embed_dims)
-    """
-
-    bs, _, num_heads, embed_dims = value.shape
-    _, num_queries, num_heads, num_levels, num_points, _ =\
-        sampling_locations.shape
-    value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes],
-                             dim=1)
-    sampling_grids = 2 * sampling_locations - 1
-    sampling_value_list = []
-    for level, (H_, W_) in enumerate(value_spatial_shapes):
-        # bs, H_*W_, num_heads, embed_dims ->
-        # bs, H_*W_, num_heads*embed_dims ->
-        # bs, num_heads*embed_dims, H_*W_ ->
-        # bs*num_heads, embed_dims, H_, W_
-        value_l_ = value_list[level].flatten(2).transpose(1, 2).reshape(
-            bs * num_heads, embed_dims, H_, W_)
-        # bs, num_queries, num_heads, num_points, 2 ->
-        # bs, num_heads, num_queries, num_points, 2 ->
-        # bs*num_heads, num_queries, num_points, 2
-        sampling_grid_l_ = sampling_grids[:, :, :,
-                                          level].transpose(1, 2).flatten(0, 1)
-        # bs*num_heads, embed_dims, num_queries, num_points
-        sampling_value_l_ = F.grid_sample(
-            value_l_,
-            sampling_grid_l_,
-            mode='bilinear',
-            padding_mode='zeros',
-            align_corners=False)
-        sampling_value_list.append(sampling_value_l_)
-    # (bs, num_queries, num_heads, num_levels, num_points) ->
-    # (bs, num_heads, num_queries, num_levels, num_points) ->
-    # (bs, num_heads, 1, num_queries, num_levels*num_points)
-    attention_weights = attention_weights.transpose(1, 2).reshape(
-        bs * num_heads, 1, num_queries, num_levels * num_points)
-    output = (torch.stack(sampling_value_list, dim=-2).flatten(-2) *
-              attention_weights).sum(-1).view(bs, num_heads * embed_dims,
-                                              num_queries)
-    return output.transpose(1, 2).contiguous()
-
-
-@ATTENTION.register_module()
-class MultiScaleDeformableAttention(BaseModule):
-    """An attention module used in Deformable DETR.
-
-    `Deformable DETR: Deformable Transformers for End-to-End Object Detection.
-    <https://arxiv.org/abs/2010.04159>`_.
-
-    Args:
-        embed_dims (int): The embedding dimension of Attention.
-            Default: 256.
-        num_heads (int): Parallel attention heads. Default: 8.
-        num_levels (int): The number of feature maps used in
-            Attention. Default: 4.
-        num_points (int): The number of sampling points for
-            each query in each head. Default: 4.
-        im2col_step (int): The step used in image_to_column.
-            Default: 64.
-        dropout (float): A Dropout layer on `inp_identity`.
-            Default: 0.1.
-        batch_first (bool): Key, Query and Value are shape of
-            (batch, n, embed_dim)
-            or (n, batch, embed_dim). Defaults to False.
-        norm_cfg (dict): Config dict for normalization layer.
-            Default: None.
-        init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization.
-            Default: None.
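-
-    Example (a minimal CPU sketch with made-up shapes; it exercises the
-    pure-PyTorch fallback, since the CUDA fast path needs the compiled
-    ``_ext`` extension):
-
-    >>> attn = MultiScaleDeformableAttention(embed_dims=256, num_heads=8)
-    >>> query = torch.rand(100, 2, 256)   # (num_query, bs, embed_dims)
-    >>> value = torch.rand(340, 2, 256)   # sum of h*w over all levels = 340
-    >>> spatial_shapes = torch.tensor([[16, 16], [8, 8], [4, 4], [2, 2]])
-    >>> level_start_index = torch.tensor([0, 256, 320, 336])
-    >>> reference_points = torch.rand(2, 100, 4, 2)
-    >>> out = attn(query, value=value, reference_points=reference_points,
-    ...            spatial_shapes=spatial_shapes,
-    ...            level_start_index=level_start_index)
-    >>> out.shape
-    torch.Size([100, 2, 256])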
- """ - - def __init__(self, - embed_dims=256, - num_heads=8, - num_levels=4, - num_points=4, - im2col_step=64, - dropout=0.1, - batch_first=False, - norm_cfg=None, - init_cfg=None): - super().__init__(init_cfg) - if embed_dims % num_heads != 0: - raise ValueError(f'embed_dims must be divisible by num_heads, ' - f'but got {embed_dims} and {num_heads}') - dim_per_head = embed_dims // num_heads - self.norm_cfg = norm_cfg - self.dropout = nn.Dropout(dropout) - self.batch_first = batch_first - - # you'd better set dim_per_head to a power of 2 - # which is more efficient in the CUDA implementation - def _is_power_of_2(n): - if (not isinstance(n, int)) or (n < 0): - raise ValueError( - 'invalid input for _is_power_of_2: {} (type: {})'.format( - n, type(n))) - return (n & (n - 1) == 0) and n != 0 - - if not _is_power_of_2(dim_per_head): - warnings.warn( - "You'd better set embed_dims in " - 'MultiScaleDeformAttention to make ' - 'the dimension of each attention head a power of 2 ' - 'which is more efficient in our CUDA implementation.') - - self.im2col_step = im2col_step - self.embed_dims = embed_dims - self.num_levels = num_levels - self.num_heads = num_heads - self.num_points = num_points - self.sampling_offsets = nn.Linear( - embed_dims, num_heads * num_levels * num_points * 2) - self.attention_weights = nn.Linear(embed_dims, - num_heads * num_levels * num_points) - self.value_proj = nn.Linear(embed_dims, embed_dims) - self.output_proj = nn.Linear(embed_dims, embed_dims) - self.init_weights() - - def init_weights(self): - """Default initialization for Parameters of Module.""" - constant_init(self.sampling_offsets, 0.) - thetas = torch.arange( - self.num_heads, - dtype=torch.float32) * (2.0 * math.pi / self.num_heads) - grid_init = torch.stack([thetas.cos(), thetas.sin()], -1) - grid_init = (grid_init / - grid_init.abs().max(-1, keepdim=True)[0]).view( - self.num_heads, 1, 1, - 2).repeat(1, self.num_levels, self.num_points, 1) - for i in range(self.num_points): - grid_init[:, :, i, :] *= i + 1 - - self.sampling_offsets.bias.data = grid_init.view(-1) - constant_init(self.attention_weights, val=0., bias=0.) - xavier_init(self.value_proj, distribution='uniform', bias=0.) - xavier_init(self.output_proj, distribution='uniform', bias=0.) - self._is_init = True - - @deprecated_api_warning({'residual': 'identity'}, - cls_name='MultiScaleDeformableAttention') - def forward(self, - query, - key=None, - value=None, - identity=None, - query_pos=None, - key_padding_mask=None, - reference_points=None, - spatial_shapes=None, - level_start_index=None, - **kwargs): - """Forward Function of MultiScaleDeformAttention. - - Args: - query (Tensor): Query of Transformer with shape - (num_query, bs, embed_dims). - key (Tensor): The key tensor with shape - `(num_key, bs, embed_dims)`. - value (Tensor): The value tensor with shape - `(num_key, bs, embed_dims)`. - identity (Tensor): The tensor used for addition, with the - same shape as `query`. Default None. If None, - `query` will be used. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. Default - None. - reference_points (Tensor): The normalized reference - points with shape (bs, num_query, num_levels, 2), - all elements is range in [0, 1], top-left (0,0), - bottom-right (1, 1), including padding area. - or (N, Length_{query}, num_levels, 4), add - additional two dimensions is (w, h) to - form reference boxes. 
- key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_key]. - spatial_shapes (Tensor): Spatial shape of features in - different levels. With shape (num_levels, 2), - last dimension represents (h, w). - level_start_index (Tensor): The start index of each level. - A tensor has shape ``(num_levels, )`` and can be represented - as [0, h_0*w_0, h_0*w_0+h_1*w_1, ...]. - - Returns: - Tensor: forwarded results with shape [num_query, bs, embed_dims]. - """ - - if value is None: - value = query - - if identity is None: - identity = query - if query_pos is not None: - query = query + query_pos - if not self.batch_first: - # change to (bs, num_query ,embed_dims) - query = query.permute(1, 0, 2) - value = value.permute(1, 0, 2) - - bs, num_query, _ = query.shape - bs, num_value, _ = value.shape - assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value - - value = self.value_proj(value) - if key_padding_mask is not None: - value = value.masked_fill(key_padding_mask[..., None], 0.0) - value = value.view(bs, num_value, self.num_heads, -1) - sampling_offsets = self.sampling_offsets(query).view( - bs, num_query, self.num_heads, self.num_levels, self.num_points, 2) - attention_weights = self.attention_weights(query).view( - bs, num_query, self.num_heads, self.num_levels * self.num_points) - attention_weights = attention_weights.softmax(-1) - - attention_weights = attention_weights.view(bs, num_query, - self.num_heads, - self.num_levels, - self.num_points) - if reference_points.shape[-1] == 2: - offset_normalizer = torch.stack( - [spatial_shapes[..., 1], spatial_shapes[..., 0]], -1) - sampling_locations = reference_points[:, :, None, :, None, :] \ - + sampling_offsets \ - / offset_normalizer[None, None, None, :, None, :] - elif reference_points.shape[-1] == 4: - sampling_locations = reference_points[:, :, None, :, None, :2] \ - + sampling_offsets / self.num_points \ - * reference_points[:, :, None, :, None, 2:] \ - * 0.5 - else: - raise ValueError( - f'Last dim of reference_points must be' - f' 2 or 4, but get {reference_points.shape[-1]} instead.') - if torch.cuda.is_available() and value.is_cuda: - output = MultiScaleDeformableAttnFunction.apply( - value, spatial_shapes, level_start_index, sampling_locations, - attention_weights, self.im2col_step) - else: - output = multi_scale_deformable_attn_pytorch( - value, spatial_shapes, sampling_locations, attention_weights) - - output = self.output_proj(output) - - if not self.batch_first: - # (num_query, bs ,embed_dims) - output = output.permute(1, 0, 2) - - return self.dropout(output) + identity diff --git a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/examples/submit_example_3.sh b/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/examples/submit_example_3.sh deleted file mode 100644 index d4b72ee61d9eb3d3ea5e983afae797c34ed847ff..0000000000000000000000000000000000000000 --- a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/examples/submit_example_3.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash -#SBATCH -p gpu -#SBATCH --mem=32g -#SBATCH --gres=gpu:rtx2080:1 -#SBATCH -c 3 -#SBATCH --output=example_3.out - -source activate mlfold - -path_to_PDB="../inputs/PDB_complexes/pdbs/3HTN.pdb" - -output_dir="../outputs/example_3_outputs" -if [ ! 
-d $output_dir ]
-then
-    mkdir -p $output_dir
-fi
-
-chains_to_design="A B"
-
-python ../protein_mpnn_run.py \
-        --pdb_path $path_to_PDB \
-        --pdb_path_chains "$chains_to_design" \
-        --out_folder $output_dir \
-        --num_seq_per_target 2 \
-        --sampling_temp "0.1" \
-        --seed 37 \
-        --batch_size 1
diff --git a/spaces/Ramos-Ramos/albef-vqa/model.py b/spaces/Ramos-Ramos/albef-vqa/model.py
deleted file mode 100644
index c86dfb8b4a120fe0807a2029fafa3a1982445d1a..0000000000000000000000000000000000000000
--- a/spaces/Ramos-Ramos/albef-vqa/model.py
+++ /dev/null
@@ -1,666 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the BSD-style license found in the
-# LICENSE file in the root directory of this source tree.
-
-import copy
-from typing import Any, Callable, Dict, List, Optional, Tuple, Union
-
-import torch
-import torch.nn.functional as F
-from torch import nn, Tensor
-from torchmultimodal.models.albef.image_encoder import ALBEFVisionEncoder
-from torchmultimodal.models.albef.model import ALBEFModel, ALBEFModelWithSimilarity
-from torchmultimodal.models.albef.multimodal_encoder import ALBEFMultimodalEncoder
-from torchmultimodal.modules.encoders.bert_text_encoder import bert_text_encoder
-from torchmultimodal.modules.layers.text_embedding import BERTTextEmbeddings
-from torchmultimodal.modules.losses.albef import (
-    CausalLanguageModelingLoss,
-    ImageTextContrastiveLoss,
-)
-from torchmultimodal.utils.attention import get_causal_attention_mask
-from torchmultimodal.utils.common import momentum_update, remove_grad
-
-
-_ALBEF_PRETRAINED_URLS = {
-    "vqa": "https://download.pytorch.org/models/multimodal/albef/pretrained_vqa_checkpoint.pt",
-    "retrieval": "https://download.pytorch.org/models/multimodal/albef/pretrained_retrieval_checkpoint.pt",
-}
-
-
-class PredictionHead(nn.Module):
-    """
-    Predict the following token autoregressively.
-
-    Args:
-        vocab_size (int): The number of different tokens the prediction_head can predict.
-        hidden_size (int): The hidden size of the prediction_head.
-        layer_norm_eps (float): The epsilon used by the prediction_head normalization layer.
-        transform_act_fn (Callable[[Tensor], Tensor]): The activation function in the prediction_head.
-
-    Inputs:
-        hidden_states (Tensor): The hidden states of preceding tokens.
-
-    Returns:
-        Tensor: Prediction scores for the following token.
-    """
-
-    def __init__(
-        self,
-        vocab_size: int = 30522,
-        hidden_size: int = 768,
-        layer_norm_eps: float = 1e-12,
-        transform_act_fn: Callable[[Tensor], Tensor] = nn.functional.gelu,
-    ) -> None:
-        super().__init__()
-        self.dense = nn.Linear(hidden_size, hidden_size)
-        self.transform_act_fn = transform_act_fn
-        self.layer_norm = nn.LayerNorm(hidden_size, eps=layer_norm_eps)
-        self.decoder = nn.Linear(hidden_size, vocab_size)
-
-    def forward(self, hidden_states: Tensor) -> Tensor:
-        hidden_states = self.dense(hidden_states)
-        hidden_states = self.transform_act_fn(hidden_states)
-        hidden_states = self.layer_norm(hidden_states)
-        hidden_states = self.decoder(hidden_states)
-        return hidden_states
-
-
-class ALBEFDecoder(nn.Module):
-    """
-    Generate the prediction scores for answers from image and question hidden states.
-
-    Args:
-        text_embeddings (BERTTextEmbeddings): Instantiated BERTTextEmbeddings.
-        multimodal_encoder (ALBEFMultimodalEncoder): Instantiated ALBEFMultimodalEncoder.
-        prediction_head (PredictionHead): Instantiated PredictionHead.
- - Inputs: - input_ids (Tensor of shape (batch_size, seq_len)): - Input ids for input text tokens. - attention_mask (Tensor of shape (batch_size, seq_len)): - Input attention mask to avoid performing attention on padding token indices. - encoder_hidden_states (Tensor of shape (batch_size, encoder_seq_len, hidden_size)): - The encoder hidden states. - encoder_attention_mask (Tensor of shape (batch_size, encoder_seq_len)): - The attention mask for encoder hidden states. - - Returns: - Tensor: Prediction scores for answers. - """ - - def __init__( - self, - text_embeddings: BERTTextEmbeddings, - multimodal_encoder: ALBEFMultimodalEncoder, - prediction_head: PredictionHead, - ) -> None: - super().__init__() - self.text_embeddings = text_embeddings - self.multimodal_encoder = multimodal_encoder - self.prediction_head = prediction_head - - def get_extended_attention_mask_for_decoder(self, attention_mask: Tensor) -> Tensor: - """ - Apply a causal mask in addition to the padding mask and make the mask broadcastable, - such that future and masked tokens are ignored. - - Args: - attention_mask (Tensor): - Padding mask with ones indicating tokens to attend to, zeros for tokens to ignore. - - Returns: - extended_attention_mask (Tensor): - The broadcastable attention mask, with the same dtype as ``attention_mask.dtype``. - """ - device = attention_mask.device - batch_size, seq_length = attention_mask.shape - causal_mask = get_causal_attention_mask(seq_length).to(device) - causal_mask = causal_mask.repeat(batch_size, 1).view( - batch_size, seq_length, seq_length - ) - extended_attention_mask = ( - causal_mask[:, None, :, :] * attention_mask[:, None, None, :] - ) - extended_attention_mask = extended_attention_mask.to(dtype=attention_mask.dtype) - return extended_attention_mask - - def forward( - self, - input_ids: Tensor, - attention_mask: Tensor, - encoder_hidden_states: Tensor, - encoder_attention_mask: Tensor, - ) -> Tensor: - hidden_states = self.text_embeddings(input_ids) - attention_mask = self.get_extended_attention_mask_for_decoder(attention_mask) - decoder_output = self.multimodal_encoder( - hidden_states=hidden_states, - attention_mask=attention_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - ) - prediction_scores = self.prediction_head(decoder_output) - return prediction_scores - - -class ALBEFModelForVQA(nn.Module): - """ - ALBEF Model for VQA finetuning and inference. - - Args: - model (ALBEFModel): Instantiated ALBEFModel. - answer_decoder (ALBEFDecoder): Instantiated ALBEFDecoder. - loss (CausalLanguageModelingLoss): Instantiated CausalLanguageModelingLoss. - - Inputs: - image (Tensor of shape (B, C, H, W)): Image features. - question (Tensor of shape (B, L)): Question text features. - question_atts (Tensor of shape (B, L)): Question attention mask. - answers (Tensor of shape (N, M)): Answer text features. - answers_atts (Tensor of shape (N, M)): Answer attention mask. - ans_weights (Optional[Tensor] of shape (N)): Weights for each answer. - Required if is_train is True. - ans_lengths (Optional[List[int]] of length B): Number of answers for each question. - ans_lengths should sum to N. - Required if is_train is True. - alpha (Optional[float]): The interpolation value between clm_loss and loss_distill. - Required if is_train is True. - k (Optional[int]): The number of answers to return for inference. - Required if is_train is False. - is_train (Optional[bool]): Whether the model is in training. 
- - Returns: - is_train is True: - Tensor: The masked language modeling loss for input. - is_train is False: - Tuple[Tensor, Tensor]: The ids and probabilities for the top k predicted answers. - """ - - def __init__( - self, - model: ALBEFModel, - answer_decoder: ALBEFDecoder, - loss: CausalLanguageModelingLoss, - ) -> None: - super().__init__() - self.model = model - self.answer_decoder = answer_decoder - self.loss = loss - self.answer_decoder_m = copy.deepcopy(self.answer_decoder) - remove_grad( - self.answer_decoder_m - ) # remove gradient for the momentum decoder model - - def _train_forward( - self, - image: Tensor, - question: Tensor, - question_atts: Tensor, - answers: Tensor, - answers_atts: Tensor, - ans_weights: Tensor, - ans_lengths: List[int], - alpha: float, - ) -> Tensor: - """ - Forward step for training. Encode the inputs with the ALBEFModel. - Generate pseudo-targets using answer_decoder_m (momentum decoder model). - Generate answer predictions using answer_decoder. - Compute masked language modeling loss of the predictions using answers as labels, - pseudo-targets as soft-labels, and alpha as their interpolation value. - - Inputs: - image (Tensor of shape (B, C, H, W)): Image features. - question (Tensor of shape (B, L)): Question text features. - question_atts (Tensor of shape (B, L)): Question attention mask. - answers (Tensor of shape (N, M)): Answer text features. - answers_atts (Tensor of shape (N, M)): Answer attention mask. - ans_weights (Tensor of shape (N)): Weights for each answer. - ans_lengths (List[int] of length B): Number of answers for each question. - ans_lengths should sum to N. - alpha (float): The interpolation value between clm_loss and loss_distill. - - Returns: - Tensor: The masked language modeling loss for input. - """ - # get image-question embeddings from the ALBEFModel and format it to match the ans_lengths - encoder_outputs = self.model(image, question, question_atts) - ( - encoder_hidden_states, - encoder_hidden_states_m, - encoder_attention_mask, - ) = self._encoder_hidden_states( - encoder_outputs.multimodal_embeddings, - encoder_outputs.multimodal_embeddings_m, - question_atts, - ans_lengths, - ) - - # use the momentum model to generate pseudo-targets - with torch.no_grad(): - momentum_update( - self.answer_decoder, self.answer_decoder_m, self.model.momentum - ) - prediction_scores_m = self.answer_decoder_m( - input_ids=answers, - attention_mask=answers_atts, - encoder_hidden_states=encoder_hidden_states_m, - encoder_attention_mask=encoder_attention_mask, - ) - - # generate answer predictions - prediction_scores = self.answer_decoder( - input_ids=answers, - attention_mask=answers_atts, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - ) - - # compute masked language modeling loss from the prediction scores - labels = answers.masked_fill(answers == 0, self.loss.mask_token_id) - loss = self.loss(labels, prediction_scores, prediction_scores_m, alpha) - loss = ans_weights * loss - loss = loss.sum() / image.size(0) - return loss - - def _eval_forward( - self, - image: Tensor, - question: Tensor, - question_atts: Tensor, - answers: Tensor, - answer_atts: Tensor, - k: int = 128, - ) -> Tuple[Tensor, Tensor]: - """ - Forward step for evaluation. Encode the inputs with the ALBEFModel. - Generate answer autoregressively using the decoder, starting with the [CLS] token. - Compute the answer ids and their perspective probabilities of the top k predictions. 
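-        Candidate answers are pre-selected by the probability of their first
-        token, then re-ranked by the full answer-sequence log-likelihood
-        (chain rule over the remaining tokens) before the top k are returned.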
- - Inputs: - image (Tensor of shape (B, C, H, W)): Image features. - question (Tensor of shape (B, L)): Question text features. - question_atts (Tensor of shape (B, L)): Question attention mask. - answers (Tensor of shape (N, M)): Answer text features. - answer_atts (Tensor of shape (N, M)): Answer attention mask. - k (int): The number of answers to return for inference. - - Returns: - Tuple[Tensor, Tensor]: The ids and probabilities for the top k predicted answers. - """ - # get multimodal embeddings from the ALBEFModel and - # feed it to the decoder as cross attention - encoder_outputs = self.model(image, question, question_atts) - - # use cls token as the decoder's initial input token - num_ques = question.size(0) - start_ids = answers[0, 0].repeat(num_ques, 1) - atts = torch.ones(start_ids.shape).to(image.device) - - # auto-regressively generates the answer - prediction_scores = self.answer_decoder( - input_ids=start_ids, - attention_mask=atts, - encoder_hidden_states=encoder_outputs.multimodal_embeddings, - encoder_attention_mask=question_atts, - ) - - logits = prediction_scores[:, 0, :] - answer_first_token = answers[:, 1] - prob_first_token = F.softmax(logits, dim=1).index_select( - dim=1, index=answer_first_token - ) - topk_probs, topk_ids = prob_first_token.topk(k, dim=1) - - input_ids = [] - input_atts = [] - for topk_id in topk_ids: - input_ids.append(answers.index_select(dim=0, index=topk_id)) - input_atts.append(answer_atts.index_select(dim=0, index=topk_id)) - input_ids = torch.cat(input_ids) - input_atts = torch.cat(input_atts) - targets_ids = input_ids.masked_fill(input_ids == 0, self.loss.mask_token_id) - - question_states = encoder_outputs.multimodal_embeddings.repeat_interleave( - k, dim=0 - ) - question_atts = question_atts.repeat_interleave(k, dim=0) - - prediction_scores = self.answer_decoder( - input_ids=input_ids, - attention_mask=input_atts, - encoder_hidden_states=question_states, - encoder_attention_mask=question_atts, - ) - - answer_loss = self.loss(targets_ids, prediction_scores) - answer_loss = answer_loss.view(input_ids.size(0), -1) - - # topk_prob: first token probability - topk_probs = topk_probs.view(-1, 1) - log_probs = torch.cat([topk_probs.log(), -answer_loss], dim=1) - - # re-calculate log probabilities for the answer sequences using chain rule - log_probs_sum = log_probs.sum(1) - log_probs_sum = log_probs_sum.view(num_ques, k) - - topk_probs = F.softmax(log_probs_sum, dim=-1) - - # get top-k after re-ranking - topk_probs, rerank_id = topk_probs.topk(k, dim=1) - topk_ids = torch.gather(topk_ids, 1, rerank_id) - - return topk_ids, topk_probs - - def _encoder_hidden_states( - self, - multimodal_embeds: Tensor, - multimodal_embeds_m: Tensor, - question_atts: Tensor, - ans_lengths: List[int], - ) -> Tuple[Tensor, Tensor, Tensor]: - """ - Repeat each image-question input, repeat its embedding and mask to match the number of answers it has. - - Args: - multimodal_embeds (Tensor): Image-question embeddings. - multimodal_embeds_m (Tensor): Image-question embeddings from the momentum model. - question_atts (Tensor): Question attention mask. - ans_lengths (List[int]): The number of answers each image-question input has. - - Returns: - encoder_hidden_states (Tensor): Image-question embeddings after the repetition. - encoder_hidden_states_m (Tensor): Image-question embeddings from the momentum model after the repetition. - encoder_attention_mask (Tensor): Question attention mask after the repetition. 
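-
-        Example (shapes only, purely illustrative): with ``multimodal_embeds``
-        of shape (2, L, D) and ``ans_lengths = [2, 3]``, the returned tensors
-        have 5 rows, where rows 0-1 repeat sample 0 and rows 2-4 repeat
-        sample 1.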
- """ - encoder_hidden_states = [] - encoder_attention_mask = [] - for b, n in enumerate(ans_lengths): - encoder_hidden_states += [multimodal_embeds[b]] * n - encoder_attention_mask += [question_atts[b]] * n - encoder_hidden_states = torch.stack(encoder_hidden_states) - encoder_attention_mask = torch.stack(encoder_attention_mask) - - with torch.no_grad(): - encoder_hidden_states_m = [] - for b, n in enumerate(ans_lengths): - encoder_hidden_states_m += [multimodal_embeds_m[b]] * n - encoder_hidden_states_m = torch.stack(encoder_hidden_states_m) - - return encoder_hidden_states, encoder_hidden_states_m, encoder_attention_mask - - def forward( - self, - image: Tensor, - question: Tensor, - question_atts: Tensor, - answers: Tensor, - answers_atts: Tensor, - ans_weights: Optional[Tensor] = None, - ans_lengths: Optional[List[int]] = None, - alpha: Optional[float] = 0.0, - k: Optional[int] = 128, - is_train: Optional[bool] = True, - ) -> Union[Tensor, Tuple[Tensor, Tensor]]: - if is_train: - return self._train_forward( - image, - question, - question_atts, - answers, - answers_atts, - ans_weights, - ans_lengths, - alpha, - ) - else: - return self._eval_forward( - image, - question, - question_atts, - answers, - answers_atts, - k, - ) - - -class ALBEFModelForRetrieval(nn.Module): - """ - ALBEF Model for Retrieval finetuning and inference. - In training mode, the forward step computes image-text contrastive loss and - image-text matching loss. - In evaluation mode, the forward step takes 3 types of input: - image: encode image input, project and normalize the embeddings. - text: encode text input, project and normalize the embeddings. - multimodal: create multimodal embeddings from image and text - embeddings, and compute image-text matching scores. - - Args: - model_with_similarity (ALBEFModelWithSimilarity): Instantiated ALBEFModelWithSimilarity. - itc_loss (ImageTextContrastiveLoss): Instantiated ImageTextContrastiveLoss. - hidden_size (int): Dimensionality of encoder outputs. - - Inputs: - image (Optional[Tensor] of shape (B, C, H, W)): Image features. - Required if is_train is True. - Required if input_type is "image" or "multimodal". - text (Optional[Tensor] of shape (B, L)): Text features. - Required if is_train is True. - Required if input_type is "text" or "multimodal". - text_atts (Tensor of shape (B, L)): Text attention mask. - Required if is_train is True. - Required if input_type is "text" or "multimodal". - idx (Tensor of shape (B)): Identifier for each image sample. - Required if is_train is True. - alpha (Optional[float]): The interpolation value between clm_loss and loss_distill. - Default is 0. - input_type (Optional[str]): "image", "text", or "multimodal" indicating the encoding type. - Required if is_train is False. - is_train (Optional[bool]): Whether the model is in training. - Default is True. - - Returns: - is_train is True: - Tensor: The sum of itc loss and itm loss. - is_train is False: - input_type is "image": - Tuple[Tensor, Tensor]: Image embeddings and projected image features. - input_type is "text": - Tuple[Tensor, Tensor]: Text embeddings and projected text features. - input_type is "multimodal" - Tensor: Scores for the retrieval task. 
- """ - - def __init__( - self, - model_with_similarity: ALBEFModelWithSimilarity, - itc_loss: ImageTextContrastiveLoss, - hidden_size: int, - ) -> None: - super().__init__() - self.model_with_similarity = model_with_similarity - self.itc_loss = itc_loss - self.itm_head = nn.Linear(hidden_size, 2) - - def _train_forward( - self, - image: Tensor, - text: Tensor, - text_atts: Tensor, - idx: Tensor, - alpha: float, - ) -> Tensor: - encoder_output = self.model_with_similarity(image, text, text_atts, idx) - - # compute image-text contrastive loss - similarity_outputs = encoder_output.similarity - similarity_targets = encoder_output.sim_targets - itc_loss = self.itc_loss( - similarity_outputs.sim_i2t, - similarity_outputs.sim_t2i, - similarity_outputs.sim_i2t_m, - similarity_outputs.sim_t2i_m, - similarity_targets, - alpha, - ) - - # compute image-text matching loss - pos_embeddings = encoder_output.multimodal_embeddings[:, 0, :] - neg_embeddings = encoder_output.multimodal_embeddings_neg[:, 0, :] - vl_embeddings = torch.cat([pos_embeddings, neg_embeddings], dim=0) - vl_output = self.itm_head(vl_embeddings) - itm_labels = torch.cat( - [ - torch.ones(pos_embeddings.size(0), dtype=torch.long), - torch.zeros(neg_embeddings.size(0), dtype=torch.long), - ], - dim=0, - ).to(vl_embeddings.device) - itm_loss = F.cross_entropy(vl_output, itm_labels) - - loss = itc_loss + itm_loss - return loss - - def _encode_image( - self, - image: Tensor, - ) -> Tuple[Tensor, Tensor]: - image_embed = self.model_with_similarity.albef_model.vision_encoder(image) - image_feat = F.normalize( - self.model_with_similarity.vision_proj(image_embed[:, 0, :]), dim=-1 - ) - return image_embed, image_feat - - def _encode_text( - self, - text: Tensor, - text_atts: Tensor, - ) -> Tuple[Tensor, Tensor]: - text_embed = self.model_with_similarity.albef_model.text_encoder( - text, text_atts - ).last_hidden_state - text_feat = F.normalize( - self.model_with_similarity.text_proj(text_embed[:, 0, :]), dim=-1 - ) - return text_embed, text_feat - - def _image_text_matching_score( - self, - image: Tensor, - text: Tensor, - text_atts: Tensor, - ) -> Tensor: - multimodal_embeds = self.model_with_similarity.albef_model.multimodal_encoder( - text, - text_atts, - image, - ) - score = self.itm_head(multimodal_embeds[:, 0, :])[:, 1] - return score - - def _eval_forward( - self, - input_type: str, - image: Optional[Tensor], - text: Optional[Tensor], - text_atts: Optional[Tensor], - ) -> Union[Tensor, Tuple[Tensor, Tensor]]: - if input_type == "image": - assert image is not None, "image input tensor cannot be None" - return self._encode_image(image) - - elif input_type == "text": - assert ( - text is not None and text_atts is not None - ), "text and text attention mask cannot be None" - return self._encode_text(text, text_atts) - - elif input_type == "multimodal": - assert ( - image is not None and text is not None and text_atts is not None - ), "image embeddings, text embeddings, and text attention mask cannot be None" - return self._image_text_matching_score(image, text, text_atts) - - else: - raise ValueError("input_type must be image, text, or multimodal") - - def forward( - self, - image: Optional[Tensor] = None, - text: Optional[Tensor] = None, - text_atts: Optional[Tensor] = None, - idx: Optional[Tensor] = None, - alpha: Optional[Tensor] = 0.0, - input_type: Optional[str] = None, - is_train: Optional[bool] = True, - ) -> Union[Tensor, Tuple[Tensor, Tensor]]: - if is_train: - return self._train_forward( - image, - text, - text_atts, - idx, - 
alpha, - ) - else: - return self._eval_forward( - input_type, - image, - text, - text_atts, - ) - - -def albef_model_for_vqa( - config: Dict[str, Any], pretrained: bool = False -) -> ALBEFModelForVQA: - vision_encoder = ALBEFVisionEncoder(**config["vision_encoder_args"]) - text_encoder = bert_text_encoder(**config["text_encoder_args"]) - question_multimodal_encoder = ALBEFMultimodalEncoder( - **config["multimodal_encoder_args"] - ) - text_embeddings = BERTTextEmbeddings(**config["text_embeddings_args"]) - answer_multimodal_encoder = ALBEFMultimodalEncoder( - **config["multimodal_encoder_args"] - ) - prediction_head = PredictionHead(**config["prediction_head_args"]) - albef_model = ALBEFModel(vision_encoder, text_encoder, question_multimodal_encoder) - decoder = ALBEFDecoder(text_embeddings, answer_multimodal_encoder, prediction_head) - loss = CausalLanguageModelingLoss() - model = ALBEFModelForVQA(albef_model, decoder, loss) - - if pretrained: - checkpoint = torch.hub.load_state_dict_from_url( - _ALBEF_PRETRAINED_URLS["vqa"], map_location="cpu" - ) - model.load_state_dict(checkpoint) - return model - - -def albef_model_for_retrieval( - config: Dict[str, Any], pretrained: bool = False -) -> ALBEFModelForRetrieval: - vision_encoder = ALBEFVisionEncoder(**config["vision_encoder_args"]) - text_encoder = bert_text_encoder(**config["text_encoder_args"]) - multimodal_encoder = ALBEFMultimodalEncoder(**config["multimodal_encoder_args"]) - vision_proj = nn.Linear(**config["projection_args"]) - text_proj = nn.Linear(**config["projection_args"]) - - albef_model = ALBEFModel(vision_encoder, text_encoder, multimodal_encoder) - albef_model_with_sim = ALBEFModelWithSimilarity( - albef_model, vision_proj, text_proj, **config["similarity_args"] - ) - itc_loss = ImageTextContrastiveLoss() - - model = ALBEFModelForRetrieval( - albef_model_with_sim, itc_loss, config["hidden_size"] - ) - - if pretrained: - checkpoint = torch.hub.load_state_dict_from_url( - _ALBEF_PRETRAINED_URLS["retrieval"], map_location="cpu" - ) - model.load_state_dict(checkpoint) - return model diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/_compat.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/_compat.py deleted file mode 100644 index ef3136f8d2a13c3d251e146d8d754e21c3ed1c38..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/_compat.py +++ /dev/null @@ -1,71 +0,0 @@ -import sys -import platform - - -__all__ = ['install', 'NullFinder', 'Protocol'] - - -try: - from typing import Protocol -except ImportError: # pragma: no cover - from ..typing_extensions import Protocol # type: ignore - - -def install(cls): - """ - Class decorator for installation on sys.meta_path. - - Adds the backport DistributionFinder to sys.meta_path and - attempts to disable the finder functionality of the stdlib - DistributionFinder. - """ - sys.meta_path.append(cls()) - disable_stdlib_finder() - return cls - - -def disable_stdlib_finder(): - """ - Give the backport primacy for discovering path-based distributions - by monkey-patching the stdlib O_O. - - See #91 for more background for rationale on this sketchy - behavior. 
- """ - - def matches(finder): - return getattr( - finder, '__module__', None - ) == '_frozen_importlib_external' and hasattr(finder, 'find_distributions') - - for finder in filter(matches, sys.meta_path): # pragma: nocover - del finder.find_distributions - - -class NullFinder: - """ - A "Finder" (aka "MetaClassFinder") that never finds any modules, - but may find distributions. - """ - - @staticmethod - def find_spec(*args, **kwargs): - return None - - # In Python 2, the import system requires finders - # to have a find_module() method, but this usage - # is deprecated in Python 3 in favor of find_spec(). - # For the purposes of this finder (i.e. being present - # on sys.meta_path but having no other import - # system functionality), the two methods are identical. - find_module = find_spec - - -def pypy_partial(val): - """ - Adjust for variable stacklevel on partial under PyPy. - - Workaround for #327. - """ - is_pypy = platform.python_implementation() == 'PyPy' - return val + is_pypy diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/upload.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/upload.py deleted file mode 100644 index ec7f81e22772511d668e5ab92f625db33259e803..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/upload.py +++ /dev/null @@ -1,17 +0,0 @@ -from distutils import log -from distutils.command import upload as orig - -from setuptools.errors import RemovedCommandError - - -class upload(orig.upload): - """Formerly used to upload packages to PyPI.""" - - def run(self): - msg = ( - "The upload command has been removed, use twine to upload " - + "instead (https://pypi.org/p/twine)" - ) - - self.announce("ERROR: " + msg, log.ERROR) - raise RemovedCommandError(msg) diff --git a/spaces/Realcat/image-matching-webui/hloc/pipelines/RobotCar/README.md b/spaces/Realcat/image-matching-webui/hloc/pipelines/RobotCar/README.md deleted file mode 100644 index 9881d153d4930cf32b5481ecd4fa2c900fa58c8c..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/hloc/pipelines/RobotCar/README.md +++ /dev/null @@ -1,16 +0,0 @@ -# RobotCar Seasons dataset - -## Installation - -Download the dataset from [visuallocalization.net](https://www.visuallocalization.net): -```bash -export dataset=datasets/robotcar -wget -r -np -nH -R "index.html*" --cut-dirs=4 https://data.ciirc.cvut.cz/public/projects/2020VisualLocalization/RobotCar-Seasons/ -P $dataset -for condition in $dataset/images/*.zip; do unzip condition -d $dataset/images/; done -``` - -## Pipeline - -```bash -python3 -m hloc.pipelines.RobotCar.pipeline -``` diff --git a/spaces/Realcat/image-matching-webui/third_party/GlueStick/gluestick/models/wireframe.py b/spaces/Realcat/image-matching-webui/third_party/GlueStick/gluestick/models/wireframe.py deleted file mode 100644 index 9da539387c6da8a5a8df6c677af69803ccdb54b4..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/GlueStick/gluestick/models/wireframe.py +++ /dev/null @@ -1,349 +0,0 @@ -import numpy as np -import torch -from pytlsd import lsd -from sklearn.cluster import DBSCAN - -from .base_model import BaseModel -from .superpoint import SuperPoint, sample_descriptors -from ..geometry import warp_lines_torch - - -def lines_to_wireframe(lines, line_scores, all_descs, conf): - """Given a set of lines, their score and dense descriptors, - merge close-by endpoints and compute a 
wireframe defined by - its junctions and connectivity. - Returns: - junctions: list of [num_junc, 2] tensors listing all wireframe junctions - junc_scores: list of [num_junc] tensors with the junction score - junc_descs: list of [dim, num_junc] tensors with the junction descriptors - connectivity: list of [num_junc, num_junc] bool arrays with True when 2 junctions are connected - new_lines: the new set of [b_size, num_lines, 2, 2] lines - lines_junc_idx: a [b_size, num_lines, 2] tensor with the indices of the junctions of each endpoint - num_true_junctions: a list of the number of valid junctions for each image in the batch, - i.e. before filling with random ones - """ - b_size, _, _, _ = all_descs.shape - device = lines.device - endpoints = lines.reshape(b_size, -1, 2) - - ( - junctions, - junc_scores, - junc_descs, - connectivity, - new_lines, - lines_junc_idx, - num_true_junctions, - ) = ([], [], [], [], [], [], []) - for bs in range(b_size): - # Cluster the junctions that are close-by - db = DBSCAN(eps=conf.nms_radius, min_samples=1).fit(endpoints[bs].cpu().numpy()) - clusters = db.labels_ - n_clusters = len(set(clusters)) - num_true_junctions.append(n_clusters) - - # Compute the average junction and score for each cluster - clusters = torch.tensor(clusters, dtype=torch.long, device=device) - new_junc = torch.zeros(n_clusters, 2, dtype=torch.float, device=device) - new_junc.scatter_reduce_( - 0, - clusters[:, None].repeat(1, 2), - endpoints[bs], - reduce="mean", - include_self=False, - ) - junctions.append(new_junc) - new_scores = torch.zeros(n_clusters, dtype=torch.float, device=device) - new_scores.scatter_reduce_( - 0, - clusters, - torch.repeat_interleave(line_scores[bs], 2), - reduce="mean", - include_self=False, - ) - junc_scores.append(new_scores) - - # Compute the new lines - new_lines.append(junctions[-1][clusters].reshape(-1, 2, 2)) - lines_junc_idx.append(clusters.reshape(-1, 2)) - - # Compute the junction connectivity - junc_connect = torch.eye(n_clusters, dtype=torch.bool, device=device) - pairs = clusters.reshape(-1, 2) # these pairs are connected by a line - junc_connect[pairs[:, 0], pairs[:, 1]] = True - junc_connect[pairs[:, 1], pairs[:, 0]] = True - connectivity.append(junc_connect) - - # Interpolate the new junction descriptors - junc_descs.append( - sample_descriptors(junctions[-1][None], all_descs[bs : (bs + 1)], 8)[0] - ) - - new_lines = torch.stack(new_lines, dim=0) - lines_junc_idx = torch.stack(lines_junc_idx, dim=0) - return ( - junctions, - junc_scores, - junc_descs, - connectivity, - new_lines, - lines_junc_idx, - num_true_junctions, - ) - - -class SPWireframeDescriptor(BaseModel): - default_conf = { - "sp_params": { - "has_detector": True, - "has_descriptor": True, - "descriptor_dim": 256, - "trainable": False, - # Inference - "return_all": True, - "sparse_outputs": True, - "nms_radius": 4, - "detection_threshold": 0.005, - "max_num_keypoints": 1000, - "force_num_keypoints": True, - "remove_borders": 4, - }, - "wireframe_params": { - "merge_points": True, - "merge_line_endpoints": True, - "nms_radius": 3, - "max_n_junctions": 500, - }, - "max_n_lines": 250, - "min_length": 15, - } - required_data_keys = ["image"] - - def _init(self, conf): - self.conf = conf - self.sp = SuperPoint(conf.sp_params) - - def detect_lsd_lines(self, x, max_n_lines=None): - if max_n_lines is None: - max_n_lines = self.conf.max_n_lines - lines, scores, valid_lines = [], [], [] - for b in range(len(x)): - # For each image on batch - img = (x[b].squeeze().cpu().numpy() * 
255).astype(np.uint8) - if max_n_lines is None: - b_segs = lsd(img) - else: - for s in [0.3, 0.4, 0.5, 0.7, 0.8, 1.0]: - b_segs = lsd(img, scale=s) - if len(b_segs) >= max_n_lines: - break - - segs_length = np.linalg.norm(b_segs[:, 2:4] - b_segs[:, 0:2], axis=1) - # Remove short lines - b_segs = b_segs[segs_length >= self.conf.min_length] - segs_length = segs_length[segs_length >= self.conf.min_length] - b_scores = b_segs[:, -1] * np.sqrt(segs_length) - # Take the most relevant segments with - indices = np.argsort(-b_scores) - if max_n_lines is not None: - indices = indices[:max_n_lines] - lines.append(torch.from_numpy(b_segs[indices, :4].reshape(-1, 2, 2))) - scores.append(torch.from_numpy(b_scores[indices])) - valid_lines.append(torch.ones_like(scores[-1], dtype=torch.bool)) - - lines = torch.stack(lines).to(x) - scores = torch.stack(scores).to(x) - valid_lines = torch.stack(valid_lines).to(x.device) - return lines, scores, valid_lines - - def _forward(self, data): - b_size, _, h, w = data["image"].shape - device = data["image"].device - - if not self.conf.sp_params.force_num_keypoints: - assert b_size == 1, "Only batch size of 1 accepted for non padded inputs" - - # Line detection - if "lines" not in data or "line_scores" not in data: - if "original_img" in data: - # Detect more lines, because when projecting them to the image most of them will be discarded - lines, line_scores, valid_lines = self.detect_lsd_lines( - data["original_img"], self.conf.max_n_lines * 3 - ) - # Apply the same transformation that is applied in homography_adaptation - lines, valid_lines2 = warp_lines_torch( - lines, data["H"], False, data["image"].shape[-2:] - ) - valid_lines = valid_lines & valid_lines2 - lines[~valid_lines] = -1 - line_scores[~valid_lines] = 0 - # Re-sort the line segments to pick the ones that are inside the image and have bigger score - sorted_scores, sorting_indices = torch.sort( - line_scores, dim=-1, descending=True - ) - line_scores = sorted_scores[:, : self.conf.max_n_lines] - sorting_indices = sorting_indices[:, : self.conf.max_n_lines] - lines = torch.take_along_dim(lines, sorting_indices[..., None, None], 1) - valid_lines = torch.take_along_dim(valid_lines, sorting_indices, 1) - else: - lines, line_scores, valid_lines = self.detect_lsd_lines(data["image"]) - - else: - lines, line_scores, valid_lines = ( - data["lines"], - data["line_scores"], - data["valid_lines"], - ) - if line_scores.shape[-1] != 0: - line_scores /= ( - line_scores.new_tensor(1e-8) + line_scores.max(dim=1).values[:, None] - ) - - # SuperPoint prediction - pred = self.sp(data) - - # Remove keypoints that are too close to line endpoints - if self.conf.wireframe_params.merge_points: - kp = pred["keypoints"] - line_endpts = lines.reshape(b_size, -1, 2) - dist_pt_lines = torch.norm(kp[:, :, None] - line_endpts[:, None], dim=-1) - # For each keypoint, mark it as valid or to remove - pts_to_remove = torch.any( - dist_pt_lines < self.conf.sp_params.nms_radius, dim=2 - ) - # Simply remove them (we assume batch_size = 1 here) - assert len(kp) == 1 - pred["keypoints"] = pred["keypoints"][0][~pts_to_remove[0]][None] - pred["keypoint_scores"] = pred["keypoint_scores"][0][~pts_to_remove[0]][ - None - ] - pred["descriptors"] = pred["descriptors"][0].T[~pts_to_remove[0]].T[None] - - # Connect the lines together to form a wireframe - orig_lines = lines.clone() - if self.conf.wireframe_params.merge_line_endpoints and len(lines[0]) > 0: - # Merge first close-by endpoints to connect lines - ( - line_points, - line_pts_scores, - 
line_descs, - line_association, - lines, - lines_junc_idx, - num_true_junctions, - ) = lines_to_wireframe( - lines, - line_scores, - pred["all_descriptors"], - conf=self.conf.wireframe_params, - ) - - # Add the keypoints to the junctions and fill the rest with random keypoints - (all_points, all_scores, all_descs, pl_associativity) = [], [], [], [] - for bs in range(b_size): - all_points.append( - torch.cat([line_points[bs], pred["keypoints"][bs]], dim=0) - ) - all_scores.append( - torch.cat([line_pts_scores[bs], pred["keypoint_scores"][bs]], dim=0) - ) - all_descs.append( - torch.cat([line_descs[bs], pred["descriptors"][bs]], dim=1) - ) - - associativity = torch.eye( - len(all_points[-1]), dtype=torch.bool, device=device - ) - associativity[ - : num_true_junctions[bs], : num_true_junctions[bs] - ] = line_association[bs][ - : num_true_junctions[bs], : num_true_junctions[bs] - ] - pl_associativity.append(associativity) - - all_points = torch.stack(all_points, dim=0) - all_scores = torch.stack(all_scores, dim=0) - all_descs = torch.stack(all_descs, dim=0) - pl_associativity = torch.stack(pl_associativity, dim=0) - else: - # Lines are independent - all_points = torch.cat( - [lines.reshape(b_size, -1, 2), pred["keypoints"]], dim=1 - ) - n_pts = all_points.shape[1] - num_lines = lines.shape[1] - num_true_junctions = [num_lines * 2] * b_size - all_scores = torch.cat( - [ - torch.repeat_interleave(line_scores, 2, dim=1), - pred["keypoint_scores"], - ], - dim=1, - ) - pred["line_descriptors"] = self.endpoints_pooling( - lines, pred["all_descriptors"], (h, w) - ) - all_descs = torch.cat( - [ - pred["line_descriptors"].reshape( - b_size, self.conf.sp_params.descriptor_dim, -1 - ), - pred["descriptors"], - ], - dim=2, - ) - pl_associativity = torch.eye(n_pts, dtype=torch.bool, device=device)[ - None - ].repeat(b_size, 1, 1) - lines_junc_idx = ( - torch.arange(num_lines * 2, device=device) - .reshape(1, -1, 2) - .repeat(b_size, 1, 1) - ) - - del pred["all_descriptors"] # Remove dense descriptors to save memory - torch.cuda.empty_cache() - - return { - "keypoints": all_points, - "keypoint_scores": all_scores, - "descriptors": all_descs, - "pl_associativity": pl_associativity, - "num_junctions": torch.tensor(num_true_junctions), - "lines": lines, - "orig_lines": orig_lines, - "lines_junc_idx": lines_junc_idx, - "line_scores": line_scores, - "valid_lines": valid_lines, - } - - @staticmethod - def endpoints_pooling(segs, all_descriptors, img_shape): - assert segs.ndim == 4 and segs.shape[-2:] == (2, 2) - filter_shape = all_descriptors.shape[-2:] - scale_x = filter_shape[1] / img_shape[1] - scale_y = filter_shape[0] / img_shape[0] - - scaled_segs = torch.round( - segs * torch.tensor([scale_x, scale_y]).to(segs) - ).long() - scaled_segs[..., 0] = torch.clip(scaled_segs[..., 0], 0, filter_shape[1] - 1) - scaled_segs[..., 1] = torch.clip(scaled_segs[..., 1], 0, filter_shape[0] - 1) - line_descriptors = [ - all_descriptors[ - None, - b, - ..., - torch.squeeze(b_segs[..., 1]), - torch.squeeze(b_segs[..., 0]), - ] - for b, b_segs in enumerate(scaled_segs) - ] - line_descriptors = torch.cat(line_descriptors) - return line_descriptors # Shape (1, 256, 308, 2) - - def loss(self, pred, data): - raise NotImplementedError - - def metrics(self, pred, data): - return {} diff --git a/spaces/Realcat/image-matching-webui/third_party/Roma/roma/checkpointing/checkpoint.py b/spaces/Realcat/image-matching-webui/third_party/Roma/roma/checkpointing/checkpoint.py deleted file mode 100644 index 
6372d89fe86c00c7acedf015886717bfeca7bb1f..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/Roma/roma/checkpointing/checkpoint.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import os
-import torch
-from torch.nn.parallel.data_parallel import DataParallel
-from torch.nn.parallel.distributed import DistributedDataParallel
-from loguru import logger
-import gc
-
-import roma
-
-
-class CheckPoint:
-    def __init__(self, dir=None, name="tmp"):
-        self.name = name
-        self.dir = dir
-        os.makedirs(self.dir, exist_ok=True)
-
-    def save(
-        self,
-        model,
-        optimizer,
-        lr_scheduler,
-        n,
-    ):
-        if roma.RANK == 0:
-            assert model is not None
-            if isinstance(model, (DataParallel, DistributedDataParallel)):
-                model = model.module
-            states = {
-                "model": model.state_dict(),
-                "n": n,
-                "optimizer": optimizer.state_dict(),
-                "lr_scheduler": lr_scheduler.state_dict(),
-            }
-            torch.save(states, self.dir + self.name + f"_latest.pth")
-            logger.info(f"Saved states {list(states.keys())}, at step {n}")
-
-    def load(
-        self,
-        model,
-        optimizer,
-        lr_scheduler,
-        n,
-    ):
-        if os.path.exists(self.dir + self.name + f"_latest.pth") and roma.RANK == 0:
-            states = torch.load(self.dir + self.name + f"_latest.pth")
-            if "model" in states:
-                model.load_state_dict(states["model"])
-            if "n" in states:
-                n = states["n"] if states["n"] else n
-            if "optimizer" in states:
-                try:
-                    optimizer.load_state_dict(states["optimizer"])
-                except Exception as e:
-                    print(f"Failed to load states for optimizer, with error {e}")
-            if "lr_scheduler" in states:
-                lr_scheduler.load_state_dict(states["lr_scheduler"])
-            print(f"Loaded states {list(states.keys())}, at step {n}")
-            del states
-            gc.collect()
-            torch.cuda.empty_cache()
-        return model, optimizer, lr_scheduler, n
diff --git a/spaces/Ricecake123/RVC-demo/docs/faiss_tips_en.md b/spaces/Ricecake123/RVC-demo/docs/faiss_tips_en.md
deleted file mode 100644
index aafad6ed67f70ee1ea3a2a21ee0b5066ab1dcfa8..0000000000000000000000000000000000000000
--- a/spaces/Ricecake123/RVC-demo/docs/faiss_tips_en.md
+++ /dev/null
@@ -1,102 +0,0 @@
-faiss tuning TIPS
-==================
-# about faiss
-faiss is a library of neighborhood searches for dense vectors, developed by facebook research, which efficiently implements many approximate neighborhood search methods.
-Approximate Neighbor Search finds similar vectors quickly while sacrificing some accuracy.
-
-## faiss in RVC
-In RVC, for the embedding of features converted by HuBERT, we search for embeddings similar to the embedding generated from the training data and mix them to achieve a conversion that is closer to the original speech. However, since this search takes time if performed naively, high-speed conversion is realized by using approximate neighborhood search.
-
-# implementation overview
-In '/logs/your-experiment/3_feature256' where the model is located, features extracted by HuBERT from each voice data are located.
-From here we read the npy files in order sorted by filename and concatenate the vectors to create big_npy. (This vector has shape [N, 256].)
-After saving big_npy as /logs/your-experiment/total_fea.npy, train it with faiss.
-
-In this article, I will explain the meaning of these parameters.
-
-# Explanation of the method
-## index factory
-An index factory is a unique faiss notation that expresses a pipeline that connects multiple approximate neighborhood search methods as a string.
-This allows you to try various approximate neighborhood search methods simply by changing the index factory string.
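-
-For illustration, here is a minimal sketch (not taken from RVC itself; the random toy data and the particular factory strings are assumptions chosen for the demo) of how the same vectors can be searched with different methods just by swapping the factory string:
-
-```python
-import faiss
-import numpy as np
-
-# Toy stand-in for the real feature matrix: 10k vectors of dimension 256.
-xb = np.random.rand(10000, 256).astype("float32")
-
-# Same data, three different search strategies, selected only by the string:
-for factory_string in ["Flat", "IVF256,Flat", "HNSW32,Flat"]:
-    index = faiss.index_factory(256, factory_string)
-    if not index.is_trained:
-        index.train(xb)  # IVF indexes need a k-means training pass first
-    index.add(xb)
-    distances, ids = index.search(xb[:1], 4)  # 4 nearest neighbors of the first vector
-```
-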
-In RVC it is used like this:
-
-```python
-index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
-```
-Among the arguments of index_factory, the first is the number of dimensions of the vector, the second is the index factory string, and the third is the distance metric to use.
-
-For more detailed notation, see
-https://github.com/facebookresearch/faiss/wiki/The-index-factory
-
-## index for distance
-There are two typical metrics used to measure the similarity of embeddings:
-
-- Euclidean distance (METRIC_L2)
-- inner product (METRIC_INNER_PRODUCT)
-
-Euclidean distance takes the squared difference in each dimension, sums the differences over all dimensions, and then takes the square root. This is the same as the distance in 2D and 3D that we use on a daily basis.
-The inner product is not used as a similarity measure as-is; generally, the cosine similarity, i.e. the inner product after L2 normalization, is used instead.
-
-Which is better depends on the case, but cosine similarity is often used for embeddings obtained with word2vec and for similar-image retrieval models trained with ArcFace. If you want to L2-normalize a vector X with numpy, the following code works, with eps small enough to avoid division by zero.
-
-```python
-X_normed = X / np.maximum(eps, np.linalg.norm(X, ord=2, axis=-1, keepdims=True))
-```
-
-Also, for the index factory, you can change the distance metric used for the calculation by choosing the value to pass as the third argument.
-
-```python
-index = faiss.index_factory(dimension, text, faiss.METRIC_INNER_PRODUCT)
-```
-
-## IVF
-IVF (inverted file indexes) is an algorithm similar to the inverted index used in full-text search.
-During training, the search targets are clustered with k-means, and Voronoi partitioning is performed using the cluster centers. Each data point is assigned to one cluster, so we create a dictionary that looks up the data points from the clusters.
-
-For example, if clusters are assigned as follows:
-
-|index|Cluster|
-|-----|-------|
-|1|A|
-|2|B|
-|3|A|
-|4|C|
-|5|B|
-
-The resulting inverted index looks like this:
-
-|cluster|index|
-|-------|-----|
-|A|1, 3|
-|B|2, 5|
-|C|4|
-
-When searching, we first select n_probe clusters among all clusters, and then calculate the distances only for the data points belonging to those clusters.
-
-# recommended parameters
-There are official guidelines on how to choose an index, so I will explain accordingly.
-https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index
-
-For datasets below 1M vectors, 4bit-PQ is the most efficient method available in faiss as of April 2023.
-Combining this with IVF, narrowing down the candidates with 4bit-PQ, and finally recalculating the distances with an accurate index can be expressed with the following index factory string.
-
-```python
-index = faiss.index_factory(256, "IVF1024,PQ128x4fs,RFlat")
-```
-
-## Recommended parameters for IVF
-Consider the case of too many IVF clusters. If, for example, coarse quantization by IVF uses as many clusters as there are data points, this degenerates into a naive exhaustive search and is inefficient.
-For 1M vectors or fewer, IVF values between 4*sqrt(N) and 16*sqrt(N) are recommended, where N is the number of data points.
-
-Since the calculation time increases in proportion to n_probe, trade it off against the accuracy you need. Personally, I don't think RVC needs that much accuracy, so n_probe = 1 is fine.
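-
-As a minimal sketch of this guideline (the corpus size N and the random stand-in for big_npy are assumptions; only the 4*sqrt(N) formula and the n_probe setting come from the text above):
-
-```python
-import math
-
-import faiss
-import numpy as np
-
-N, dim = 100000, 256
-big_npy = np.random.rand(N, dim).astype("float32")  # stand-in for the real features
-
-# Pick the number of IVF clusters at the lower end of the 4*sqrt(N)..16*sqrt(N) band.
-n_ivf = int(4 * math.sqrt(N))
-
-index = faiss.index_factory(dim, "IVF%s,Flat" % n_ivf)
-index.train(big_npy)
-index.add(big_npy)
-
-index_ivf = faiss.extract_index_ivf(index)
-index_ivf.nprobe = 1  # as suggested above; raise it to trade speed for accuracy
-distances, ids = index.search(big_npy[:1], 8)
-```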
-
-## FastScan
-FastScan is a method that enables high-speed approximation of distances with product quantization by performing the computations in registers.
-Product quantization performs clustering independently for each group of d dimensions (usually d = 2) during training, calculates the distances between clusters in advance, and creates a lookup table. At prediction time, the distance for each dimension group can be computed in O(1) by looking it up in the table.
-So the number you specify after PQ usually specifies half the dimension of the vector.
-
-For a more detailed description of FastScan, please refer to the official documentation.
-https://github.com/facebookresearch/faiss/wiki/Fast-accumulation-of-PQ-and-AQ-codes-(FastScan)
-
-## RFlat
-RFlat is an instruction to recalculate the rough distances computed by FastScan with the exact distance specified by the third argument of the index factory.
-When retrieving the k nearest neighbors, k*k_factor points are recalculated.
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/fsaf.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/fsaf.py
deleted file mode 100644
index 9f10fa1ae10f31e6cb5de65505b14a4fc97dd022..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/fsaf.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class FSAF(SingleStageDetector):
-    """Implementation of `FSAF <https://arxiv.org/abs/1903.00621>`_"""
-
-    def __init__(self,
-                 backbone,
-                 neck,
-                 bbox_head,
-                 train_cfg=None,
-                 test_cfg=None,
-                 pretrained=None):
-        super(FSAF, self).__init__(backbone, neck, bbox_head, train_cfg,
-                                   test_cfg, pretrained)
diff --git a/spaces/Rongjiehuang/ProDiff/modules/parallel_wavegan/__init__.py b/spaces/Rongjiehuang/ProDiff/modules/parallel_wavegan/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Rongjiehuang/ProDiff/modules/parallel_wavegan/layers/residual_stack.py b/spaces/Rongjiehuang/ProDiff/modules/parallel_wavegan/layers/residual_stack.py
deleted file mode 100644
index 6e07c8803ad348dd923f6b7c0f7aff14aab9cf78..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/ProDiff/modules/parallel_wavegan/layers/residual_stack.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright 2020 Tomoki Hayashi
-#  MIT License (https://opensource.org/licenses/MIT)
-
-"""Residual stack module in MelGAN."""
-
-import torch
-
-from . import CausalConv1d
-
-
-class ResidualStack(torch.nn.Module):
-    """Residual stack module introduced in MelGAN."""
-
-    def __init__(self,
-                 kernel_size=3,
-                 channels=32,
-                 dilation=1,
-                 bias=True,
-                 nonlinear_activation="LeakyReLU",
-                 nonlinear_activation_params={"negative_slope": 0.2},
-                 pad="ReflectionPad1d",
-                 pad_params={},
-                 use_causal_conv=False,
-                 ):
-        """Initialize ResidualStack module.
-
-        Args:
-            kernel_size (int): Kernel size of dilation convolution layer.
-            channels (int): Number of channels of convolution layers.
-            dilation (int): Dilation factor.
-            bias (bool): Whether to add bias parameter in convolution layers.
-            nonlinear_activation (str): Activation function module name.
-            nonlinear_activation_params (dict): Hyperparameters for activation function.
-            pad (str): Padding function module name before dilated convolution layer.
-            pad_params (dict): Hyperparameters for padding function.
- use_causal_conv (bool): Whether to use causal convolution. - - """ - super(ResidualStack, self).__init__() - - # defile residual stack part - if not use_causal_conv: - assert (kernel_size - 1) % 2 == 0, "Not support even number kernel size." - self.stack = torch.nn.Sequential( - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - getattr(torch.nn, pad)((kernel_size - 1) // 2 * dilation, **pad_params), - torch.nn.Conv1d(channels, channels, kernel_size, dilation=dilation, bias=bias), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - torch.nn.Conv1d(channels, channels, 1, bias=bias), - ) - else: - self.stack = torch.nn.Sequential( - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - CausalConv1d(channels, channels, kernel_size, dilation=dilation, - bias=bias, pad=pad, pad_params=pad_params), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - torch.nn.Conv1d(channels, channels, 1, bias=bias), - ) - - # defile extra layer for skip connection - self.skip_layer = torch.nn.Conv1d(channels, channels, 1, bias=bias) - - def forward(self, c): - """Calculate forward propagation. - - Args: - c (Tensor): Input tensor (B, channels, T). - - Returns: - Tensor: Output tensor (B, chennels, T). - - """ - return self.stack(c) + self.skip_layer(c) diff --git a/spaces/RugNlpFlashcards/Speech_Language_Processing_Jurafsky_Martin/query.py b/spaces/RugNlpFlashcards/Speech_Language_Processing_Jurafsky_Martin/query.py deleted file mode 100644 index 388d3c84b05f2c12b3ca64c82e54601c90455e07..0000000000000000000000000000000000000000 --- a/spaces/RugNlpFlashcards/Speech_Language_Processing_Jurafsky_Martin/query.py +++ /dev/null @@ -1,149 +0,0 @@ -import argparse -import torch -import transformers - -from typing import Dict, List, Literal, Tuple, cast -from datasets import load_dataset, DatasetDict -from dotenv import load_dotenv - -from src.readers.base_reader import Reader -from src.readers.longformer_reader import LongformerReader -from src.readers.dpr_reader import DprReader -from src.retrievers.base_retriever import Retriever -from src.retrievers.es_retriever import ESRetriever -from src.retrievers.faiss_retriever import ( - FaissRetriever, - FaissRetrieverOptions -) -from src.utils.preprocessing import context_to_reader_input -from src.utils.log import logger - - -# Setup environment -load_dotenv() -transformers.logging.set_verbosity_error() - - -def get_retriever(paragraphs: DatasetDict, - r: Literal["es", "faiss"], - lm: Literal["dpr", "longformer"]) -> Retriever: - match (r, lm): - case "es", _: - return ESRetriever() - case "faiss", "dpr": - options = FaissRetrieverOptions.dpr("./src/models/dpr.faiss") - return FaissRetriever(paragraphs, options) - case "faiss", "longformer": - options = FaissRetrieverOptions.longformer( - "./src/models/longformer.faiss") - return FaissRetriever(paragraphs, options) - case _: - raise ValueError("Retriever options not recognized") - - -def get_reader(lm: Literal["dpr", "longformer"]) -> Reader: - match lm: - case "dpr": - return DprReader() - case "longformer": - return LongformerReader() - case _: - raise ValueError("Language model not recognized") - - -def print_name(contexts: dict, section: str, id: int): - name = contexts[section][id] - if name != 'nan': - print(f" {section}: {name}") - - -def get_retrieval_span_scores(answers: List[tuple]): - # calculate answer scores - sm = torch.nn.Softmax(dim=0) - d_scores = sm(torch.Tensor( - [pred.relevance_score for pred in answers])) - 
s_scores = sm(torch.Tensor( - [pred.span_score for pred in answers])) - - return d_scores, s_scores - - -def print_answers(answers: List[tuple], scores: List[float], contexts: dict): - d_scores, s_scores = get_retrieval_span_scores(answers) - - for pos, answer in enumerate(answers): - print(f"{pos + 1:>4}. {answer.text}") - print(f" {'-' * len(answer.text)}") - print_name(contexts, 'chapter', answer.doc_id) - print_name(contexts, 'section', answer.doc_id) - print_name(contexts, 'subsection', answer.doc_id) - print(f" retrieval score: {scores[answer.doc_id]:6.02f}%") - print(f" document score: {d_scores[pos] * 100:6.02f}%") - print(f" span score: {s_scores[pos] * 100:6.02f}%") - print() - - -def probe(query: str, - retriever: Retriever, - reader: Reader, - num_answers: int = 5) \ - -> Tuple[List[tuple], List[float], Dict[str, List[str]]]: - scores, contexts = retriever.retrieve(query) - reader_input = context_to_reader_input(contexts) - answers = reader.read(query, reader_input, num_answers) - - return answers, scores, contexts - - -def default_probe(query: str): - # default probe is a probe that prints 5 answers with faiss - paragraphs = cast(DatasetDict, load_dataset( - "GroNLP/ik-nlp-22_slp", "paragraphs")) - retriever = get_retriever(paragraphs, "faiss", "dpr") - reader = DprReader() - - return probe(query, retriever, reader) - - -def main(args: argparse.Namespace): - # Initialize dataset - paragraphs = cast(DatasetDict, load_dataset( - "GroNLP/ik-nlp-22_slp", "paragraphs")) - - # Retrieve - retriever = get_retriever(paragraphs, args.retriever, args.lm) - reader = get_reader(args.lm) - answers, scores, contexts = probe( - args.query, retriever, reader, args.top) - - # Print output - print("Question: " + args.query) - print("Answer(s):") - if args.lm == "dpr": - print_answers(answers, scores, contexts) - else: - answers = filter(lambda a: len(a[0].strip()) > 0, answers) - for pos, answer in enumerate(answers, start=1): - print(f" - {answer[0].strip()}") - - -if __name__ == "__main__": - # Set up CLI arguments - parser = argparse.ArgumentParser( - formatter_class=argparse.MetavarTypeHelpFormatter - ) - parser.add_argument( - "query", type=str, help="The question to feed to the QA system") - parser.add_argument( - "--top", "-t", type=int, default=1, - help="The number of answers to retrieve") - parser.add_argument( - "--retriever", "-r", type=str.lower, choices=["faiss", "es"], - default="faiss", help="The retrieval method to use") - parser.add_argument( - "--lm", "-l", type=str.lower, - choices=["dpr", "longformer"], default="dpr", - help="The language model to use for the FAISS retriever") - - args = parser.parse_args() - main(args) diff --git a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/monotonic_align/setup.py b/spaces/SQSora/VITS-Umamusume-voice-synthesizer/monotonic_align/setup.py deleted file mode 100644 index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000 --- a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/monotonic_align/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize -import numpy - -setup( - name = 'monotonic_align', - ext_modules = cythonize("core.pyx"), - include_dirs=[numpy.get_include()] -) diff --git a/spaces/SUPERSHANKY/ControlNet_Colab/gradio_depth2image.py b/spaces/SUPERSHANKY/ControlNet_Colab/gradio_depth2image.py deleted file mode 100644 index 6f367bd126dc52f251cfcd0a7a7d6bbf73859b05..0000000000000000000000000000000000000000 --- 
a/spaces/SUPERSHANKY/ControlNet_Colab/gradio_depth2image.py +++ /dev/null @@ -1,68 +0,0 @@ -# This file is adapted from https://github.com/lllyasviel/ControlNet/blob/f4748e3630d8141d7765e2bd9b1e348f47847707/gradio_depth2image.py -# The original license file is LICENSE.ControlNet in this repo. -import gradio as gr - - -def create_demo(process, max_images=12): - with gr.Blocks() as demo: - with gr.Row(): - gr.Markdown('## Control Stable Diffusion with Depth Maps') - with gr.Row(): - with gr.Column(): - input_image = gr.Image(source='upload', type='numpy') - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button(label='Run') - with gr.Accordion('Advanced options', open=False): - num_samples = gr.Slider(label='Images', - minimum=1, - maximum=max_images, - value=1, - step=1) - image_resolution = gr.Slider(label='Image Resolution', - minimum=256, - maximum=768, - value=512, - step=256) - detect_resolution = gr.Slider(label='Depth Resolution', - minimum=128, - maximum=1024, - value=384, - step=1) - ddim_steps = gr.Slider(label='Steps', - minimum=1, - maximum=100, - value=20, - step=1) - scale = gr.Slider(label='Guidance Scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=-1, - maximum=2147483647, - step=1, - randomize=True) - eta = gr.Number(label='eta (DDIM)', value=0.0) - a_prompt = gr.Textbox( - label='Added Prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative Prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result_gallery = gr.Gallery(label='Output', - show_label=False, - elem_id='gallery').style( - grid=2, height='auto') - ips = [ - input_image, prompt, a_prompt, n_prompt, num_samples, - image_resolution, detect_resolution, ddim_steps, scale, seed, eta - ] - run_button.click(fn=process, - inputs=ips, - outputs=[result_gallery], - api_name='depth') - return demo diff --git a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/text/japanese.py b/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/text/japanese.py deleted file mode 100644 index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000 --- a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/text/japanese.py +++ /dev/null @@ -1,153 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (romaji, ipa2) pairs for marks: -_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('u', 'ɯ'), - ('ʧ', 'tʃ'), - ('j', 'dʑ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - 
('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - for regex, replacement in _real_sokuon: - text = re.sub(regex, replacement, text) - return text - - -def get_real_hatsuon(text): - for regex, replacement in _real_hatsuon: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = re.sub( - r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa2(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa3(text): - text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace( - 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a') - text = re.sub( - r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text) - return text diff --git a/spaces/Salesforce/EDICT/my_diffusers/commands/diffusers_cli.py b/spaces/Salesforce/EDICT/my_diffusers/commands/diffusers_cli.py deleted file mode 100644 index 30084e55ba4eeec79c87a99eae3e60a6233dc556..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_diffusers/commands/diffusers_cli.py +++ /dev/null @@ -1,41 +0,0 @@ -#!/usr/bin/env python -# Copyright 2022 The 
HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from argparse import ArgumentParser - -from .env import EnvironmentCommand - - -def main(): - parser = ArgumentParser("Diffusers CLI tool", usage="diffusers-cli []") - commands_parser = parser.add_subparsers(help="diffusers-cli command helpers") - - # Register commands - EnvironmentCommand.register_subcommand(commands_parser) - - # Let's go - args = parser.parse_args() - - if not hasattr(args, "func"): - parser.print_help() - exit(1) - - # Run - service = args.func(args) - service.run() - - -if __name__ == "__main__": - main() diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/dependency_versions_table.py b/spaces/Salesforce/EDICT/my_half_diffusers/dependency_versions_table.py deleted file mode 100644 index 74c5331e5af63fbab6e583da377c811e00791391..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_half_diffusers/dependency_versions_table.py +++ /dev/null @@ -1,26 +0,0 @@ -# THIS FILE HAS BEEN AUTOGENERATED. To update: -# 1. modify the `_deps` dict in setup.py -# 2. run `make deps_table_update`` -deps = { - "Pillow": "Pillow", - "accelerate": "accelerate>=0.11.0", - "black": "black==22.3", - "datasets": "datasets", - "filelock": "filelock", - "flake8": "flake8>=3.8.3", - "hf-doc-builder": "hf-doc-builder>=0.3.0", - "huggingface-hub": "huggingface-hub>=0.8.1", - "importlib_metadata": "importlib_metadata", - "isort": "isort>=5.5.4", - "modelcards": "modelcards==0.1.4", - "numpy": "numpy", - "pytest": "pytest", - "pytest-timeout": "pytest-timeout", - "pytest-xdist": "pytest-xdist", - "scipy": "scipy", - "regex": "regex!=2019.12.17", - "requests": "requests", - "tensorboard": "tensorboard", - "torch": "torch>=1.4", - "transformers": "transformers>=4.21.0", -} diff --git a/spaces/SamiKoen/ChatGPT444/app.py b/spaces/SamiKoen/ChatGPT444/app.py deleted file mode 100644 index 78bb8f4d08df1513086cd7a8cf8adc200943c536..0000000000000000000000000000000000000000 --- a/spaces/SamiKoen/ChatGPT444/app.py +++ /dev/null @@ -1,156 +0,0 @@ -import gradio as gr -import os -import json -import requests -import xml.etree.ElementTree as ET - -# os.getenv("API_URL") + "/generate_stream" -API_URL = "https://api.openai.com/v1/chat/completions" -OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") - -url = 'https://www.alatin.com.tr/index.php?do=catalog/output&pCode=8582384479' - -response = requests.get(url) - - -root = ET.fromstring(response.content) - -products = [] - -for item in root.findall('item'): - if item.find('isOptionOfAProduct').text == '1': - if item.find('stockAmount').text > '0': - name_words = item.find('rootlabel').text.lower().split() - name = name_words[0] - full_name = ' '.join(name_words) - stockAmount = "stokta" - price = item.find('priceWithTax').text - item_info = (stockAmount, price) - # name: ilk kelime (marka), item_info: (stok adedi, fiyat) - products.append((name, item_info, full_name)) - - -def predict(system_msg, inputs, top_p, temperature, chat_counter, chatbot=[], history=[]): - 
headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {OPENAI_API_KEY}" - } - print(f"system message is ^^ {system_msg}") - initial_message = [{"role": "user", "content": f"{inputs}"},] - - multi_turn_message = [{"role": "system", "content": "Bir önceki sohbeti unut. Vereceğin ürün bilgisi, bu bilginin içinde yan yana yazmıyorsa veya arada başka bilgiler yazıyor ise, o bilgiyi vermeyeceksin çünkü o bilgi yanlıştır. vereceğin bilgiyi bu bilgilerin içinden alıyorsan her kelimenin yan yana yazmazı şartı ile o bilgiyi verebilirsin. Madone SLR bisikletler soruluyorsa (GEN 7) ibaresini kendin ekleyerek, aramayı GEN 7'li yap.Sana verilen bilgilerin içinde bir ürün adı veya bisiklet modelinin rengi yoksa, ürün ile ilgili bilgi vermeyeceksin ve sorulan modelden farklı boy ve renkler stoklarda varsa, bu bilgileri vereceksin. Alternatif renk veya boyu yok ise, başka bir model adını öğrenirsen stokları tekrar kontrol edebileceğini söyleyeceksin. Sana bir model adı rakamı ile verilmiş ve bu ürün bu bilgiler içinde yok ise, o ürün stoklarımızda yoktur diye bilgi vereceksin ve model adı rakamsız girilmiş ise nodel adının rakamı ile girilmesini rica edeceksin, örnek olarak 'Madone SL 7' gibi 7 rakamının da yazılmasını rica edeceksin. Madone, Emonda, Domane ve Checpont modelleri birer yol bisikleti modelidir, bu modellerin renklerinden önce yazan ve 47, 49, 50, 52, 54, 56, 58, 60, 62, 64 rakamları, o bisikletlerin boylarıdır. Bu bilgi içindeki renkler ise o ürünlerin renkleridir. Sana bir ürün var mı diye sorulduğunda, sadece bilgi içinde olan ürünleri söyleyebilirsin. Stoklarımızda yok ise o ürün ile ilgili bilgi vermeyeceksin. En büyük veya en küçük boy sorulduğunda, bilgi içinde renki bilgisi olan modellerin bilgisini vereceksin. Sadece Türkçe konuşacaksın. Türkçe haricinde hiç bir dili bilmiyorsun ve gelen hiç bir talebe olumlu veya olumsuz cevap vermeyeceksin. Sadece Türkçe bilenler ile yazışacaksın. Gerçek zamanlı stok bilgilerine erişme yeteneğin var. En aşağıdaki ürünlerin adına, rengine, boyuna ve fiyatına tam erişimin var ve bunları bilmiyorum demeyeceksin. Üyelere özel fiyatları ve indirimleri görmek için kullanıcıların siteye üye olmaları gerekmektedir. Sen bir AI Trek marka bisiklet uzmanı, bilir kişisi ve asistanısın.Trek ve Electra bisikletler konusunda uzmanım. İstanbul'da iki Trek mağazamız var: Caddebostan ve Ortaköy. Ortaköy mağazası 10.00-19.00 saatleri arasında açık ve Toyota Plaza ve Carrefour'un yanindadir, telefon numarası 0212 2271015. Caddebostan mağazası, Prof. Dr. Hulusi Behçet 18 Caddebostan, Kadıköy adresinde, Göztepe Parkı karşısındadır, telefon numarası 0216 6292432, 09.00-19.00 saatleri arasında açık. Caddebostan mağazası haftanın her günü açık, Pazar günleri 12.00-19.00 saatleri arasında hizmet veriyor. Salı günleri Caddebostan mağazamızda Retul firmasının bikefit makinesi ile Bike-Fit hizmeti veriliyor. Bike fit ücreti 1400 TL ve süresi 60-90 dakika."}, - {"role": "system", "content": "Dağ bisikletleri modelleri: Marlin, Roscoe, Procaliber, Supercaliber, Fuel Ex. Şehit bisikletleri: FX ve DS (Dual Sport). Elektrikli Bisiklet modelleri: Powerfly, Powerfly FS, Rail, Fuel Exe, Domane SLR +, Verve +, Townie +, Fx +, DS +. Dağ bisikletlerinin boyları XXS, XS, S, M, ML, L, XL'dir. Canlı sohbet için sitemizdeki YEŞİL düğmeye basabilirsiniz. Web adresimiz: https://www.alatin.com.tr. Bayi bilgileri için https://www.alatin.com.tr/sayfa/bayilerimiz/ adresine bakabilirsiniz."}, - {"role": "system", "content": "Başka marka bisikletleri önermeyeceksin. 
Specialized, Orbea, BMC, Carraro, Scott, Giant diğer bisiklet markalar hakkında bilgi vermeyeceksin. Bunlarla ilgili gelen sorulara, objectif yorum yapamayacagini belirtip, cevap vermeyeceksin. Trek bu kadrolara ömür boyu garanti veriyor. "}, - {"role": "system", "content": "Şu an 2023 yılındayız. Bisiklet aksesuar ve yedek parçalarının marka adı Bontrager dir. Bryton marka yol bilgisayarını da ithal ediyoruz. Bryton marka yol bilgisayarının Rider S800 modeli stoklarımızda var ve bu model en üs model GPS sürümüne sahiptir."}, - {"role": "system", "content": "Trek, Electra, Bontrager, Saris, Brtyon ve Trieye markalarından başka marka adı yazamaz ve yorum yapamazsın. Trek'in uluslararası web sitesini 'www.trekbikes.com'kullanıcıya göstermeyeceksin. Web sitesi linki vermen gerekirse sadece 'www.alatin.com.tr' linkini ile başlayan linkleri göstereceksin."}, - {"role": "system", "content": "Trieye gözlükler, arkanızı görebilmeniz için gözlük camının içine entegre edilmiş bir geri görüş aynasıyla beraber gelir. Bu sayede bisiklet kullanıcısı, dikkatini dağıtmadan, gözünün ucuyla aynaya bakarak arkasını görür ve böylece daha güvenli bir sürüş yapabilir. Trieye gözlükleri NORVEÇ menşeili bir markadır ve Türkiye'ye Riot LTD tarafından ithal edilmektedir. Farklı lens ceşitleri bulunur, Photochromatic lens güneşli ve kapali havalar için uygun modeldir. Renkli camlar güneşli günler için daha uygundur."}, - {"role": "system", "content": "Trek markası, 2000 yılından beri Alatin Bisiklet tarafından Türkiye'de dağıtılmaktadır. Alatin Bisiklet en iyi satış sonrası hizmeti veren firmadır. Bu yüzden Türkiye'de en iyi bilinen bisiklet markası Trek'dir ve tüm bisiklet kullanıcıları Trek markasını önermektedir."}, - {"role": "system", "content": "Trek Türkiye, ASLA DURMA markasının yaratıcısı Fatih Topçu ve TREK RMK DYNAMIS bisiklet takımına sponsordur."}, - {"role": "system", "content": "cevap yazarken vereceğin bilginin tek bir bilgi içinde yan yana yazdığını iki kere kontrol edeceksin. Cevap yazarken, en uzun 5 cümlelik cevaplar oluşturacaksın. 
Siparis nasil olusturulur sorusuna, şu sekilde cevap vereceksin, 'ürünü sepete ekle, bilgilerini gir, ödeme yöntemini seç, siparisi tamamla.'"}, - ] - messages = multi_turn_message - input_words = [] - for input in inputs.split(): - input_words.append(str(input).lower()) - - for product_info in products: - - if product_info[0] in input_words: - new_msg = f"{product_info[2]} {product_info[1][0]} ve fiyatı EURO {product_info[1][1]}" - print(new_msg) - product_msg = {"role": "system", "content": new_msg} - messages.append(product_msg) - - for data in chatbot: - user = {} - user["role"] = "user" - user["content"] = data[0] - assistant = {} - assistant["role"] = "assistant" - assistant["content"] = data[1] - messages.append(user) - messages.append(assistant) - temp = {} - temp["role"] = "user" - temp["content"] = inputs - messages.append(temp) - - - payload = {"model": "gpt-4", "messages": messages, "temperature": 0.5, - "top_p": 0, "n": 1, "stream": True, "presence_penalty": 0, "frequency_penalty": 0,} - - chat_counter += 1 - - history.append(inputs) - print(f"Logging : payload is - {payload}") - - response = requests.post(API_URL, headers=headers, - json=payload, stream=True) - print(f"Logging : response code - {response}") - token_counter = 0 - partial_words = "" - - counter = 0 - for chunk in response.iter_lines(): - - if counter == 0: - counter += 1 - continue - - if chunk.decode(): - chunk = chunk.decode() - - if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']: - partial_words = partial_words + \ - json.loads(chunk[6:])['choices'][0]["delta"]["content"] - if token_counter == 0: - history.append(" " + partial_words) - else: - history[-1] = partial_words - chat = [(history[i], history[i + 1]) for i in range(0, - len(history) - 1, 2)] # convert to tuples of list - token_counter += 1 - # resembles {chatbot: chat, state: history} - yield chat, history, chat_counter, response - - -def reset_textbox(): - return gr.update(value='') - - -def set_visible_false(): - return gr.update(visible=False) - - -def set_visible_true(): - return gr.update(visible=False) - - -theme_addon_msg = "" -system_msg_info = "" -theme = gr.themes.Soft(primary_hue="zinc", secondary_hue="green", neutral_hue="blue", - text_size=gr.themes.sizes.text_sm) - -with gr.Blocks(css="""#col_container { margin-left: auto; margin-right: auto;} #chatbot {height: 450px; overflow: auto;}""", - theme=theme) as demo: - with gr.Column(elem_id="col_container"): - with gr.Accordion("", open=False, visible=False): - system_msg = gr.Textbox(value="") - new_msg = gr.Textbox(value="") - accordion_msg = gr.HTML(value="", visible=False) - chatbot = gr.Chatbot(label='Trek Sanal Asistanı - Yapay Zeka Desteği İle Sorularınıza Cevap Alın', elem_id="chatbot") - inputs = gr.Textbox( - placeholder="Buraya yazın, yanıtlayalım.", show_label=False) - state = gr.State([]) - with gr.Accordion("", open=False, visible=False): - top_p = gr.Slider(minimum=-0, maximum=1.0, value=0.5, - step=0.05, interactive=False, visible=False) - temperature = gr.Slider( - minimum=-0, maximum=5.0, value=0.1, step=0.1, interactive=False, visible=False) - chat_counter = gr.Number(value=0, visible=False, precision=0) - - inputs.submit(predict, [system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [ - chatbot, state, chat_counter],) # openai_api_key - inputs.submit(reset_textbox, [], [inputs]) - -demo.queue(max_size=10, concurrency_count=10).launch(debug=True) \ No newline at end of file diff --git a/spaces/Samuelcr8/EVA/Dockerfile 
b/spaces/Samuelcr8/EVA/Dockerfile deleted file mode 100644 index f082970621660a3a398d4266140ceb3a4baa4895..0000000000000000000000000000000000000000 --- a/spaces/Samuelcr8/EVA/Dockerfile +++ /dev/null @@ -1,6 +0,0 @@ -FROM argilla/argilla-quickstart:latest - -# Define datasets to preload: full=all datasets, single=one dataset, and none=no datasets. -ENV LOAD_DATASETS=single - -CMD whoami && /start_quickstart_argilla.sh \ No newline at end of file diff --git a/spaces/Sapiensia/diffuse-the-rest/src/app.html b/spaces/Sapiensia/diffuse-the-rest/src/app.html deleted file mode 100644 index 5b53ef7e3ae7406d0fc85a12f29f4eee1f9816bd..0000000000000000000000000000000000000000 --- a/spaces/Sapiensia/diffuse-the-rest/src/app.html +++ /dev/null @@ -1,12 +0,0 @@ - - - - - - - %sveltekit.head% - - -
      %sveltekit.body%
      - - diff --git a/spaces/ShotaA/TalkTuner/static/css/style.css b/spaces/ShotaA/TalkTuner/static/css/style.css deleted file mode 100644 index d662046f36ddd304d0339bc58597247a10e72a5e..0000000000000000000000000000000000000000 --- a/spaces/ShotaA/TalkTuner/static/css/style.css +++ /dev/null @@ -1,55 +0,0 @@ -@import url('https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.1/css/all.min.css'); - -.voice-recorder { - display: flex; - flex-direction: column; - align-items: center; - gap: 1rem; -} - -.record-button { - display: flex; - align-items: center; - justify-content: center; - width: 60px; - height: 60px; - border-radius: 50%; - background-color: #f44336; - cursor: pointer; - box-shadow: 0 2px 5px rgba(0, 0, 0, 0.3); - transition: all 0.3s; -} - -.record-button.recording { - background-color: #d32f2f; -} - -.record-button:hover { - background-color: #e53935; -} - -.record-button:active { - background-color: #d32f2f; -} - -.record-button i { - font-size: 24px; - color: white; - cursor: pointer; -} - -.recording-text { - font-size: 14px; - color: #f44336; - font-weight: bold; -} - -.record-button.disabled { - background-color: #ccc; - cursor: default; - pointer-events: none; -} - -.record-button.disabled i { - color: #999; -} \ No newline at end of file diff --git a/spaces/Shruhrid/Next_Word_Prediction/README.md b/spaces/Shruhrid/Next_Word_Prediction/README.md deleted file mode 100644 index bfd64e2fb698d44b7af0da02fc718029bfbf1e5b..0000000000000000000000000000000000000000 --- a/spaces/Shruhrid/Next_Word_Prediction/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Next_Word_Prediction -emoji: 🐢 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Shubham89/Meshwork-chatbot/app.py b/spaces/Shubham89/Meshwork-chatbot/app.py deleted file mode 100644 index 94bc0a9ef428c9149aacbdc1e4e7a4f563478452..0000000000000000000000000000000000000000 --- a/spaces/Shubham89/Meshwork-chatbot/app.py +++ /dev/null @@ -1,23 +0,0 @@ -import gradio -import os -from llama_index import SimpleDirectoryReader, GPTListIndex, readers, GPTSimpleVectorIndex, LLMPredictor, PromptHelper, ServiceContext - -#openai.api_key = "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" -#os.environ["OPENAI_API_KEY"] = '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' -#os.environ["OPENAI_API_KEY"] = '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' -os.environ["OPENAI_API_KEY"] = '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' - -def gradio_ask_ai(user_input): - index = GPTSimpleVectorIndex.load_from_disk('index.json') - query = user_input - # f=open("https://huggingface.co/spaces/Shubham89/Meshwork-chatbot/blob/main/yo.txt","a") - # f.write("file created") - # f.close() - response = index.query(query) - return response.response -#a = gradio.File() - -demo = gradio.Interface(fn=gradio_ask_ai, inputs = "text", outputs = "text", title = "Meshworks bot") -#demo = gradio.Interface(fn=gradio_ask_ai, inputs = "text", outputs = [a], title = "Meshworks bot") - -demo.launch(inline=False) \ No newline at end of file diff --git a/spaces/Slava917/pronunciation-trainer/app.py b/spaces/Slava917/pronunciation-trainer/app.py deleted file mode 100644 index 05aec2529dfaa2971f817405ca9d9faf88d83cd5..0000000000000000000000000000000000000000 --- a/spaces/Slava917/pronunciation-trainer/app.py +++ /dev/null @@ -1,59 +0,0 @@ -import pandas as pd -import gradio as gr 
-print(gr.__version__) -import torch -import torchaudio - - -df= pd.read_csv('native_words_subset.csv') - -torch._C._jit_override_can_fuse_on_cpu(False) -torch._C._jit_override_can_fuse_on_gpu(False) -torch._C._jit_set_texpr_fuser_enabled(False) -torch._C._jit_set_nvfuser_enabled(False) - -loader = torch.jit.load("audio_loader.pt") -model = torch.jit.load('QuartzNet_thunderspeech_3.pt').eval() - -vocab = model.text_transform.vocab.itos -vocab[-1] = '' - -def convert_probs(probs): - ids = probs.argmax(1)[0] - s = [] - if vocab[ids[0]]: s.append(vocab[ids[0]]) - for i in range(1,len(ids)): - if ids[i-1] != ids[i]: - new = vocab[ids[i]] - if new: s.append(new) - #return '.'.join(s) - return s - - -def predict(path): - audio = loader(path) - probs = model(audio, torch.tensor(audio.shape[0] * [audio.shape[-1]], device=audio.device))[0] - return convert_probs(probs) - - -from difflib import SequenceMatcher - -def similar(a, b): - return SequenceMatcher(None, a, b).ratio() - -def compare(chosen_word, path): - etalons = [list(val.split('.')) for val in df.loc[df['replica'] == chosen_word, 'transcription'].values] - user = predict(path) - coeff = 0.0 - idx=0 - for i in range(len(etalons)): - new_coeff = similar(user, etalons[i]) - if new_coeff > coeff: - coeff = new_coeff - idx=i - return f'The similarity coefficient of your pronunciation and the pronunciation of a native speaker is {coeff}. The closer the coefficient is to 1, the better.' + '\nYour pronunciation: [' + ''.join(user) + ']\nClosest native pronunciation: [' + ''.join(etalons[idx]) + ']' - - -word_choice = gr.inputs.Dropdown(sorted(list(df['replica'].unique())), label="Choose a word") - -gr.Interface(fn=compare, inputs=[word_choice, gr.inputs.Audio(source='microphone', type='filepath', optional=True)], outputs= 'text').launch(debug=True) \ No newline at end of file diff --git a/spaces/Smols/GPT4/Dockerfile b/spaces/Smols/GPT4/Dockerfile deleted file mode 100644 index eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000 --- a/spaces/Smols/GPT4/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/SpacesExamples/InvokeAI/README.md b/spaces/SpacesExamples/InvokeAI/README.md deleted file mode 100644 index 4e822583569b2c8df509a89e0ccd03b0bc0f2e76..0000000000000000000000000000000000000000 --- a/spaces/SpacesExamples/InvokeAI/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: InvokeAI -emoji: ⚡ -colorFrom: red -colorTo: yellow -sdk: docker -app_port: 9090 -pinned: false -suggested_hardware: t4-small ---- \ No newline at end of file diff --git a/spaces/SpacesExamples/vscode/README.md b/spaces/SpacesExamples/vscode/README.md deleted file mode 100644 index 441890d69594192b9821be037bfdb02afe7bbb81..0000000000000000000000000000000000000000 --- a/spaces/SpacesExamples/vscode/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Visual Studio Code -emoji: 💻🐳 -colorFrom: red -colorTo: blue -sdk: docker -pinned: false -tags: - - vscode ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/interactiveshell.py 
b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/interactiveshell.py deleted file mode 100644 index 7392de7c02279f7c90eb41da47cc6554d60870e1..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/interactiveshell.py +++ /dev/null @@ -1,3910 +0,0 @@ -# -*- coding: utf-8 -*- -"""Main IPython class.""" - -#----------------------------------------------------------------------------- -# Copyright (C) 2001 Janko Hauser -# Copyright (C) 2001-2007 Fernando Perez. -# Copyright (C) 2008-2011 The IPython Development Team -# -# Distributed under the terms of the BSD License. The full license is in -# the file COPYING, distributed as part of this software. -#----------------------------------------------------------------------------- - - -import abc -import ast -import atexit -import bdb -import builtins as builtin_mod -import functools -import inspect -import os -import re -import runpy -import subprocess -import sys -import tempfile -import traceback -import types -import warnings -from ast import stmt -from io import open as io_open -from logging import error -from pathlib import Path -from typing import Callable -from typing import List as ListType, Dict as DictType, Any as AnyType -from typing import Optional, Sequence, Tuple -from warnings import warn - -from pickleshare import PickleShareDB -from tempfile import TemporaryDirectory -from traitlets import ( - Any, - Bool, - CaselessStrEnum, - Dict, - Enum, - Instance, - Integer, - List, - Type, - Unicode, - default, - observe, - validate, -) -from traitlets.config.configurable import SingletonConfigurable -from traitlets.utils.importstring import import_item - -import IPython.core.hooks -from IPython.core import magic, oinspect, page, prefilter, ultratb -from IPython.core.alias import Alias, AliasManager -from IPython.core.autocall import ExitAutocall -from IPython.core.builtin_trap import BuiltinTrap -from IPython.core.compilerop import CachingCompiler -from IPython.core.debugger import InterruptiblePdb -from IPython.core.display_trap import DisplayTrap -from IPython.core.displayhook import DisplayHook -from IPython.core.displaypub import DisplayPublisher -from IPython.core.error import InputRejected, UsageError -from IPython.core.events import EventManager, available_events -from IPython.core.extensions import ExtensionManager -from IPython.core.formatters import DisplayFormatter -from IPython.core.history import HistoryManager -from IPython.core.inputtransformer2 import ESC_MAGIC, ESC_MAGIC2 -from IPython.core.logger import Logger -from IPython.core.macro import Macro -from IPython.core.payload import PayloadManager -from IPython.core.prefilter import PrefilterManager -from IPython.core.profiledir import ProfileDir -from IPython.core.usage import default_banner -from IPython.display import display -from IPython.paths import get_ipython_dir -from IPython.testing.skipdoctest import skip_doctest -from IPython.utils import PyColorize, io, openpy, py3compat -from IPython.utils.decorators import undoc -from IPython.utils.io import ask_yes_no -from IPython.utils.ipstruct import Struct -from IPython.utils.path import ensure_dir_exists, get_home_dir, get_py_filename -from IPython.utils.process import getoutput, system -from IPython.utils.strdispatch import StrDispatch -from IPython.utils.syspathcontext import prepended_to_syspath -from IPython.utils.text import DollarFormatter, LSString, SList, format_screen -from IPython.core.oinspect import OInfo - - -sphinxify: 
Optional[Callable] - -try: - import docrepr.sphinxify as sphx - - def sphinxify(oinfo): - wrapped_docstring = sphx.wrap_main_docstring(oinfo) - - def sphinxify_docstring(docstring): - with TemporaryDirectory() as dirname: - return { - "text/html": sphx.sphinxify(wrapped_docstring, dirname), - "text/plain": docstring, - } - - return sphinxify_docstring -except ImportError: - sphinxify = None - - -class ProvisionalWarning(DeprecationWarning): - """ - Warning class for unstable features - """ - pass - -from ast import Module - -_assign_nodes = (ast.AugAssign, ast.AnnAssign, ast.Assign) -_single_targets_nodes = (ast.AugAssign, ast.AnnAssign) - -#----------------------------------------------------------------------------- -# Await Helpers -#----------------------------------------------------------------------------- - -# we still need to run things using the asyncio eventloop, but there is no -# async integration -from .async_helpers import ( - _asyncio_runner, - _curio_runner, - _pseudo_sync_runner, - _should_be_async, - _trio_runner, -) - -#----------------------------------------------------------------------------- -# Globals -#----------------------------------------------------------------------------- - -# compiled regexps for autoindent management -dedent_re = re.compile(r'^\s+raise|^\s+return|^\s+pass') - -#----------------------------------------------------------------------------- -# Utilities -#----------------------------------------------------------------------------- - - -def is_integer_string(s: str): - """ - Variant of "str.isnumeric()" that allow negative values and other ints. - """ - try: - int(s) - return True - except ValueError: - return False - raise ValueError("Unexpected error") - - -@undoc -def softspace(file, newvalue): - """Copied from code.py, to remove the dependency""" - - oldvalue = 0 - try: - oldvalue = file.softspace - except AttributeError: - pass - try: - file.softspace = newvalue - except (AttributeError, TypeError): - # "attribute-less object" or "read-only attributes" - pass - return oldvalue - -@undoc -def no_op(*a, **kw): - pass - - -class SpaceInInput(Exception): pass - - -class SeparateUnicode(Unicode): - r"""A Unicode subclass to validate separate_in, separate_out, etc. - - This is a Unicode based trait that converts '0'->'' and ``'\\n'->'\n'``. - """ - - def validate(self, obj, value): - if value == '0': value = '' - value = value.replace('\\n','\n') - return super(SeparateUnicode, self).validate(obj, value) - - -@undoc -class DummyMod(object): - """A dummy module used for IPython's interactive module when - a namespace must be assigned to the module's __dict__.""" - __spec__ = None - - -class ExecutionInfo(object): - """The arguments used for a call to :meth:`InteractiveShell.run_cell` - - Stores information about what is going to happen. 
- """ - raw_cell = None - store_history = False - silent = False - shell_futures = True - cell_id = None - - def __init__(self, raw_cell, store_history, silent, shell_futures, cell_id): - self.raw_cell = raw_cell - self.store_history = store_history - self.silent = silent - self.shell_futures = shell_futures - self.cell_id = cell_id - - def __repr__(self): - name = self.__class__.__qualname__ - raw_cell = ( - (self.raw_cell[:50] + "..") if len(self.raw_cell) > 50 else self.raw_cell - ) - return ( - '<%s object at %x, raw_cell="%s" store_history=%s silent=%s shell_futures=%s cell_id=%s>' - % ( - name, - id(self), - raw_cell, - self.store_history, - self.silent, - self.shell_futures, - self.cell_id, - ) - ) - - -class ExecutionResult(object): - """The result of a call to :meth:`InteractiveShell.run_cell` - - Stores information about what took place. - """ - execution_count = None - error_before_exec = None - error_in_exec: Optional[BaseException] = None - info = None - result = None - - def __init__(self, info): - self.info = info - - @property - def success(self): - return (self.error_before_exec is None) and (self.error_in_exec is None) - - def raise_error(self): - """Reraises error if `success` is `False`, otherwise does nothing""" - if self.error_before_exec is not None: - raise self.error_before_exec - if self.error_in_exec is not None: - raise self.error_in_exec - - def __repr__(self): - name = self.__class__.__qualname__ - return '<%s object at %x, execution_count=%s error_before_exec=%s error_in_exec=%s info=%s result=%s>' %\ - (name, id(self), self.execution_count, self.error_before_exec, self.error_in_exec, repr(self.info), repr(self.result)) - -@functools.wraps(io_open) -def _modified_open(file, *args, **kwargs): - if file in {0, 1, 2}: - raise ValueError( - f"IPython won't let you open fd={file} by default " - "as it is likely to crash IPython. If you know what you are doing, " - "you can use builtins' open." - ) - - return io_open(file, *args, **kwargs) - -class InteractiveShell(SingletonConfigurable): - """An enhanced, interactive shell for Python.""" - - _instance = None - - ast_transformers = List([], help= - """ - A list of ast.NodeTransformer subclass instances, which will be applied - to user input before code is run. - """ - ).tag(config=True) - - autocall = Enum((0,1,2), default_value=0, help= - """ - Make IPython automatically call any callable object even if you didn't - type explicit parentheses. For example, 'str 43' becomes 'str(43)' - automatically. The value can be '0' to disable the feature, '1' for - 'smart' autocall, where it is not applied if there are no more - arguments on the line, and '2' for 'full' autocall, where all callable - objects are automatically called (even if no arguments are present). - """ - ).tag(config=True) - - autoindent = Bool(True, help= - """ - Autoindent IPython code entered interactively. - """ - ).tag(config=True) - - autoawait = Bool(True, help= - """ - Automatically run await statement in the top level repl. 
- """ - ).tag(config=True) - - loop_runner_map ={ - 'asyncio':(_asyncio_runner, True), - 'curio':(_curio_runner, True), - 'trio':(_trio_runner, True), - 'sync': (_pseudo_sync_runner, False) - } - - loop_runner = Any(default_value="IPython.core.interactiveshell._asyncio_runner", - allow_none=True, - help="""Select the loop runner that will be used to execute top-level asynchronous code""" - ).tag(config=True) - - @default('loop_runner') - def _default_loop_runner(self): - return import_item("IPython.core.interactiveshell._asyncio_runner") - - @validate('loop_runner') - def _import_runner(self, proposal): - if isinstance(proposal.value, str): - if proposal.value in self.loop_runner_map: - runner, autoawait = self.loop_runner_map[proposal.value] - self.autoawait = autoawait - return runner - runner = import_item(proposal.value) - if not callable(runner): - raise ValueError('loop_runner must be callable') - return runner - if not callable(proposal.value): - raise ValueError('loop_runner must be callable') - return proposal.value - - automagic = Bool(True, help= - """ - Enable magic commands to be called without the leading %. - """ - ).tag(config=True) - - banner1 = Unicode(default_banner, - help="""The part of the banner to be printed before the profile""" - ).tag(config=True) - banner2 = Unicode('', - help="""The part of the banner to be printed after the profile""" - ).tag(config=True) - - cache_size = Integer(1000, help= - """ - Set the size of the output cache. The default is 1000, you can - change it permanently in your config file. Setting it to 0 completely - disables the caching system, and the minimum value accepted is 3 (if - you provide a value less than 3, it is reset to 0 and a warning is - issued). This limit is defined because otherwise you'll spend more - time re-flushing a too small cache than working - """ - ).tag(config=True) - color_info = Bool(True, help= - """ - Use colors for displaying information about objects. Because this - information is passed through a pager (like 'less'), and some pagers - get confused with color codes, this capability can be turned off. - """ - ).tag(config=True) - colors = CaselessStrEnum(('Neutral', 'NoColor','LightBG','Linux'), - default_value='Neutral', - help="Set the color scheme (NoColor, Neutral, Linux, or LightBG)." - ).tag(config=True) - debug = Bool(False).tag(config=True) - disable_failing_post_execute = Bool(False, - help="Don't call post-execute functions that have failed in the past." - ).tag(config=True) - display_formatter = Instance(DisplayFormatter, allow_none=True) - displayhook_class = Type(DisplayHook) - display_pub_class = Type(DisplayPublisher) - compiler_class = Type(CachingCompiler) - inspector_class = Type( - oinspect.Inspector, help="Class to use to instantiate the shell inspector" - ).tag(config=True) - - sphinxify_docstring = Bool(False, help= - """ - Enables rich html representation of docstrings. (This requires the - docrepr module). - """).tag(config=True) - - @observe("sphinxify_docstring") - def _sphinxify_docstring_changed(self, change): - if change['new']: - warn("`sphinxify_docstring` is provisional since IPython 5.0 and might change in future versions." , ProvisionalWarning) - - enable_html_pager = Bool(False, help= - """ - (Provisional API) enables html representation in mime bundles sent - to pagers. 
- """).tag(config=True) - - @observe("enable_html_pager") - def _enable_html_pager_changed(self, change): - if change['new']: - warn("`enable_html_pager` is provisional since IPython 5.0 and might change in future versions.", ProvisionalWarning) - - data_pub_class = None - - exit_now = Bool(False) - exiter = Instance(ExitAutocall) - @default('exiter') - def _exiter_default(self): - return ExitAutocall(self) - # Monotonically increasing execution counter - execution_count = Integer(1) - filename = Unicode("") - ipython_dir= Unicode('').tag(config=True) # Set to get_ipython_dir() in __init__ - - # Used to transform cells before running them, and check whether code is complete - input_transformer_manager = Instance('IPython.core.inputtransformer2.TransformerManager', - ()) - - @property - def input_transformers_cleanup(self): - return self.input_transformer_manager.cleanup_transforms - - input_transformers_post = List([], - help="A list of string input transformers, to be applied after IPython's " - "own input transformations." - ) - - @property - def input_splitter(self): - """Make this available for backward compatibility (pre-7.0 release) with existing code. - - For example, ipykernel ipykernel currently uses - `shell.input_splitter.check_complete` - """ - from warnings import warn - warn("`input_splitter` is deprecated since IPython 7.0, prefer `input_transformer_manager`.", - DeprecationWarning, stacklevel=2 - ) - return self.input_transformer_manager - - logstart = Bool(False, help= - """ - Start logging to the default log file in overwrite mode. - Use `logappend` to specify a log file to **append** logs to. - """ - ).tag(config=True) - logfile = Unicode('', help= - """ - The name of the logfile to use. - """ - ).tag(config=True) - logappend = Unicode('', help= - """ - Start logging to the given file in append mode. - Use `logfile` to specify a log file to **overwrite** logs to. - """ - ).tag(config=True) - object_info_string_level = Enum((0,1,2), default_value=0, - ).tag(config=True) - pdb = Bool(False, help= - """ - Automatically call the pdb debugger after every exception. - """ - ).tag(config=True) - display_page = Bool(False, - help="""If True, anything that would be passed to the pager - will be displayed as regular output instead.""" - ).tag(config=True) - - - show_rewritten_input = Bool(True, - help="Show rewritten input, e.g. for autocall." - ).tag(config=True) - - quiet = Bool(False).tag(config=True) - - history_length = Integer(10000, - help='Total length of command history' - ).tag(config=True) - - history_load_length = Integer(1000, help= - """ - The number of saved history entries to be loaded - into the history buffer at startup. - """ - ).tag(config=True) - - ast_node_interactivity = Enum(['all', 'last', 'last_expr', 'none', 'last_expr_or_assign'], - default_value='last_expr', - help=""" - 'all', 'last', 'last_expr' or 'none', 'last_expr_or_assign' specifying - which nodes should be run interactively (displaying output from expressions). - """ - ).tag(config=True) - - warn_venv = Bool( - True, - help="Warn if running in a virtual environment with no IPython installed (so IPython from the global environment is used).", - ).tag(config=True) - - # TODO: this part of prompt management should be moved to the frontends. 
-    # Use custom TraitTypes that convert '0'->'' and '\\n'->'\n'
-    separate_in = SeparateUnicode('\n').tag(config=True)
-    separate_out = SeparateUnicode('').tag(config=True)
-    separate_out2 = SeparateUnicode('').tag(config=True)
-    wildcards_case_sensitive = Bool(True).tag(config=True)
-    xmode = CaselessStrEnum(('Context', 'Plain', 'Verbose', 'Minimal'),
-                            default_value='Context',
-                            help="Switch modes for the IPython exception handlers."
-                            ).tag(config=True)
-
-    # Subcomponents of InteractiveShell
-    alias_manager = Instance('IPython.core.alias.AliasManager', allow_none=True)
-    prefilter_manager = Instance('IPython.core.prefilter.PrefilterManager', allow_none=True)
-    builtin_trap = Instance('IPython.core.builtin_trap.BuiltinTrap', allow_none=True)
-    display_trap = Instance('IPython.core.display_trap.DisplayTrap', allow_none=True)
-    extension_manager = Instance('IPython.core.extensions.ExtensionManager', allow_none=True)
-    payload_manager = Instance('IPython.core.payload.PayloadManager', allow_none=True)
-    history_manager = Instance('IPython.core.history.HistoryAccessorBase', allow_none=True)
-    magics_manager = Instance('IPython.core.magic.MagicsManager', allow_none=True)
-
-    profile_dir = Instance('IPython.core.application.ProfileDir', allow_none=True)
-    @property
-    def profile(self):
-        if self.profile_dir is not None:
-            name = os.path.basename(self.profile_dir.location)
-            return name.replace('profile_', '')
-
-
-    # Private interface
-    _post_execute = Dict()
-
-    # Tracks any GUI loop loaded for pylab
-    pylab_gui_select = None
-
-    last_execution_succeeded = Bool(True, help='Whether the last executed command succeeded')
-
-    last_execution_result = Instance('IPython.core.interactiveshell.ExecutionResult',
-                                     help='Result of executing the last command', allow_none=True)
-
-    def __init__(self, ipython_dir=None, profile_dir=None,
-                 user_module=None, user_ns=None,
-                 custom_exceptions=((), None), **kwargs):
-        # This is where traits with a config_key argument are updated
-        # from the values on config.
-        super(InteractiveShell, self).__init__(**kwargs)
-        if 'PromptManager' in self.config:
-            warn('As of IPython 5.0 `PromptManager` config will have no effect'
-                 ' and has been replaced by TerminalInteractiveShell.prompts_class')
-        self.configurables = [self]
-
-        # These are relatively independent and stateless
-        self.init_ipython_dir(ipython_dir)
-        self.init_profile_dir(profile_dir)
-        self.init_instance_attrs()
-        self.init_environment()
-
-        # Check if we're in a virtualenv, and set up sys.path.
-        self.init_virtualenv()
-
-        # Create namespaces (user_ns, user_global_ns, etc.)
-        self.init_create_namespaces(user_module, user_ns)
-        # This has to be done after init_create_namespaces because it uses
-        # something in self.user_ns, but before init_sys_modules, which
-        # is the first thing to modify sys.
-        # TODO: When we override sys.stdout and sys.stderr before this class
-        # is created, we are saving the overridden ones here. Not sure if this
-        # is what we want to do.
-        self.save_sys_module_state()
-        self.init_sys_modules()
-
-        # While we're trying to have each part of the code directly access what
-        # it needs without keeping redundant references to objects, we have too
-        # much legacy code that expects ip.db to exist.
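# Illustrative sketch, not part of this module: every trait above tagged
# with ``config=True`` can be set from an IPython config file. A fragment
# as it might appear in ``ipython_config.py`` (get_config() is injected by
# IPython's config loader; the values are examples, not defaults):
c = get_config()  # noqa
c.InteractiveShell.xmode = 'Verbose'     # exception handler mode
c.InteractiveShell.separate_in = '0'     # SeparateUnicode maps '0' -> ''
c.InteractiveShell.history_length = 5000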
- self.db = PickleShareDB(os.path.join(self.profile_dir.location, 'db')) - - self.init_history() - self.init_encoding() - self.init_prefilter() - - self.init_syntax_highlighting() - self.init_hooks() - self.init_events() - self.init_pushd_popd_magic() - self.init_user_ns() - self.init_logger() - self.init_builtins() - - # The following was in post_config_initialization - self.init_inspector() - self.raw_input_original = input - self.init_completer() - # TODO: init_io() needs to happen before init_traceback handlers - # because the traceback handlers hardcode the stdout/stderr streams. - # This logic in in debugger.Pdb and should eventually be changed. - self.init_io() - self.init_traceback_handlers(custom_exceptions) - self.init_prompts() - self.init_display_formatter() - self.init_display_pub() - self.init_data_pub() - self.init_displayhook() - self.init_magics() - self.init_alias() - self.init_logstart() - self.init_pdb() - self.init_extension_manager() - self.init_payload() - self.events.trigger('shell_initialized', self) - atexit.register(self.atexit_operations) - - # The trio runner is used for running Trio in the foreground thread. It - # is different from `_trio_runner(async_fn)` in `async_helpers.py` - # which calls `trio.run()` for every cell. This runner runs all cells - # inside a single Trio event loop. If used, it is set from - # `ipykernel.kernelapp`. - self.trio_runner = None - - def get_ipython(self): - """Return the currently running IPython instance.""" - return self - - #------------------------------------------------------------------------- - # Trait changed handlers - #------------------------------------------------------------------------- - @observe('ipython_dir') - def _ipython_dir_changed(self, change): - ensure_dir_exists(change['new']) - - def set_autoindent(self,value=None): - """Set the autoindent flag. - - If called with no arguments, it acts as a toggle.""" - if value is None: - self.autoindent = not self.autoindent - else: - self.autoindent = value - - def set_trio_runner(self, tr): - self.trio_runner = tr - - #------------------------------------------------------------------------- - # init_* methods called by __init__ - #------------------------------------------------------------------------- - - def init_ipython_dir(self, ipython_dir): - if ipython_dir is not None: - self.ipython_dir = ipython_dir - return - - self.ipython_dir = get_ipython_dir() - - def init_profile_dir(self, profile_dir): - if profile_dir is not None: - self.profile_dir = profile_dir - return - self.profile_dir = ProfileDir.create_profile_dir_by_name( - self.ipython_dir, "default" - ) - - def init_instance_attrs(self): - self.more = False - - # command compiler - self.compile = self.compiler_class() - - # Make an empty namespace, which extension writers can rely on both - # existing and NEVER being used by ipython itself. This gives them a - # convenient location for storing additional information and state - # their extensions may require, without fear of collisions with other - # ipython names that may develop later. - self.meta = Struct() - - # Temporary files used for various purposes. Deleted at exit. - # The files here are stored with Path from Pathlib - self.tempfiles = [] - self.tempdirs = [] - - # keep track of where we started running (mainly for crash post-mortem) - # This is not being used anywhere currently. 
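# Illustrative sketch, not part of this module: the shell returned by
# ``get_ipython()`` is the singleton defined above, so runtime tweaks such
# as ``set_autoindent`` are plain method calls. Assumes a running IPython
# session.
from IPython.core.getipython import get_ipython

ip = get_ipython()
if ip is not None:
    ip.set_autoindent()       # no argument: toggles the autoindent flag
    ip.set_autoindent(False)  # explicit value: sets it directly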
-        self.starting_dir = os.getcwd()
-
-        # Indentation management
-        self.indent_current_nsp = 0
-
-        # Dict to track post-execution functions that have been registered
-        self._post_execute = {}
-
-    def init_environment(self):
-        """Any changes we need to make to the user's environment."""
-        pass
-
-    def init_encoding(self):
-        # Get system encoding at startup time. Certain terminals (like Emacs
-        # under Win32) have it set to None, and we need to have a known valid
-        # encoding to use in the raw_input() method
-        try:
-            self.stdin_encoding = sys.stdin.encoding or 'ascii'
-        except AttributeError:
-            self.stdin_encoding = 'ascii'
-
-
-    @observe('colors')
-    def init_syntax_highlighting(self, changes=None):
-        # Python source parser/formatter for syntax highlighting
-        pyformat = PyColorize.Parser(style=self.colors, parent=self).format
-        self.pycolorize = lambda src: pyformat(src, 'str')
-
-    def refresh_style(self):
-        # No-op here, used in subclass
-        pass
-
-    def init_pushd_popd_magic(self):
-        # for pushd/popd management
-        self.home_dir = get_home_dir()
-
-        self.dir_stack = []
-
-    def init_logger(self):
-        self.logger = Logger(self.home_dir, logfname='ipython_log.py',
-                             logmode='rotate')
-
-    def init_logstart(self):
-        """Initialize logging in case it was requested at the command line.
-        """
-        if self.logappend:
-            self.magic('logstart %s append' % self.logappend)
-        elif self.logfile:
-            self.magic('logstart %s' % self.logfile)
-        elif self.logstart:
-            self.magic('logstart')
-
-
-    def init_builtins(self):
-        # A single, static flag that we set to True. Its presence indicates
-        # that an IPython shell has been created, and we make no attempts at
-        # removing on exit or representing the existence of more than one
-        # IPython at a time.
-        builtin_mod.__dict__['__IPYTHON__'] = True
-        builtin_mod.__dict__['display'] = display
-
-        self.builtin_trap = BuiltinTrap(shell=self)
-
-    @observe('colors')
-    def init_inspector(self, changes=None):
-        # Object inspector
-        self.inspector = self.inspector_class(
-            oinspect.InspectColors,
-            PyColorize.ANSICodeColors,
-            self.colors,
-            self.object_info_string_level,
-        )
-
-    def init_io(self):
-        # implemented in subclasses, TerminalInteractiveShell does call
-        # colorama.init().
-        pass
-
-    def init_prompts(self):
-        # Set system prompts, so that scripts can decide if they are running
-        # interactively.
-        sys.ps1 = 'In : '
-        sys.ps2 = '...: '
-        sys.ps3 = 'Out: '
-
-    def init_display_formatter(self):
-        self.display_formatter = DisplayFormatter(parent=self)
-        self.configurables.append(self.display_formatter)
-
-    def init_display_pub(self):
-        self.display_pub = self.display_pub_class(parent=self, shell=self)
-        self.configurables.append(self.display_pub)
-
-    def init_data_pub(self):
-        if not self.data_pub_class:
-            self.data_pub = None
-            return
-        self.data_pub = self.data_pub_class(parent=self)
-        self.configurables.append(self.data_pub)
-
-    def init_displayhook(self):
-        # Initialize displayhook, set in/out prompts and printing system
-        self.displayhook = self.displayhook_class(
-            parent=self,
-            shell=self,
-            cache_size=self.cache_size,
-        )
-        self.configurables.append(self.displayhook)
-        # This is a context manager that installs/removes the displayhook at
-        # the appropriate time.
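# Illustrative sketch, not part of this module: init_syntax_highlighting
# and init_inspector above double as trait observers via @observe('colors'),
# so assigning to ``colors`` rebuilds them. The same traitlets pattern,
# self-contained (class and names invented for illustration):
from traitlets import HasTraits, Unicode, observe

class Painter(HasTraits):
    colors = Unicode('Neutral')

    @observe('colors')
    def _rebuild(self, change):
        # change carries 'old' and 'new' values of the trait
        print('rebuilding for', change['new'])

p = Painter()
p.colors = 'Linux'  # fires _rebuild with change['new'] == 'Linux'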
- self.display_trap = DisplayTrap(hook=self.displayhook) - - @staticmethod - def get_path_links(p: Path): - """Gets path links including all symlinks - - Examples - -------- - In [1]: from IPython.core.interactiveshell import InteractiveShell - - In [2]: import sys, pathlib - - In [3]: paths = InteractiveShell.get_path_links(pathlib.Path(sys.executable)) - - In [4]: len(paths) == len(set(paths)) - Out[4]: True - - In [5]: bool(paths) - Out[5]: True - """ - paths = [p] - while p.is_symlink(): - new_path = Path(os.readlink(p)) - if not new_path.is_absolute(): - new_path = p.parent / new_path - p = new_path - paths.append(p) - return paths - - def init_virtualenv(self): - """Add the current virtualenv to sys.path so the user can import modules from it. - This isn't perfect: it doesn't use the Python interpreter with which the - virtualenv was built, and it ignores the --no-site-packages option. A - warning will appear suggesting the user installs IPython in the - virtualenv, but for many cases, it probably works well enough. - - Adapted from code snippets online. - - http://blog.ufsoft.org/2009/1/29/ipython-and-virtualenv - """ - if 'VIRTUAL_ENV' not in os.environ: - # Not in a virtualenv - return - elif os.environ["VIRTUAL_ENV"] == "": - warn("Virtual env path set to '', please check if this is intended.") - return - - p = Path(sys.executable) - p_venv = Path(os.environ["VIRTUAL_ENV"]) - - # fallback venv detection: - # stdlib venv may symlink sys.executable, so we can't use realpath. - # but others can symlink *to* the venv Python, so we can't just use sys.executable. - # So we just check every item in the symlink tree (generally <= 3) - paths = self.get_path_links(p) - - # In Cygwin paths like "c:\..." and '\cygdrive\c\...' are possible - if p_venv.parts[1] == "cygdrive": - drive_name = p_venv.parts[2] - p_venv = (drive_name + ":/") / Path(*p_venv.parts[3:]) - - if any(p_venv == p.parents[1] for p in paths): - # Our exe is inside or has access to the virtualenv, don't need to do anything. - return - - if sys.platform == "win32": - virtual_env = str(Path(os.environ["VIRTUAL_ENV"], "Lib", "site-packages")) - else: - virtual_env_path = Path( - os.environ["VIRTUAL_ENV"], "lib", "python{}.{}", "site-packages" - ) - p_ver = sys.version_info[:2] - - # Predict version from py[thon]-x.x in the $VIRTUAL_ENV - re_m = re.search(r"\bpy(?:thon)?([23])\.(\d+)\b", os.environ["VIRTUAL_ENV"]) - if re_m: - predicted_path = Path(str(virtual_env_path).format(*re_m.groups())) - if predicted_path.exists(): - p_ver = re_m.groups() - - virtual_env = str(virtual_env_path).format(*p_ver) - if self.warn_venv: - warn( - "Attempting to work in a virtualenv. If you encounter problems, " - "please install IPython inside the virtualenv." - ) - import site - sys.path.insert(0, virtual_env) - site.addsitedir(virtual_env) - - #------------------------------------------------------------------------- - # Things related to injections into the sys module - #------------------------------------------------------------------------- - - def save_sys_module_state(self): - """Save the state of hooks in the sys module. - - This has to be called after self.user_module is created. 
- """ - self._orig_sys_module_state = {'stdin': sys.stdin, - 'stdout': sys.stdout, - 'stderr': sys.stderr, - 'excepthook': sys.excepthook} - self._orig_sys_modules_main_name = self.user_module.__name__ - self._orig_sys_modules_main_mod = sys.modules.get(self.user_module.__name__) - - def restore_sys_module_state(self): - """Restore the state of the sys module.""" - try: - for k, v in self._orig_sys_module_state.items(): - setattr(sys, k, v) - except AttributeError: - pass - # Reset what what done in self.init_sys_modules - if self._orig_sys_modules_main_mod is not None: - sys.modules[self._orig_sys_modules_main_name] = self._orig_sys_modules_main_mod - - #------------------------------------------------------------------------- - # Things related to the banner - #------------------------------------------------------------------------- - - @property - def banner(self): - banner = self.banner1 - if self.profile and self.profile != 'default': - banner += '\nIPython profile: %s\n' % self.profile - if self.banner2: - banner += '\n' + self.banner2 - return banner - - def show_banner(self, banner=None): - if banner is None: - banner = self.banner - sys.stdout.write(banner) - - #------------------------------------------------------------------------- - # Things related to hooks - #------------------------------------------------------------------------- - - def init_hooks(self): - # hooks holds pointers used for user-side customizations - self.hooks = Struct() - - self.strdispatchers = {} - - # Set all default hooks, defined in the IPython.hooks module. - hooks = IPython.core.hooks - for hook_name in hooks.__all__: - # default hooks have priority 100, i.e. low; user hooks should have - # 0-100 priority - self.set_hook(hook_name, getattr(hooks, hook_name), 100) - - if self.display_page: - self.set_hook('show_in_pager', page.as_hook(page.display_page), 90) - - def set_hook(self, name, hook, priority=50, str_key=None, re_key=None): - """set_hook(name,hook) -> sets an internal IPython hook. - - IPython exposes some of its internal API as user-modifiable hooks. By - adding your function to one of these hooks, you can modify IPython's - behavior to call at runtime your own routines.""" - - # At some point in the future, this should validate the hook before it - # accepts it. Probably at least check that the hook takes the number - # of args it's supposed to. - - f = types.MethodType(hook,self) - - # check if the hook is for strdispatcher first - if str_key is not None: - sdp = self.strdispatchers.get(name, StrDispatch()) - sdp.add_s(str_key, f, priority ) - self.strdispatchers[name] = sdp - return - if re_key is not None: - sdp = self.strdispatchers.get(name, StrDispatch()) - sdp.add_re(re.compile(re_key), f, priority ) - self.strdispatchers[name] = sdp - return - - dp = getattr(self.hooks, name, None) - if name not in IPython.core.hooks.__all__: - print("Warning! Hook '%s' is not one of %s" % \ - (name, IPython.core.hooks.__all__ )) - - if name in IPython.core.hooks.deprecated: - alternative = IPython.core.hooks.deprecated[name] - raise ValueError( - "Hook {} has been deprecated since IPython 5.0. 
Use {} instead.".format( - name, alternative - ) - ) - - if not dp: - dp = IPython.core.hooks.CommandChainDispatcher() - - try: - dp.add(f,priority) - except AttributeError: - # it was not commandchain, plain old func - replace - dp = f - - setattr(self.hooks,name, dp) - - #------------------------------------------------------------------------- - # Things related to events - #------------------------------------------------------------------------- - - def init_events(self): - self.events = EventManager(self, available_events) - - self.events.register("pre_execute", self._clear_warning_registry) - - def register_post_execute(self, func): - """DEPRECATED: Use ip.events.register('post_run_cell', func) - - Register a function for calling after code execution. - """ - raise ValueError( - "ip.register_post_execute is deprecated since IPython 1.0, use " - "ip.events.register('post_run_cell', func) instead." - ) - - def _clear_warning_registry(self): - # clear the warning registry, so that different code blocks with - # overlapping line number ranges don't cause spurious suppression of - # warnings (see gh-6611 for details) - if "__warningregistry__" in self.user_global_ns: - del self.user_global_ns["__warningregistry__"] - - #------------------------------------------------------------------------- - # Things related to the "main" module - #------------------------------------------------------------------------- - - def new_main_mod(self, filename, modname): - """Return a new 'main' module object for user code execution. - - ``filename`` should be the path of the script which will be run in the - module. Requests with the same filename will get the same module, with - its namespace cleared. - - ``modname`` should be the module name - normally either '__main__' or - the basename of the file without the extension. - - When scripts are executed via %run, we must keep a reference to their - __main__ module around so that Python doesn't - clear it, rendering references to module globals useless. - - This method keeps said reference in a private dict, keyed by the - absolute path of the script. This way, for multiple executions of the - same script we only keep one copy of the namespace (the last one), - thus preventing memory leaks from old references while allowing the - objects from the last execution to be accessible. - """ - filename = os.path.abspath(filename) - try: - main_mod = self._main_mod_cache[filename] - except KeyError: - main_mod = self._main_mod_cache[filename] = types.ModuleType( - modname, - doc="Module created for script run in IPython") - else: - main_mod.__dict__.clear() - main_mod.__name__ = modname - - main_mod.__file__ = filename - # It seems pydoc (and perhaps others) needs any module instance to - # implement a __nonzero__ method - main_mod.__nonzero__ = lambda : True - - return main_mod - - def clear_main_mod_cache(self): - """Clear the cache of main modules. - - Mainly for use by utilities like %reset. 
- - Examples - -------- - In [15]: import IPython - - In [16]: m = _ip.new_main_mod(IPython.__file__, 'IPython') - - In [17]: len(_ip._main_mod_cache) > 0 - Out[17]: True - - In [18]: _ip.clear_main_mod_cache() - - In [19]: len(_ip._main_mod_cache) == 0 - Out[19]: True - """ - self._main_mod_cache.clear() - - #------------------------------------------------------------------------- - # Things related to debugging - #------------------------------------------------------------------------- - - def init_pdb(self): - # Set calling of pdb on exceptions - # self.call_pdb is a property - self.call_pdb = self.pdb - - def _get_call_pdb(self): - return self._call_pdb - - def _set_call_pdb(self,val): - - if val not in (0,1,False,True): - raise ValueError('new call_pdb value must be boolean') - - # store value in instance - self._call_pdb = val - - # notify the actual exception handlers - self.InteractiveTB.call_pdb = val - - call_pdb = property(_get_call_pdb,_set_call_pdb,None, - 'Control auto-activation of pdb at exceptions') - - def debugger(self,force=False): - """Call the pdb debugger. - - Keywords: - - - force(False): by default, this routine checks the instance call_pdb - flag and does not actually invoke the debugger if the flag is false. - The 'force' option forces the debugger to activate even if the flag - is false. - """ - - if not (force or self.call_pdb): - return - - if not hasattr(sys,'last_traceback'): - error('No traceback has been produced, nothing to debug.') - return - - self.InteractiveTB.debugger(force=True) - - #------------------------------------------------------------------------- - # Things related to IPython's various namespaces - #------------------------------------------------------------------------- - default_user_namespaces = True - - def init_create_namespaces(self, user_module=None, user_ns=None): - # Create the namespace where the user will operate. user_ns is - # normally the only one used, and it is passed to the exec calls as - # the locals argument. But we do carry a user_global_ns namespace - # given as the exec 'globals' argument, This is useful in embedding - # situations where the ipython shell opens in a context where the - # distinction between locals and globals is meaningful. For - # non-embedded contexts, it is just the same object as the user_ns dict. - - # FIXME. For some strange reason, __builtins__ is showing up at user - # level as a dict instead of a module. This is a manual fix, but I - # should really track down where the problem is coming from. Alex - # Schmolck reported this problem first. - - # A useful post by Alex Martelli on this topic: - # Re: inconsistent value from __builtins__ - # Von: Alex Martelli - # Datum: Freitag 01 Oktober 2004 04:45:34 nachmittags/abends - # Gruppen: comp.lang.python - - # Michael Hohn wrote: - # > >>> print type(builtin_check.get_global_binding('__builtins__')) - # > - # > >>> print type(__builtins__) - # > - # > Is this difference in return value intentional? - - # Well, it's documented that '__builtins__' can be either a dictionary - # or a module, and it's been that way for a long time. Whether it's - # intentional (or sensible), I don't know. In any case, the idea is - # that if you need to access the built-in namespace directly, you - # should start with "import __builtin__" (note, no 's') which will - # definitely give you a module. Yeah, it's somewhat confusing:-(. 
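# Illustrative sketch, not part of this module, verifying the comment
# above: ``__builtins__`` is a module in __main__ but a plain dict inside
# imported modules, whereas ``import builtins`` always yields the module.
import builtins

def builtin_len(seq):
    # The reliable spelling: go through the explicitly imported module.
    return builtins.len(seq)

print(type(__builtins__))      # module or dict, depending on context
print(builtin_len([1, 2, 3]))  # 3, regardless of context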
- - # These routines return a properly built module and dict as needed by - # the rest of the code, and can also be used by extension writers to - # generate properly initialized namespaces. - if (user_ns is not None) or (user_module is not None): - self.default_user_namespaces = False - self.user_module, self.user_ns = self.prepare_user_module(user_module, user_ns) - - # A record of hidden variables we have added to the user namespace, so - # we can list later only variables defined in actual interactive use. - self.user_ns_hidden = {} - - # Now that FakeModule produces a real module, we've run into a nasty - # problem: after script execution (via %run), the module where the user - # code ran is deleted. Now that this object is a true module (needed - # so doctest and other tools work correctly), the Python module - # teardown mechanism runs over it, and sets to None every variable - # present in that module. Top-level references to objects from the - # script survive, because the user_ns is updated with them. However, - # calling functions defined in the script that use other things from - # the script will fail, because the function's closure had references - # to the original objects, which are now all None. So we must protect - # these modules from deletion by keeping a cache. - # - # To avoid keeping stale modules around (we only need the one from the - # last run), we use a dict keyed with the full path to the script, so - # only the last version of the module is held in the cache. Note, - # however, that we must cache the module *namespace contents* (their - # __dict__). Because if we try to cache the actual modules, old ones - # (uncached) could be destroyed while still holding references (such as - # those held by GUI objects that tend to be long-lived)> - # - # The %reset command will flush this cache. See the cache_main_mod() - # and clear_main_mod_cache() methods for details on use. - - # This is the cache used for 'main' namespaces - self._main_mod_cache = {} - - # A table holding all the namespaces IPython deals with, so that - # introspection facilities can search easily. - self.ns_table = {'user_global':self.user_module.__dict__, - 'user_local':self.user_ns, - 'builtin':builtin_mod.__dict__ - } - - @property - def user_global_ns(self): - return self.user_module.__dict__ - - def prepare_user_module(self, user_module=None, user_ns=None): - """Prepare the module and namespace in which user code will be run. - - When IPython is started normally, both parameters are None: a new module - is created automatically, and its __dict__ used as the namespace. - - If only user_module is provided, its __dict__ is used as the namespace. - If only user_ns is provided, a dummy module is created, and user_ns - becomes the global namespace. If both are provided (as they may be - when embedding), user_ns is the local namespace, and user_module - provides the global namespace. - - Parameters - ---------- - user_module : module, optional - The current user module in which IPython is being run. If None, - a clean module will be created. - user_ns : dict, optional - A namespace in which to run interactive commands. - - Returns - ------- - A tuple of user_module and user_ns, each properly initialised. 
- """ - if user_module is None and user_ns is not None: - user_ns.setdefault("__name__", "__main__") - user_module = DummyMod() - user_module.__dict__ = user_ns - - if user_module is None: - user_module = types.ModuleType("__main__", - doc="Automatically created module for IPython interactive environment") - - # We must ensure that __builtin__ (without the final 's') is always - # available and pointing to the __builtin__ *module*. For more details: - # http://mail.python.org/pipermail/python-dev/2001-April/014068.html - user_module.__dict__.setdefault('__builtin__', builtin_mod) - user_module.__dict__.setdefault('__builtins__', builtin_mod) - - if user_ns is None: - user_ns = user_module.__dict__ - - return user_module, user_ns - - def init_sys_modules(self): - # We need to insert into sys.modules something that looks like a - # module but which accesses the IPython namespace, for shelve and - # pickle to work interactively. Normally they rely on getting - # everything out of __main__, but for embedding purposes each IPython - # instance has its own private namespace, so we can't go shoving - # everything into __main__. - - # note, however, that we should only do this for non-embedded - # ipythons, which really mimic the __main__.__dict__ with their own - # namespace. Embedded instances, on the other hand, should not do - # this because they need to manage the user local/global namespaces - # only, but they live within a 'normal' __main__ (meaning, they - # shouldn't overtake the execution environment of the script they're - # embedded in). - - # This is overridden in the InteractiveShellEmbed subclass to a no-op. - main_name = self.user_module.__name__ - sys.modules[main_name] = self.user_module - - def init_user_ns(self): - """Initialize all user-visible namespaces to their minimum defaults. - - Certain history lists are also initialized here, as they effectively - act as user namespaces. - - Notes - ----- - All data structures here are only filled in, they are NOT reset by this - method. If they were not empty before, data will simply be added to - them. - """ - # This function works in two parts: first we put a few things in - # user_ns, and we sync that contents into user_ns_hidden so that these - # initial variables aren't shown by %who. After the sync, we add the - # rest of what we *do* want the user to see with %who even on a new - # session (probably nothing, so they really only see their own stuff) - - # The user dict must *always* have a __builtin__ reference to the - # Python standard __builtin__ namespace, which must be imported. - # This is so that certain operations in prompt evaluation can be - # reliably executed with builtins. Note that we can NOT use - # __builtins__ (note the 's'), because that can either be a dict or a - # module, and can even mutate at runtime, depending on the context - # (Python makes no guarantees on it). In contrast, __builtin__ is - # always a module object, though it must be explicitly imported. - - # For more details: - # http://mail.python.org/pipermail/python-dev/2001-April/014068.html - ns = {} - - # make global variables for user access to the histories - ns['_ih'] = self.history_manager.input_hist_parsed - ns['_oh'] = self.history_manager.output_hist - ns['_dh'] = self.history_manager.dir_hist - - # user aliases to input and output histories. These shouldn't show up - # in %who, as they can have very large reprs. 
- ns['In'] = self.history_manager.input_hist_parsed - ns['Out'] = self.history_manager.output_hist - - # Store myself as the public api!!! - ns['get_ipython'] = self.get_ipython - - ns['exit'] = self.exiter - ns['quit'] = self.exiter - ns["open"] = _modified_open - - # Sync what we've added so far to user_ns_hidden so these aren't seen - # by %who - self.user_ns_hidden.update(ns) - - # Anything put into ns now would show up in %who. Think twice before - # putting anything here, as we really want %who to show the user their - # stuff, not our variables. - - # Finally, update the real user's namespace - self.user_ns.update(ns) - - @property - def all_ns_refs(self): - """Get a list of references to all the namespace dictionaries in which - IPython might store a user-created object. - - Note that this does not include the displayhook, which also caches - objects from the output.""" - return [self.user_ns, self.user_global_ns, self.user_ns_hidden] + \ - [m.__dict__ for m in self._main_mod_cache.values()] - - def reset(self, new_session=True, aggressive=False): - """Clear all internal namespaces, and attempt to release references to - user objects. - - If new_session is True, a new history session will be opened. - """ - # Clear histories - self.history_manager.reset(new_session) - # Reset counter used to index all histories - if new_session: - self.execution_count = 1 - - # Reset last execution result - self.last_execution_succeeded = True - self.last_execution_result = None - - # Flush cached output items - if self.displayhook.do_full_cache: - self.displayhook.flush() - - # The main execution namespaces must be cleared very carefully, - # skipping the deletion of the builtin-related keys, because doing so - # would cause errors in many object's __del__ methods. - if self.user_ns is not self.user_global_ns: - self.user_ns.clear() - ns = self.user_global_ns - drop_keys = set(ns.keys()) - drop_keys.discard('__builtin__') - drop_keys.discard('__builtins__') - drop_keys.discard('__name__') - for k in drop_keys: - del ns[k] - - self.user_ns_hidden.clear() - - # Restore the user namespaces to minimal usability - self.init_user_ns() - if aggressive and not hasattr(self, "_sys_modules_keys"): - print("Cannot restore sys.module, no snapshot") - elif aggressive: - print("culling sys module...") - current_keys = set(sys.modules.keys()) - for k in current_keys - self._sys_modules_keys: - if k.startswith("multiprocessing"): - continue - del sys.modules[k] - - # Restore the default and user aliases - self.alias_manager.clear_aliases() - self.alias_manager.init_aliases() - - # Now define aliases that only make sense on the terminal, because they - # need direct access to the console in a way that we can't emulate in - # GUI or web frontend - if os.name == 'posix': - for cmd in ('clear', 'more', 'less', 'man'): - if cmd not in self.magics_manager.magics['line']: - self.alias_manager.soft_define_alias(cmd, cmd) - - # Flush the private list of module references kept for script - # execution protection - self.clear_main_mod_cache() - - def del_var(self, varname, by_name=False): - """Delete a variable from the various namespaces, so that, as - far as possible, we're not keeping any hidden references to it. - - Parameters - ---------- - varname : str - The name of the variable to delete. - by_name : bool - If True, delete variables with the given name in each - namespace. If False (default), find the variable in the user - namespace, and delete references to it. 
- """ - if varname in ('__builtin__', '__builtins__'): - raise ValueError("Refusing to delete %s" % varname) - - ns_refs = self.all_ns_refs - - if by_name: # Delete by name - for ns in ns_refs: - try: - del ns[varname] - except KeyError: - pass - else: # Delete by object - try: - obj = self.user_ns[varname] - except KeyError as e: - raise NameError("name '%s' is not defined" % varname) from e - # Also check in output history - ns_refs.append(self.history_manager.output_hist) - for ns in ns_refs: - to_delete = [n for n, o in ns.items() if o is obj] - for name in to_delete: - del ns[name] - - # Ensure it is removed from the last execution result - if self.last_execution_result.result is obj: - self.last_execution_result = None - - # displayhook keeps extra references, but not in a dictionary - for name in ('_', '__', '___'): - if getattr(self.displayhook, name) is obj: - setattr(self.displayhook, name, None) - - def reset_selective(self, regex=None): - """Clear selective variables from internal namespaces based on a - specified regular expression. - - Parameters - ---------- - regex : string or compiled pattern, optional - A regular expression pattern that will be used in searching - variable names in the users namespaces. - """ - if regex is not None: - try: - m = re.compile(regex) - except TypeError as e: - raise TypeError('regex must be a string or compiled pattern') from e - # Search for keys in each namespace that match the given regex - # If a match is found, delete the key/value pair. - for ns in self.all_ns_refs: - for var in ns: - if m.search(var): - del ns[var] - - def push(self, variables, interactive=True): - """Inject a group of variables into the IPython user namespace. - - Parameters - ---------- - variables : dict, str or list/tuple of str - The variables to inject into the user's namespace. If a dict, a - simple update is done. If a str, the string is assumed to have - variable names separated by spaces. A list/tuple of str can also - be used to give the variable names. If just the variable names are - give (list/tuple/str) then the variable values looked up in the - callers frame. - interactive : bool - If True (default), the variables will be listed with the ``who`` - magic. - """ - vdict = None - - # We need a dict of name/value pairs to do namespace updates. - if isinstance(variables, dict): - vdict = variables - elif isinstance(variables, (str, list, tuple)): - if isinstance(variables, str): - vlist = variables.split() - else: - vlist = variables - vdict = {} - cf = sys._getframe(1) - for name in vlist: - try: - vdict[name] = eval(name, cf.f_globals, cf.f_locals) - except: - print('Could not get variable %s from %s' % - (name,cf.f_code.co_name)) - else: - raise ValueError('variables must be a dict/str/list/tuple') - - # Propagate variables to user namespace - self.user_ns.update(vdict) - - # And configure interactive visibility - user_ns_hidden = self.user_ns_hidden - if interactive: - for name in vdict: - user_ns_hidden.pop(name, None) - else: - user_ns_hidden.update(vdict) - - def drop_by_id(self, variables): - """Remove a dict of variables from the user namespace, if they are the - same as the values in the dictionary. - - This is intended for use by extensions: variables that they've added can - be taken back out if they are unloaded, without removing any that the - user has overwritten. - - Parameters - ---------- - variables : dict - A dictionary mapping object names (as strings) to the objects. 
- """ - for name, obj in variables.items(): - if name in self.user_ns and self.user_ns[name] is obj: - del self.user_ns[name] - self.user_ns_hidden.pop(name, None) - - #------------------------------------------------------------------------- - # Things related to object introspection - #------------------------------------------------------------------------- - @staticmethod - def _find_parts(oname: str) -> Tuple[bool, ListType[str]]: - """ - Given an object name, return a list of parts of this object name. - - Basically split on docs when using attribute access, - and extract the value when using square bracket. - - - For example foo.bar[3].baz[x] -> foo, bar, 3, baz, x - - - Returns - ------- - parts_ok: bool - wether we were properly able to parse parts. - parts: list of str - extracted parts - - - - """ - raw_parts = oname.split(".") - parts = [] - parts_ok = True - for p in raw_parts: - if p.endswith("]"): - var, *indices = p.split("[") - if not var.isidentifier(): - parts_ok = False - break - parts.append(var) - for ind in indices: - if ind[-1] != "]" and not is_integer_string(ind[:-1]): - parts_ok = False - break - parts.append(ind[:-1]) - continue - - if not p.isidentifier(): - parts_ok = False - parts.append(p) - - return parts_ok, parts - - def _ofind( - self, oname: str, namespaces: Optional[Sequence[Tuple[str, AnyType]]] = None - ) -> OInfo: - """Find an object in the available namespaces. - - - Returns - ------- - OInfo with fields: - - ismagic - - isalias - - found - - obj - - namespac - - parent - - Has special code to detect magic functions. - """ - oname = oname.strip() - parts_ok, parts = self._find_parts(oname) - - if ( - not oname.startswith(ESC_MAGIC) - and not oname.startswith(ESC_MAGIC2) - and not parts_ok - ): - return OInfo( - ismagic=False, - isalias=False, - found=False, - obj=None, - namespace=None, - parent=None, - ) - - if namespaces is None: - # Namespaces to search in: - # Put them in a list. The order is important so that we - # find things in the same order that Python finds them. - namespaces = [ ('Interactive', self.user_ns), - ('Interactive (global)', self.user_global_ns), - ('Python builtin', builtin_mod.__dict__), - ] - - ismagic = False - isalias = False - found = False - ospace = None - parent = None - obj = None - - - # Look for the given name by splitting it in parts. If the head is - # found, then we look for all the remaining parts as members, and only - # declare success if we can find them all. - oname_parts = parts - oname_head, oname_rest = oname_parts[0],oname_parts[1:] - for nsname,ns in namespaces: - try: - obj = ns[oname_head] - except KeyError: - continue - else: - for idx, part in enumerate(oname_rest): - try: - parent = obj - # The last part is looked up in a special way to avoid - # descriptor invocation as it may raise or have side - # effects. - if idx == len(oname_rest) - 1: - obj = self._getattr_property(obj, part) - else: - if is_integer_string(part): - obj = obj[int(part)] - else: - obj = getattr(obj, part) - except: - # Blanket except b/c some badly implemented objects - # allow __getattr__ to raise exceptions other than - # AttributeError, which then crashes IPython. 
- break - else: - # If we finish the for loop (no break), we got all members - found = True - ospace = nsname - break # namespace loop - - # Try to see if it's magic - if not found: - obj = None - if oname.startswith(ESC_MAGIC2): - oname = oname.lstrip(ESC_MAGIC2) - obj = self.find_cell_magic(oname) - elif oname.startswith(ESC_MAGIC): - oname = oname.lstrip(ESC_MAGIC) - obj = self.find_line_magic(oname) - else: - # search without prefix, so run? will find %run? - obj = self.find_line_magic(oname) - if obj is None: - obj = self.find_cell_magic(oname) - if obj is not None: - found = True - ospace = 'IPython internal' - ismagic = True - isalias = isinstance(obj, Alias) - - # Last try: special-case some literals like '', [], {}, etc: - if not found and oname_head in ["''",'""','[]','{}','()']: - obj = eval(oname_head) - found = True - ospace = 'Interactive' - - return OInfo( - obj=obj, - found=found, - parent=parent, - ismagic=ismagic, - isalias=isalias, - namespace=ospace, - ) - - @staticmethod - def _getattr_property(obj, attrname): - """Property-aware getattr to use in object finding. - - If attrname represents a property, return it unevaluated (in case it has - side effects or raises an error. - - """ - if not isinstance(obj, type): - try: - # `getattr(type(obj), attrname)` is not guaranteed to return - # `obj`, but does so for property: - # - # property.__get__(self, None, cls) -> self - # - # The universal alternative is to traverse the mro manually - # searching for attrname in class dicts. - if is_integer_string(attrname): - return obj[int(attrname)] - else: - attr = getattr(type(obj), attrname) - except AttributeError: - pass - else: - # This relies on the fact that data descriptors (with both - # __get__ & __set__ magic methods) take precedence over - # instance-level attributes: - # - # class A(object): - # @property - # def foobar(self): return 123 - # a = A() - # a.__dict__['foobar'] = 345 - # a.foobar # == 123 - # - # So, a property may be returned right away. - if isinstance(attr, property): - return attr - - # Nothing helped, fall back. - return getattr(obj, attrname) - - def _object_find(self, oname, namespaces=None) -> OInfo: - """Find an object and return a struct with info about it.""" - return self._ofind(oname, namespaces) - - def _inspect(self, meth, oname, namespaces=None, **kw): - """Generic interface to the inspector system. - - This function is meant to be called by pdef, pdoc & friends. - """ - info: OInfo = self._object_find(oname, namespaces) - if self.sphinxify_docstring: - if sphinxify is None: - raise ImportError("Module ``docrepr`` required but missing") - docformat = sphinxify(self.object_inspect(oname)) - else: - docformat = None - if info.found or hasattr(info.parent, oinspect.HOOK_NAME): - pmethod = getattr(self.inspector, meth) - # TODO: only apply format_screen to the plain/text repr of the mime - # bundle. - formatter = format_screen if info.ismagic else docformat - if meth == 'pdoc': - pmethod(info.obj, oname, formatter) - elif meth == 'pinfo': - pmethod( - info.obj, - oname, - formatter, - info, - enable_html_pager=self.enable_html_pager, - **kw, - ) - else: - pmethod(info.obj, oname) - else: - print('Object `%s` not found.' 
% oname) - return 'not found' # so callers can take other action - - def object_inspect(self, oname, detail_level=0): - """Get object info about oname""" - with self.builtin_trap: - info = self._object_find(oname) - if info.found: - return self.inspector.info(info.obj, oname, info=info, - detail_level=detail_level - ) - else: - return oinspect.object_info(name=oname, found=False) - - def object_inspect_text(self, oname, detail_level=0): - """Get object info as formatted text""" - return self.object_inspect_mime(oname, detail_level)['text/plain'] - - def object_inspect_mime(self, oname, detail_level=0, omit_sections=()): - """Get object info as a mimebundle of formatted representations. - - A mimebundle is a dictionary, keyed by mime-type. - It must always have the key `'text/plain'`. - """ - with self.builtin_trap: - info = self._object_find(oname) - if info.found: - docformat = ( - sphinxify(self.object_inspect(oname)) - if self.sphinxify_docstring - else None - ) - return self.inspector._get_info( - info.obj, - oname, - info=info, - detail_level=detail_level, - formatter=docformat, - omit_sections=omit_sections, - ) - else: - raise KeyError(oname) - - #------------------------------------------------------------------------- - # Things related to history management - #------------------------------------------------------------------------- - - def init_history(self): - """Sets up the command history, and starts regular autosaves.""" - self.history_manager = HistoryManager(shell=self, parent=self) - self.configurables.append(self.history_manager) - - #------------------------------------------------------------------------- - # Things related to exception handling and tracebacks (not debugging) - #------------------------------------------------------------------------- - - debugger_cls = InterruptiblePdb - - def init_traceback_handlers(self, custom_exceptions): - # Syntax error handler. - self.SyntaxTB = ultratb.SyntaxTB(color_scheme='NoColor', parent=self) - - # The interactive one is initialized with an offset, meaning we always - # want to remove the topmost item in the traceback, which is our own - # internal code. Valid modes: ['Plain','Context','Verbose','Minimal'] - self.InteractiveTB = ultratb.AutoFormattedTB(mode = 'Plain', - color_scheme='NoColor', - tb_offset = 1, - debugger_cls=self.debugger_cls, parent=self) - - # The instance will store a pointer to the system-wide exception hook, - # so that runtime code (such as magics) can access it. This is because - # during the read-eval loop, it may get temporarily overwritten. - self.sys_excepthook = sys.excepthook - - # and add any custom exception handlers the user may have specified - self.set_custom_exc(*custom_exceptions) - - # Set the exception mode - self.InteractiveTB.set_mode(mode=self.xmode) - - def set_custom_exc(self, exc_tuple, handler): - """set_custom_exc(exc_tuple, handler) - - Set a custom exception handler, which will be called if any of the - exceptions in exc_tuple occur in the mainloop (specifically, in the - run_code() method). - - Parameters - ---------- - exc_tuple : tuple of exception classes - A *tuple* of exception classes, for which to call the defined - handler. It is very important that you use a tuple, and NOT A - LIST here, because of the way Python's except statement works. 
If - you only want to trap a single exception, use a singleton tuple:: - - exc_tuple == (MyCustomException,) - - handler : callable - handler must have the following signature:: - - def my_handler(self, etype, value, tb, tb_offset=None): - ... - return structured_traceback - - Your handler must return a structured traceback (a list of strings), - or None. - - This will be made into an instance method (via types.MethodType) - of IPython itself, and it will be called if any of the exceptions - listed in the exc_tuple are caught. If the handler is None, an - internal basic one is used, which just prints basic info. - - To protect IPython from crashes, if your handler ever raises an - exception or returns an invalid result, it will be immediately - disabled. - - Notes - ----- - WARNING: by putting in your own exception handler into IPython's main - execution loop, you run a very good chance of nasty crashes. This - facility should only be used if you really know what you are doing. - """ - - if not isinstance(exc_tuple, tuple): - raise TypeError("The custom exceptions must be given as a tuple.") - - def dummy_handler(self, etype, value, tb, tb_offset=None): - print('*** Simple custom exception handler ***') - print('Exception type :', etype) - print('Exception value:', value) - print('Traceback :', tb) - - def validate_stb(stb): - """validate structured traceback return type - - return type of CustomTB *should* be a list of strings, but allow - single strings or None, which are harmless. - - This function will *always* return a list of strings, - and will raise a TypeError if stb is inappropriate. - """ - msg = "CustomTB must return list of strings, not %r" % stb - if stb is None: - return [] - elif isinstance(stb, str): - return [stb] - elif not isinstance(stb, list): - raise TypeError(msg) - # it's a list - for line in stb: - # check every element - if not isinstance(line, str): - raise TypeError(msg) - return stb - - if handler is None: - wrapped = dummy_handler - else: - def wrapped(self,etype,value,tb,tb_offset=None): - """wrap CustomTB handler, to protect IPython from user code - - This makes it harder (but not impossible) for custom exception - handlers to crash IPython. - """ - try: - stb = handler(self,etype,value,tb,tb_offset=tb_offset) - return validate_stb(stb) - except: - # clear custom handler immediately - self.set_custom_exc((), None) - print("Custom TB Handler failed, unregistering", file=sys.stderr) - # show the exception in handler first - stb = self.InteractiveTB.structured_traceback(*sys.exc_info()) - print(self.InteractiveTB.stb2text(stb)) - print("The original exception:") - stb = self.InteractiveTB.structured_traceback( - (etype,value,tb), tb_offset=tb_offset - ) - return stb - - self.CustomTB = types.MethodType(wrapped,self) - self.custom_exceptions = exc_tuple - - def excepthook(self, etype, value, tb): - """One more defense for GUI apps that call sys.excepthook. - - GUI frameworks like wxPython trap exceptions and call - sys.excepthook themselves. I guess this is a feature that - enables them to keep running after exceptions that would - otherwise kill their mainloop. This is a bother for IPython - which expects to catch all of the program exceptions with a try: - except: statement. - - Normally, IPython sets sys.excepthook to a CrashHandler instance, so if - any app directly invokes sys.excepthook, it will look to the user like - IPython crashed. 
In order to work around this, we can disable the - CrashHandler and replace it with this excepthook instead, which prints a - regular traceback using our InteractiveTB. In this fashion, apps which - call sys.excepthook will generate a regular-looking exception from - IPython, and the CrashHandler will only be triggered by real IPython - crashes. - - This hook should be used sparingly, only in places which are not likely - to be true IPython errors. - """ - self.showtraceback((etype, value, tb), tb_offset=0) - - def _get_exc_info(self, exc_tuple=None): - """get exc_info from a given tuple, sys.exc_info() or sys.last_type etc. - - Ensures sys.last_type,value,traceback hold the exc_info we found, - from whichever source. - - raises ValueError if none of these contain any information - """ - if exc_tuple is None: - etype, value, tb = sys.exc_info() - else: - etype, value, tb = exc_tuple - - if etype is None: - if hasattr(sys, 'last_type'): - etype, value, tb = sys.last_type, sys.last_value, \ - sys.last_traceback - - if etype is None: - raise ValueError("No exception to find") - - # Now store the exception info in sys.last_type etc. - # WARNING: these variables are somewhat deprecated and not - # necessarily safe to use in a threaded environment, but tools - # like pdb depend on their existence, so let's set them. If we - # find problems in the field, we'll need to revisit their use. - sys.last_type = etype - sys.last_value = value - sys.last_traceback = tb - - return etype, value, tb - - def show_usage_error(self, exc): - """Show a short message for UsageErrors - - These are special exceptions that shouldn't show a traceback. - """ - print("UsageError: %s" % exc, file=sys.stderr) - - def get_exception_only(self, exc_tuple=None): - """ - Return as a string (ending with a newline) the exception that - just occurred, without any traceback. - """ - etype, value, tb = self._get_exc_info(exc_tuple) - msg = traceback.format_exception_only(etype, value) - return ''.join(msg) - - def showtraceback(self, exc_tuple=None, filename=None, tb_offset=None, - exception_only=False, running_compiled_code=False): - """Display the exception that just occurred. - - If nothing is known about the exception, this is the method which - should be used throughout the code for presenting user tracebacks, - rather than directly invoking the InteractiveTB object. - - A specific showsyntaxerror() also exists, but this method can take - care of calling it if needed, so unless you are explicitly catching a - SyntaxError exception, don't try to analyze the stack manually and - simply call this method.""" - - try: - try: - etype, value, tb = self._get_exc_info(exc_tuple) - except ValueError: - print('No traceback available to show.', file=sys.stderr) - return - - if issubclass(etype, SyntaxError): - # Though this won't be called by syntax errors in the input - # line, there may be SyntaxError cases with imported code. - self.showsyntaxerror(filename, running_compiled_code) - elif etype is UsageError: - self.show_usage_error(value) - else: - if exception_only: - stb = ['An exception has occurred, use %tb to see ' - 'the full traceback.\n'] - stb.extend(self.InteractiveTB.get_exception_only(etype, - value)) - else: - try: - # Exception classes can customise their traceback - we - # use this in IPython.parallel for exceptions occurring - # in the engines. This should return a list of strings. 
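-                        # (Illustrative aside, not original code: a class can
-                        # opt in to this hook by defining, e.g.::
-                        #
-                        #     class MyError(Exception):
-                        #         def _render_traceback_(self):
-                        #             return ["MyError: custom traceback"]
-                        #
-                        # The returned list of strings is shown in place of
-                        # the default traceback; `MyError` is hypothetical.)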
-                        if hasattr(value, "_render_traceback_"):
-                            stb = value._render_traceback_()
-                        else:
-                            stb = self.InteractiveTB.structured_traceback(
-                                etype, value, tb, tb_offset=tb_offset
-                            )
-
-                    except Exception:
-                        print(
-                            "Unexpected exception formatting exception. Falling back to standard exception"
-                        )
-                        traceback.print_exc()
-                        return None
-
-                self._showtraceback(etype, value, stb)
-                if self.call_pdb:
-                    # drop into debugger
-                    self.debugger(force=True)
-                return
-
-            # Actually show the traceback
-            self._showtraceback(etype, value, stb)
-
-        except KeyboardInterrupt:
-            print('\n' + self.get_exception_only(), file=sys.stderr)
-
-    def _showtraceback(self, etype, evalue, stb: str):
-        """Actually show a traceback.
-
-        Subclasses may override this method to put the traceback on a different
-        place, like a side channel.
-        """
-        val = self.InteractiveTB.stb2text(stb)
-        try:
-            print(val)
-        except UnicodeEncodeError:
-            print(val.encode("utf-8", "backslashreplace").decode())
-
-    def showsyntaxerror(self, filename=None, running_compiled_code=False):
-        """Display the syntax error that just occurred.
-
-        This doesn't display a stack trace because there isn't one.
-
-        If a filename is given, it is stuffed in the exception instead
-        of what was there before (because Python's parser always uses
-        "<string>" when reading from a string).
-
-        If the syntax error occurred when running compiled code (i.e.
-        running_compiled_code=True), a longer stack trace will be displayed.
-        """
-        etype, value, last_traceback = self._get_exc_info()
-
-        if filename and issubclass(etype, SyntaxError):
-            try:
-                value.filename = filename
-            except:
-                # Not the format we expect; leave it alone
-                pass
-
-        # If the error occurred when executing compiled code, we should provide full stacktrace.
-        elist = traceback.extract_tb(last_traceback) if running_compiled_code else []
-        stb = self.SyntaxTB.structured_traceback(etype, value, elist)
-        self._showtraceback(etype, value, stb)
-
-    # This is overridden in TerminalInteractiveShell to show a message about
-    # the %paste magic.
-    def showindentationerror(self):
-        """Called by _run_cell when there's an IndentationError in code entered
-        at the prompt.
-
-        This is overridden in TerminalInteractiveShell to show a message about
-        the %paste magic."""
-        self.showsyntaxerror()
-
-    @skip_doctest
-    def set_next_input(self, s, replace=False):
-        """ Sets the 'default' input string for the next command line.
-
-        Example::
-
-            In [1]: _ip.set_next_input("Hello World")
-            In [2]: Hello World_  # cursor is here
-        """
-        self.rl_next_input = s
-
-    def _indent_current_str(self):
-        """return the current level of indentation as a string"""
-        return self.input_splitter.get_indent_spaces() * ' '
-
-    #-------------------------------------------------------------------------
-    # Things related to text completion
-    #-------------------------------------------------------------------------
-
-    def init_completer(self):
-        """Initialize the completion machinery.
-
-        This creates completion machinery that can be used by client code,
-        either interactively in-process (typically triggered by the readline
-        library), programmatically (such as in test suites) or out-of-process
-        (typically over the network by remote frontends).
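-
-        A minimal programmatic sketch (assuming a running shell ``ip``)::
-
-            text, matches = ip.complete('pri')
-            # 'print' will typically be among the matches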
- """ - from IPython.core.completer import IPCompleter - from IPython.core.completerlib import ( - cd_completer, - magic_run_completer, - module_completer, - reset_completer, - ) - - self.Completer = IPCompleter(shell=self, - namespace=self.user_ns, - global_namespace=self.user_global_ns, - parent=self, - ) - self.configurables.append(self.Completer) - - # Add custom completers to the basic ones built into IPCompleter - sdisp = self.strdispatchers.get('complete_command', StrDispatch()) - self.strdispatchers['complete_command'] = sdisp - self.Completer.custom_completers = sdisp - - self.set_hook('complete_command', module_completer, str_key = 'import') - self.set_hook('complete_command', module_completer, str_key = 'from') - self.set_hook('complete_command', module_completer, str_key = '%aimport') - self.set_hook('complete_command', magic_run_completer, str_key = '%run') - self.set_hook('complete_command', cd_completer, str_key = '%cd') - self.set_hook('complete_command', reset_completer, str_key = '%reset') - - @skip_doctest - def complete(self, text, line=None, cursor_pos=None): - """Return the completed text and a list of completions. - - Parameters - ---------- - text : string - A string of text to be completed on. It can be given as empty and - instead a line/position pair are given. In this case, the - completer itself will split the line like readline does. - line : string, optional - The complete line that text is part of. - cursor_pos : int, optional - The position of the cursor on the input line. - - Returns - ------- - text : string - The actual text that was completed. - matches : list - A sorted list with all possible completions. - - Notes - ----- - The optional arguments allow the completion to take more context into - account, and are part of the low-level completion API. - - This is a wrapper around the completion mechanism, similar to what - readline does at the command line when the TAB key is hit. By - exposing it as a method, it can be used by other non-readline - environments (such as GUIs) for text completion. - - Examples - -------- - In [1]: x = 'hello' - - In [2]: _ip.complete('x.l') - Out[2]: ('x.l', ['x.ljust', 'x.lower', 'x.lstrip']) - """ - - # Inject names into __builtin__ so we can complete on the added names. - with self.builtin_trap: - return self.Completer.complete(text, line, cursor_pos) - - def set_custom_completer(self, completer, pos=0) -> None: - """Adds a new custom completer function. - - The position argument (defaults to 0) is the index in the completers - list where you want the completer to be inserted. - - `completer` should have the following signature:: - - def completion(self: Completer, text: string) -> List[str]: - raise NotImplementedError - - It will be bound to the current Completer instance and pass some text - and return a list with current completions to suggest to the user. 
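-
-        A minimal sketch following the signature above (``fruit_completer``
-        is hypothetical)::
-
-            def fruit_completer(self, text):
-                return [w for w in ('apple', 'apricot', 'avocado')
-                        if w.startswith(text)]
-
-            get_ipython().set_custom_completer(fruit_completer)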
- """ - - newcomp = types.MethodType(completer, self.Completer) - self.Completer.custom_matchers.insert(pos,newcomp) - - def set_completer_frame(self, frame=None): - """Set the frame of the completer.""" - if frame: - self.Completer.namespace = frame.f_locals - self.Completer.global_namespace = frame.f_globals - else: - self.Completer.namespace = self.user_ns - self.Completer.global_namespace = self.user_global_ns - - #------------------------------------------------------------------------- - # Things related to magics - #------------------------------------------------------------------------- - - def init_magics(self): - from IPython.core import magics as m - self.magics_manager = magic.MagicsManager(shell=self, - parent=self, - user_magics=m.UserMagics(self)) - self.configurables.append(self.magics_manager) - - # Expose as public API from the magics manager - self.register_magics = self.magics_manager.register - - self.register_magics(m.AutoMagics, m.BasicMagics, m.CodeMagics, - m.ConfigMagics, m.DisplayMagics, m.ExecutionMagics, - m.ExtensionMagics, m.HistoryMagics, m.LoggingMagics, - m.NamespaceMagics, m.OSMagics, m.PackagingMagics, - m.PylabMagics, m.ScriptMagics, - ) - self.register_magics(m.AsyncMagics) - - # Register Magic Aliases - mman = self.magics_manager - # FIXME: magic aliases should be defined by the Magics classes - # or in MagicsManager, not here - mman.register_alias('ed', 'edit') - mman.register_alias('hist', 'history') - mman.register_alias('rep', 'recall') - mman.register_alias('SVG', 'svg', 'cell') - mman.register_alias('HTML', 'html', 'cell') - mman.register_alias('file', 'writefile', 'cell') - - # FIXME: Move the color initialization to the DisplayHook, which - # should be split into a prompt manager and displayhook. We probably - # even need a centralize colors management object. - self.run_line_magic('colors', self.colors) - - # Defined here so that it's included in the documentation - @functools.wraps(magic.MagicsManager.register_function) - def register_magic_function(self, func, magic_kind='line', magic_name=None): - self.magics_manager.register_function( - func, magic_kind=magic_kind, magic_name=magic_name - ) - - def _find_with_lazy_load(self, /, type_, magic_name: str): - """ - Try to find a magic potentially lazy-loading it. - - Parameters - ---------- - - type_: "line"|"cell" - the type of magics we are trying to find/lazy load. - magic_name: str - The name of the magic we are trying to find/lazy load - - - Note that this may have any side effects - """ - finder = {"line": self.find_line_magic, "cell": self.find_cell_magic}[type_] - fn = finder(magic_name) - if fn is not None: - return fn - lazy = self.magics_manager.lazy_magics.get(magic_name) - if lazy is None: - return None - - self.run_line_magic("load_ext", lazy) - res = finder(magic_name) - return res - - def run_line_magic(self, magic_name: str, line, _stack_depth=1): - """Execute the given line magic. - - Parameters - ---------- - magic_name : str - Name of the desired magic function, without '%' prefix. - line : str - The rest of the input line as a single string. - _stack_depth : int - If run_line_magic() is called from magic() then _stack_depth=2. 
-            This is added to ensure backward compatibility for use of 'get_ipython().magic()'
-        """
-        fn = self._find_with_lazy_load("line", magic_name)
-        if fn is None:
-            lazy = self.magics_manager.lazy_magics.get(magic_name)
-            if lazy:
-                self.run_line_magic("load_ext", lazy)
-                fn = self.find_line_magic(magic_name)
-        if fn is None:
-            cm = self.find_cell_magic(magic_name)
-            etpl = "Line magic function `%%%s` not found%s."
-            extra = '' if cm is None else (' (But cell magic `%%%%%s` exists, '
-                                           'did you mean that instead?)' % magic_name)
-            raise UsageError(etpl % (magic_name, extra))
-        else:
-            # Note: this is the distance in the stack to the user's frame.
-            # This will need to be updated if the internal calling logic gets
-            # refactored, or else we'll be expanding the wrong variables.
-
-            # Determine stack_depth depending on where run_line_magic() has been called
-            stack_depth = _stack_depth
-            if getattr(fn, magic.MAGIC_NO_VAR_EXPAND_ATTR, False):
-                # magic has opted out of var_expand
-                magic_arg_s = line
-            else:
-                magic_arg_s = self.var_expand(line, stack_depth)
-            # Put magic args in a list so we can call with f(*a) syntax
-            args = [magic_arg_s]
-            kwargs = {}
-            # Grab local namespace if we need it:
-            if getattr(fn, "needs_local_scope", False):
-                kwargs['local_ns'] = self.get_local_scope(stack_depth)
-            with self.builtin_trap:
-                result = fn(*args, **kwargs)
-
-            # The code below prevents the output from being displayed
-            # when using magics with the decorator @output_can_be_silenced
-            # when the last Python token in the expression is a ';'.
-            if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):
-                if DisplayHook.semicolon_at_end_of_expression(magic_arg_s):
-                    return None
-
-            return result
-
-    def get_local_scope(self, stack_depth):
-        """Get local scope at given stack depth.
-
-        Parameters
-        ----------
-        stack_depth : int
-            Depth relative to calling frame
-        """
-        return sys._getframe(stack_depth + 1).f_locals
-
-    def run_cell_magic(self, magic_name, line, cell):
-        """Execute the given cell magic.
-
-        Parameters
-        ----------
-        magic_name : str
-            Name of the desired magic function, without '%' prefix.
-        line : str
-            The rest of the first input line as a single string.
-        cell : str
-            The body of the cell as a (possibly multiline) string.
-        """
-        fn = self._find_with_lazy_load("cell", magic_name)
-        if fn is None:
-            lm = self.find_line_magic(magic_name)
-            etpl = "Cell magic `%%{0}` not found{1}."
-            extra = '' if lm is None else (' (But line magic `%{0}` exists, '
-                                           'did you mean that instead?)'.format(magic_name))
-            raise UsageError(etpl.format(magic_name, extra))
-        elif cell == '':
-            message = '%%{0} is a cell magic, but the cell body is empty.'.format(magic_name)
-            if self.find_line_magic(magic_name) is not None:
-                message += ' Did you mean the line magic %{0} (single %)?'.format(magic_name)
-            raise UsageError(message)
-        else:
-            # Note: this is the distance in the stack to the user's frame.
-            # This will need to be updated if the internal calling logic gets
-            # refactored, or else we'll be expanding the wrong variables.
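-            # (Illustrative aside, not original code: for a cell such as
-            #     %%writefile $fname
-            # the var_expand() call below substitutes `fname` from the
-            # caller's frame before the magic body runs.)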
-            stack_depth = 2
-            if getattr(fn, magic.MAGIC_NO_VAR_EXPAND_ATTR, False):
-                # magic has opted out of var_expand
-                magic_arg_s = line
-            else:
-                magic_arg_s = self.var_expand(line, stack_depth)
-            kwargs = {}
-            if getattr(fn, "needs_local_scope", False):
-                kwargs['local_ns'] = self.user_ns
-
-            with self.builtin_trap:
-                args = (magic_arg_s, cell)
-                result = fn(*args, **kwargs)
-
-            # The code below prevents the output from being displayed
-            # when using magics with the decorator @output_can_be_silenced
-            # when the last Python token in the expression is a ';'.
-            if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):
-                if DisplayHook.semicolon_at_end_of_expression(cell):
-                    return None
-
-            return result
-
-    def find_line_magic(self, magic_name):
-        """Find and return a line magic by name.
-
-        Returns None if the magic isn't found."""
-        return self.magics_manager.magics['line'].get(magic_name)
-
-    def find_cell_magic(self, magic_name):
-        """Find and return a cell magic by name.
-
-        Returns None if the magic isn't found."""
-        return self.magics_manager.magics['cell'].get(magic_name)
-
-    def find_magic(self, magic_name, magic_kind='line'):
-        """Find and return a magic of the given type by name.
-
-        Returns None if the magic isn't found."""
-        return self.magics_manager.magics[magic_kind].get(magic_name)
-
-    def magic(self, arg_s):
-        """
-        DEPRECATED
-
-        Deprecated since IPython 0.13 (warning added in
-        8.1), use run_line_magic(magic_name, parameter_s).
-
-        Call a magic function by name.
-
-        Input: a string containing the name of the magic function to call and
-        any additional arguments to be passed to the magic.
-
-        magic('name -opt foo bar') is equivalent to typing at the ipython
-        prompt:
-
-        In[1]: %name -opt foo bar
-
-        To call a magic without arguments, simply use magic('name').
-
-        This provides a proper Python function to call IPython's magics in any
-        valid Python code you can type at the interpreter, including loops and
-        compound statements.
-        """
-        warnings.warn(
-            "`magic(...)` is deprecated since IPython 0.13 (warning added in "
-            "8.1), use run_line_magic(magic_name, parameter_s).",
-            DeprecationWarning,
-            stacklevel=2,
-        )
-        # TODO: should we issue a loud deprecation warning here?
-        magic_name, _, magic_arg_s = arg_s.partition(' ')
-        magic_name = magic_name.lstrip(prefilter.ESC_MAGIC)
-        return self.run_line_magic(magic_name, magic_arg_s, _stack_depth=2)
-
-    #-------------------------------------------------------------------------
-    # Things related to macros
-    #-------------------------------------------------------------------------
-
-    def define_macro(self, name, themacro):
-        """Define a new macro
-
-        Parameters
-        ----------
-        name : str
-            The name of the macro.
-        themacro : str or Macro
-            The action to do upon invoking the macro. If a string, a new
-            Macro object is created by passing the string to it.
-        """
-
-        from IPython.core import macro
-
-        if isinstance(themacro, str):
-            themacro = macro.Macro(themacro)
-        if not isinstance(themacro, macro.Macro):
-            raise ValueError('A macro must be a string or a Macro instance.')
-        self.user_ns[name] = themacro
-
-    #-------------------------------------------------------------------------
-    # Things related to the running of system commands
-    #-------------------------------------------------------------------------
-
-    def system_piped(self, cmd):
-        """Call the given cmd in a subprocess, piping stdout/err
-
-        Parameters
-        ----------
-        cmd : str
-            Command to execute (cannot end in '&', as background processes are
-            not supported).
-            Should not be a command that expects input
-            other than simple text.
-        """
-        if cmd.rstrip().endswith('&'):
-            # this is *far* from a rigorous test
-            # We do not support backgrounding processes because we either use
-            # pexpect or pipes to read from. Users can always just call
-            # os.system() or use ip.system=ip.system_raw
-            # if they really want a background process.
-            raise OSError("Background processes not supported.")
-
-        # we explicitly do NOT return the subprocess status code, because
-        # a non-None value would trigger :func:`sys.displayhook` calls.
-        # Instead, we store the exit_code in user_ns.
-        self.user_ns['_exit_code'] = system(self.var_expand(cmd, depth=1))
-
-    def system_raw(self, cmd):
-        """Call the given cmd in a subprocess using os.system on Windows or
-        subprocess.call using the system shell on other platforms.
-
-        Parameters
-        ----------
-        cmd : str
-            Command to execute.
-        """
-        cmd = self.var_expand(cmd, depth=1)
-        # warn if there is an IPython magic alternative.
-        main_cmd = cmd.split()[0]
-        has_magic_alternatives = ("pip", "conda", "cd")
-
-        if main_cmd in has_magic_alternatives:
-            warnings.warn(
-                (
-                    "You executed the system command !{0} which may not work "
-                    "as expected. Try the IPython magic %{0} instead."
-                ).format(main_cmd)
-            )
-
-        # protect os.system from UNC paths on Windows, which it can't handle:
-        if sys.platform == 'win32':
-            from IPython.utils._process_win32 import AvoidUNCPath
-            with AvoidUNCPath() as path:
-                if path is not None:
-                    cmd = '"pushd %s &&"%s' % (path, cmd)
-                try:
-                    ec = os.system(cmd)
-                except KeyboardInterrupt:
-                    print('\n' + self.get_exception_only(), file=sys.stderr)
-                    ec = -2
-        else:
-            # For posix the result of the subprocess.call() below is an exit
-            # code, which by convention is zero for success, positive for
-            # program failure. Exit codes above 128 are reserved for signals,
-            # and the formula for converting a signal to an exit code is usually
-            # signal_number+128. To more easily differentiate between exit
-            # codes and signals, ipython uses negative numbers. For instance
-            # since control-c is signal 2 but exit code 130, ipython's
-            # _exit_code variable will read -2. Note that some shells like
-            # csh and fish don't follow sh/bash conventions for exit codes.
-            executable = os.environ.get('SHELL', None)
-            try:
-                # Use env shell instead of default /bin/sh
-                ec = subprocess.call(cmd, shell=True, executable=executable)
-            except KeyboardInterrupt:
-                # intercept control-C; a long traceback is not useful here
-                print('\n' + self.get_exception_only(), file=sys.stderr)
-                ec = 130
-            if ec > 128:
-                ec = -(ec - 128)
-
-        # We explicitly do NOT return the subprocess status code, because
-        # a non-None value would trigger :func:`sys.displayhook` calls.
-        # Instead, we store the exit_code in user_ns. Note the semantics
-        # of _exit_code: for control-c, _exit_code == -signal.SIGINT,
-        # but raising SystemExit(_exit_code) will give status 254!
-        self.user_ns['_exit_code'] = ec
-
-    # use piped system by default, because it is better behaved
-    system = system_piped
-
-    def getoutput(self, cmd, split=True, depth=0):
-        """Get output (possibly including stderr) from a subprocess.
-
-        Parameters
-        ----------
-        cmd : str
-            Command to execute (cannot end in '&', as background processes are
-            not supported).
-        split : bool, optional
-            If True, split the output into an IPython SList. Otherwise, an
-            IPython LSString is returned.
These are objects similar to normal - lists and strings, with a few convenience attributes for easier - manipulation of line-based output. You can use '?' on them for - details. - depth : int, optional - How many frames above the caller are the local variables which should - be expanded in the command string? The default (0) assumes that the - expansion variables are in the stack frame calling this function. - """ - if cmd.rstrip().endswith('&'): - # this is *far* from a rigorous test - raise OSError("Background processes not supported.") - out = getoutput(self.var_expand(cmd, depth=depth+1)) - if split: - out = SList(out.splitlines()) - else: - out = LSString(out) - return out - - #------------------------------------------------------------------------- - # Things related to aliases - #------------------------------------------------------------------------- - - def init_alias(self): - self.alias_manager = AliasManager(shell=self, parent=self) - self.configurables.append(self.alias_manager) - - #------------------------------------------------------------------------- - # Things related to extensions - #------------------------------------------------------------------------- - - def init_extension_manager(self): - self.extension_manager = ExtensionManager(shell=self, parent=self) - self.configurables.append(self.extension_manager) - - #------------------------------------------------------------------------- - # Things related to payloads - #------------------------------------------------------------------------- - - def init_payload(self): - self.payload_manager = PayloadManager(parent=self) - self.configurables.append(self.payload_manager) - - #------------------------------------------------------------------------- - # Things related to the prefilter - #------------------------------------------------------------------------- - - def init_prefilter(self): - self.prefilter_manager = PrefilterManager(shell=self, parent=self) - self.configurables.append(self.prefilter_manager) - # Ultimately this will be refactored in the new interpreter code, but - # for now, we should expose the main prefilter method (there's legacy - # code out there that may rely on this). - self.prefilter = self.prefilter_manager.prefilter_lines - - def auto_rewrite_input(self, cmd): - """Print to the screen the rewritten form of the user's command. - - This shows visual feedback by rewriting input lines that cause - automatic calling to kick in, like:: - - /f x - - into:: - - ------> f(x) - - after the user's input prompt. This helps the user understand that the - input line was transformed automatically by IPython. 
- """ - if not self.show_rewritten_input: - return - - # This is overridden in TerminalInteractiveShell to use fancy prompts - print("------> " + cmd) - - #------------------------------------------------------------------------- - # Things related to extracting values/expressions from kernel and user_ns - #------------------------------------------------------------------------- - - def _user_obj_error(self): - """return simple exception dict - - for use in user_expressions - """ - - etype, evalue, tb = self._get_exc_info() - stb = self.InteractiveTB.get_exception_only(etype, evalue) - - exc_info = { - "status": "error", - "traceback": stb, - "ename": etype.__name__, - "evalue": py3compat.safe_unicode(evalue), - } - - return exc_info - - def _format_user_obj(self, obj): - """format a user object to display dict - - for use in user_expressions - """ - - data, md = self.display_formatter.format(obj) - value = { - 'status' : 'ok', - 'data' : data, - 'metadata' : md, - } - return value - - def user_expressions(self, expressions): - """Evaluate a dict of expressions in the user's namespace. - - Parameters - ---------- - expressions : dict - A dict with string keys and string values. The expression values - should be valid Python expressions, each of which will be evaluated - in the user namespace. - - Returns - ------- - A dict, keyed like the input expressions dict, with the rich mime-typed - display_data of each value. - """ - out = {} - user_ns = self.user_ns - global_ns = self.user_global_ns - - for key, expr in expressions.items(): - try: - value = self._format_user_obj(eval(expr, global_ns, user_ns)) - except: - value = self._user_obj_error() - out[key] = value - return out - - #------------------------------------------------------------------------- - # Things related to the running of code - #------------------------------------------------------------------------- - - def ex(self, cmd): - """Execute a normal python statement in user namespace.""" - with self.builtin_trap: - exec(cmd, self.user_global_ns, self.user_ns) - - def ev(self, expr): - """Evaluate python expression expr in user namespace. - - Returns the result of evaluation - """ - with self.builtin_trap: - return eval(expr, self.user_global_ns, self.user_ns) - - def safe_execfile(self, fname, *where, exit_ignore=False, raise_exceptions=False, shell_futures=False): - """A safe version of the builtin execfile(). - - This version will never throw an exception, but instead print - helpful error messages to the screen. This only works on pure - Python files with the .py extension. - - Parameters - ---------- - fname : string - The name of the file to be executed. - *where : tuple - One or two namespaces, passed to execfile() as (globals,locals). - If only one is given, it is passed as both. - exit_ignore : bool (False) - If True, then silence SystemExit for non-zero status (it is always - silenced for zero status, as it is so common). - raise_exceptions : bool (False) - If True raise exceptions everywhere. Meant for testing. - shell_futures : bool (False) - If True, the code will share future statements with the interactive - shell. It will both be affected by previous __future__ imports, and - any __future__ imports in the code will affect the shell. If False, - __future__ imports are not shared in either direction. - - """ - fname = Path(fname).expanduser().resolve() - - # Make sure we can open the file - try: - with fname.open("rb"): - pass - except: - warn('Could not open file <%s> for safe execution.' 
% fname) - return - - # Find things also in current directory. This is needed to mimic the - # behavior of running a script from the system command line, where - # Python inserts the script's directory into sys.path - dname = str(fname.parent) - - with prepended_to_syspath(dname), self.builtin_trap: - try: - glob, loc = (where + (None, ))[:2] - py3compat.execfile( - fname, glob, loc, - self.compile if shell_futures else None) - except SystemExit as status: - # If the call was made with 0 or None exit status (sys.exit(0) - # or sys.exit() ), don't bother showing a traceback, as both of - # these are considered normal by the OS: - # > python -c'import sys;sys.exit(0)'; echo $? - # 0 - # > python -c'import sys;sys.exit()'; echo $? - # 0 - # For other exit status, we show the exception unless - # explicitly silenced, but only in short form. - if status.code: - if raise_exceptions: - raise - if not exit_ignore: - self.showtraceback(exception_only=True) - except: - if raise_exceptions: - raise - # tb offset is 2 because we wrap execfile - self.showtraceback(tb_offset=2) - - def safe_execfile_ipy(self, fname, shell_futures=False, raise_exceptions=False): - """Like safe_execfile, but for .ipy or .ipynb files with IPython syntax. - - Parameters - ---------- - fname : str - The name of the file to execute. The filename must have a - .ipy or .ipynb extension. - shell_futures : bool (False) - If True, the code will share future statements with the interactive - shell. It will both be affected by previous __future__ imports, and - any __future__ imports in the code will affect the shell. If False, - __future__ imports are not shared in either direction. - raise_exceptions : bool (False) - If True raise exceptions everywhere. Meant for testing. - """ - fname = Path(fname).expanduser().resolve() - - # Make sure we can open the file - try: - with fname.open("rb"): - pass - except: - warn('Could not open file <%s> for safe execution.' % fname) - return - - # Find things also in current directory. This is needed to mimic the - # behavior of running a script from the system command line, where - # Python inserts the script's directory into sys.path - dname = str(fname.parent) - - def get_cells(): - """generator for sequence of code blocks to run""" - if fname.suffix == ".ipynb": - from nbformat import read - nb = read(fname, as_version=4) - if not nb.cells: - return - for cell in nb.cells: - if cell.cell_type == 'code': - yield cell.source - else: - yield fname.read_text(encoding="utf-8") - - with prepended_to_syspath(dname): - try: - for cell in get_cells(): - result = self.run_cell(cell, silent=True, shell_futures=shell_futures) - if raise_exceptions: - result.raise_error() - elif not result.success: - break - except: - if raise_exceptions: - raise - self.showtraceback() - warn('Unknown failure executing file: <%s>' % fname) - - def safe_run_module(self, mod_name, where): - """A safe version of runpy.run_module(). - - This version will never throw an exception, but instead print - helpful error messages to the screen. - - `SystemExit` exceptions with status code 0 or None are ignored. - - Parameters - ---------- - mod_name : string - The name of the module to be executed. - where : dict - The globals namespace. 
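-
-        A usage sketch (the module name is illustrative)::
-
-            ns = {}
-            get_ipython().safe_run_module('mypkg.cli', ns)
-            # ns now holds the globals left behind by the module run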
- """ - try: - try: - where.update( - runpy.run_module(str(mod_name), run_name="__main__", - alter_sys=True) - ) - except SystemExit as status: - if status.code: - raise - except: - self.showtraceback() - warn('Unknown failure executing module: <%s>' % mod_name) - - def run_cell( - self, - raw_cell, - store_history=False, - silent=False, - shell_futures=True, - cell_id=None, - ): - """Run a complete IPython cell. - - Parameters - ---------- - raw_cell : str - The code (including IPython code such as %magic functions) to run. - store_history : bool - If True, the raw and translated cell will be stored in IPython's - history. For user code calling back into IPython's machinery, this - should be set to False. - silent : bool - If True, avoid side-effects, such as implicit displayhooks and - and logging. silent=True forces store_history=False. - shell_futures : bool - If True, the code will share future statements with the interactive - shell. It will both be affected by previous __future__ imports, and - any __future__ imports in the code will affect the shell. If False, - __future__ imports are not shared in either direction. - - Returns - ------- - result : :class:`ExecutionResult` - """ - result = None - try: - result = self._run_cell( - raw_cell, store_history, silent, shell_futures, cell_id - ) - finally: - self.events.trigger('post_execute') - if not silent: - self.events.trigger('post_run_cell', result) - return result - - def _run_cell( - self, - raw_cell: str, - store_history: bool, - silent: bool, - shell_futures: bool, - cell_id: str, - ) -> ExecutionResult: - """Internal method to run a complete IPython cell.""" - - # we need to avoid calling self.transform_cell multiple time on the same thing - # so we need to store some results: - preprocessing_exc_tuple = None - try: - transformed_cell = self.transform_cell(raw_cell) - except Exception: - transformed_cell = raw_cell - preprocessing_exc_tuple = sys.exc_info() - - assert transformed_cell is not None - coro = self.run_cell_async( - raw_cell, - store_history=store_history, - silent=silent, - shell_futures=shell_futures, - transformed_cell=transformed_cell, - preprocessing_exc_tuple=preprocessing_exc_tuple, - cell_id=cell_id, - ) - - # run_cell_async is async, but may not actually need an eventloop. - # when this is the case, we want to run it using the pseudo_sync_runner - # so that code can invoke eventloops (for example via the %run , and - # `%paste` magic. - if self.trio_runner: - runner = self.trio_runner - elif self.should_run_async( - raw_cell, - transformed_cell=transformed_cell, - preprocessing_exc_tuple=preprocessing_exc_tuple, - ): - runner = self.loop_runner - else: - runner = _pseudo_sync_runner - - try: - result = runner(coro) - except BaseException as e: - info = ExecutionInfo( - raw_cell, store_history, silent, shell_futures, cell_id - ) - result = ExecutionResult(info) - result.error_in_exec = e - self.showtraceback(running_compiled_code=True) - finally: - return result - - def should_run_async( - self, raw_cell: str, *, transformed_cell=None, preprocessing_exc_tuple=None - ) -> bool: - """Return whether a cell should be run asynchronously via a coroutine runner - - Parameters - ---------- - raw_cell : str - The code to be executed - - Returns - ------- - result: bool - Whether the code needs to be run with a coroutine runner or not - .. 
versionadded:: 7.0 - """ - if not self.autoawait: - return False - if preprocessing_exc_tuple is not None: - return False - assert preprocessing_exc_tuple is None - if transformed_cell is None: - warnings.warn( - "`should_run_async` will not call `transform_cell`" - " automatically in the future. Please pass the result to" - " `transformed_cell` argument and any exception that happen" - " during the" - "transform in `preprocessing_exc_tuple` in" - " IPython 7.17 and above.", - DeprecationWarning, - stacklevel=2, - ) - try: - cell = self.transform_cell(raw_cell) - except Exception: - # any exception during transform will be raised - # prior to execution - return False - else: - cell = transformed_cell - return _should_be_async(cell) - - async def run_cell_async( - self, - raw_cell: str, - store_history=False, - silent=False, - shell_futures=True, - *, - transformed_cell: Optional[str] = None, - preprocessing_exc_tuple: Optional[AnyType] = None, - cell_id=None, - ) -> ExecutionResult: - """Run a complete IPython cell asynchronously. - - Parameters - ---------- - raw_cell : str - The code (including IPython code such as %magic functions) to run. - store_history : bool - If True, the raw and translated cell will be stored in IPython's - history. For user code calling back into IPython's machinery, this - should be set to False. - silent : bool - If True, avoid side-effects, such as implicit displayhooks and - and logging. silent=True forces store_history=False. - shell_futures : bool - If True, the code will share future statements with the interactive - shell. It will both be affected by previous __future__ imports, and - any __future__ imports in the code will affect the shell. If False, - __future__ imports are not shared in either direction. - transformed_cell: str - cell that was passed through transformers - preprocessing_exc_tuple: - trace if the transformation failed. - - Returns - ------- - result : :class:`ExecutionResult` - - .. versionadded:: 7.0 - """ - info = ExecutionInfo(raw_cell, store_history, silent, shell_futures, cell_id) - result = ExecutionResult(info) - - if (not raw_cell) or raw_cell.isspace(): - self.last_execution_succeeded = True - self.last_execution_result = result - return result - - if silent: - store_history = False - - if store_history: - result.execution_count = self.execution_count - - def error_before_exec(value): - if store_history: - self.execution_count += 1 - result.error_before_exec = value - self.last_execution_succeeded = False - self.last_execution_result = result - return result - - self.events.trigger('pre_execute') - if not silent: - self.events.trigger('pre_run_cell', info) - - if transformed_cell is None: - warnings.warn( - "`run_cell_async` will not call `transform_cell`" - " automatically in the future. Please pass the result to" - " `transformed_cell` argument and any exception that happen" - " during the" - "transform in `preprocessing_exc_tuple` in" - " IPython 7.17 and above.", - DeprecationWarning, - stacklevel=2, - ) - # If any of our input transformation (input_transformer_manager or - # prefilter_manager) raises an exception, we store it in this variable - # so that we can display the error after logging the input and storing - # it in the history. 
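-        # (For example, a malformed magic line or a misbehaving custom input
-        # transformer can make transform_cell() raise; capturing the exc_info
-        # tuple here lets the error be displayed only after the raw cell has
-        # been logged and stored in history.)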
- try: - cell = self.transform_cell(raw_cell) - except Exception: - preprocessing_exc_tuple = sys.exc_info() - cell = raw_cell # cell has to exist so it can be stored/logged - else: - preprocessing_exc_tuple = None - else: - if preprocessing_exc_tuple is None: - cell = transformed_cell - else: - cell = raw_cell - - # Do NOT store paste/cpaste magic history - if "get_ipython().run_line_magic(" in cell and "paste" in cell: - store_history = False - - # Store raw and processed history - if store_history: - self.history_manager.store_inputs(self.execution_count, cell, raw_cell) - if not silent: - self.logger.log(cell, raw_cell) - - # Display the exception if input processing failed. - if preprocessing_exc_tuple is not None: - self.showtraceback(preprocessing_exc_tuple) - if store_history: - self.execution_count += 1 - return error_before_exec(preprocessing_exc_tuple[1]) - - # Our own compiler remembers the __future__ environment. If we want to - # run code with a separate __future__ environment, use the default - # compiler - compiler = self.compile if shell_futures else self.compiler_class() - - _run_async = False - - with self.builtin_trap: - cell_name = compiler.cache(cell, self.execution_count, raw_code=raw_cell) - - with self.display_trap: - # Compile to bytecode - try: - code_ast = compiler.ast_parse(cell, filename=cell_name) - except self.custom_exceptions as e: - etype, value, tb = sys.exc_info() - self.CustomTB(etype, value, tb) - return error_before_exec(e) - except IndentationError as e: - self.showindentationerror() - return error_before_exec(e) - except (OverflowError, SyntaxError, ValueError, TypeError, - MemoryError) as e: - self.showsyntaxerror() - return error_before_exec(e) - - # Apply AST transformations - try: - code_ast = self.transform_ast(code_ast) - except InputRejected as e: - self.showtraceback() - return error_before_exec(e) - - # Give the displayhook a reference to our ExecutionResult so it - # can fill in the output value. - self.displayhook.exec_result = result - - # Execute the user code - interactivity = "none" if silent else self.ast_node_interactivity - - - has_raised = await self.run_ast_nodes(code_ast.body, cell_name, - interactivity=interactivity, compiler=compiler, result=result) - - self.last_execution_succeeded = not has_raised - self.last_execution_result = result - - # Reset this so later displayed values do not modify the - # ExecutionResult - self.displayhook.exec_result = None - - if store_history: - # Write output to the database. Does nothing unless - # history output logging is enabled. - self.history_manager.store_output(self.execution_count) - # Each cell is a *single* input, regardless of how many lines it has - self.execution_count += 1 - - return result - - def transform_cell(self, raw_cell): - """Transform an input cell before parsing it. - - Static transformations, implemented in IPython.core.inputtransformer2, - deal with things like ``%magic`` and ``!system`` commands. - These run on all input. - Dynamic transformations, for things like unescaped magics and the exit - autocall, depend on the state of the interpreter. - These only apply to single line inputs. - - These string-based transformations are followed by AST transformations; - see :meth:`transform_ast`. 
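-
-        For example (a sketch, ``ip`` being the running shell; the exact
-        output can vary between versions)::
-
-            In [1]: ip.transform_cell('%time 1+1')
-            Out[1]: "get_ipython().run_line_magic('time', '1+1')\n"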
- """ - # Static input transformations - cell = self.input_transformer_manager.transform_cell(raw_cell) - - if len(cell.splitlines()) == 1: - # Dynamic transformations - only applied for single line commands - with self.builtin_trap: - # use prefilter_lines to handle trailing newlines - # restore trailing newline for ast.parse - cell = self.prefilter_manager.prefilter_lines(cell) + '\n' - - lines = cell.splitlines(keepends=True) - for transform in self.input_transformers_post: - lines = transform(lines) - cell = ''.join(lines) - - return cell - - def transform_ast(self, node): - """Apply the AST transformations from self.ast_transformers - - Parameters - ---------- - node : ast.Node - The root node to be transformed. Typically called with the ast.Module - produced by parsing user input. - - Returns - ------- - An ast.Node corresponding to the node it was called with. Note that it - may also modify the passed object, so don't rely on references to the - original AST. - """ - for transformer in self.ast_transformers: - try: - node = transformer.visit(node) - except InputRejected: - # User-supplied AST transformers can reject an input by raising - # an InputRejected. Short-circuit in this case so that we - # don't unregister the transform. - raise - except Exception: - warn("AST transformer %r threw an error. It will be unregistered." % transformer) - self.ast_transformers.remove(transformer) - - if self.ast_transformers: - ast.fix_missing_locations(node) - return node - - async def run_ast_nodes( - self, - nodelist: ListType[stmt], - cell_name: str, - interactivity="last_expr", - compiler=compile, - result=None, - ): - """Run a sequence of AST nodes. The execution mode depends on the - interactivity parameter. - - Parameters - ---------- - nodelist : list - A sequence of AST nodes to run. - cell_name : str - Will be passed to the compiler as the filename of the cell. Typically - the value returned by ip.compile.cache(cell). - interactivity : str - 'all', 'last', 'last_expr' , 'last_expr_or_assign' or 'none', - specifying which nodes should be run interactively (displaying output - from expressions). 'last_expr' will run the last node interactively - only if it is an expression (i.e. expressions in loops or other blocks - are not displayed) 'last_expr_or_assign' will run the last expression - or the last assignment. Other values for this parameter will raise a - ValueError. - - compiler : callable - A function with the same interface as the built-in compile(), to turn - the AST nodes into code objects. Default is the built-in compile(). - result : ExecutionResult, optional - An object to store exceptions that occur during execution. - - Returns - ------- - True if an exception occurred while running code, False if it finished - running. 
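-
-        For instance (illustrative), with ``interactivity='last_expr'`` a cell
-        such as::
-
-            x = 1
-            x + 1
-
-        compiles ``x = 1`` in 'exec' mode and ``x + 1`` in 'single' mode, so
-        only ``2`` is displayed.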
- """ - if not nodelist: - return - - - if interactivity == 'last_expr_or_assign': - if isinstance(nodelist[-1], _assign_nodes): - asg = nodelist[-1] - if isinstance(asg, ast.Assign) and len(asg.targets) == 1: - target = asg.targets[0] - elif isinstance(asg, _single_targets_nodes): - target = asg.target - else: - target = None - if isinstance(target, ast.Name): - nnode = ast.Expr(ast.Name(target.id, ast.Load())) - ast.fix_missing_locations(nnode) - nodelist.append(nnode) - interactivity = 'last_expr' - - _async = False - if interactivity == 'last_expr': - if isinstance(nodelist[-1], ast.Expr): - interactivity = "last" - else: - interactivity = "none" - - if interactivity == 'none': - to_run_exec, to_run_interactive = nodelist, [] - elif interactivity == 'last': - to_run_exec, to_run_interactive = nodelist[:-1], nodelist[-1:] - elif interactivity == 'all': - to_run_exec, to_run_interactive = [], nodelist - else: - raise ValueError("Interactivity was %r" % interactivity) - - try: - - def compare(code): - is_async = inspect.CO_COROUTINE & code.co_flags == inspect.CO_COROUTINE - return is_async - - # refactor that to just change the mod constructor. - to_run = [] - for node in to_run_exec: - to_run.append((node, "exec")) - - for node in to_run_interactive: - to_run.append((node, "single")) - - for node, mode in to_run: - if mode == "exec": - mod = Module([node], []) - elif mode == "single": - mod = ast.Interactive([node]) # type: ignore - with compiler.extra_flags( - getattr(ast, "PyCF_ALLOW_TOP_LEVEL_AWAIT", 0x0) - if self.autoawait - else 0x0 - ): - code = compiler(mod, cell_name, mode) - asy = compare(code) - if await self.run_code(code, result, async_=asy): - return True - - # Flush softspace - if softspace(sys.stdout, 0): - print() - - except: - # It's possible to have exceptions raised here, typically by - # compilation of odd code (such as a naked 'return' outside a - # function) that did parse but isn't valid. Typically the exception - # is a SyntaxError, but it's safest just to catch anything and show - # the user a traceback. - - # We do only one try/except outside the loop to minimize the impact - # on runtime, and also because if any node in the node list is - # broken, we should stop execution completely. - if result: - result.error_before_exec = sys.exc_info()[1] - self.showtraceback() - return True - - return False - - async def run_code(self, code_obj, result=None, *, async_=False): - """Execute a code object. - - When an exception occurs, self.showtraceback() is called to display a - traceback. - - Parameters - ---------- - code_obj : code object - A compiled code object, to be executed - result : ExecutionResult, optional - An object to store exceptions that occur during execution. - async_ : Bool (Experimental) - Attempt to run top-level asynchronous code in a default loop. - - Returns - ------- - False : successful execution. - True : an error occurred. - """ - # special value to say that anything above is IPython and should be - # hidden. - __tracebackhide__ = "__ipython_bottom__" - # Set our own excepthook in case the user code tries to call it - # directly, so that the IPython crash handler doesn't get triggered - old_excepthook, sys.excepthook = sys.excepthook, self.excepthook - - # we save the original sys.excepthook in the instance, in case config - # code (such as magics) needs access to it. 
- self.sys_excepthook = old_excepthook - outflag = True # happens in more places, so it's easier as default - try: - try: - if async_: - await eval(code_obj, self.user_global_ns, self.user_ns) - else: - exec(code_obj, self.user_global_ns, self.user_ns) - finally: - # Reset our crash handler in place - sys.excepthook = old_excepthook - except SystemExit as e: - if result is not None: - result.error_in_exec = e - self.showtraceback(exception_only=True) - warn("To exit: use 'exit', 'quit', or Ctrl-D.", stacklevel=1) - except bdb.BdbQuit: - etype, value, tb = sys.exc_info() - if result is not None: - result.error_in_exec = value - # the BdbQuit stops here - except self.custom_exceptions: - etype, value, tb = sys.exc_info() - if result is not None: - result.error_in_exec = value - self.CustomTB(etype, value, tb) - except: - if result is not None: - result.error_in_exec = sys.exc_info()[1] - self.showtraceback(running_compiled_code=True) - else: - outflag = False - return outflag - - # For backwards compatibility - runcode = run_code - - def check_complete(self, code: str) -> Tuple[str, str]: - """Return whether a block of code is ready to execute, or should be continued - - Parameters - ---------- - code : string - Python input code, which can be multiline. - - Returns - ------- - status : str - One of 'complete', 'incomplete', or 'invalid' if source is not a - prefix of valid code. - indent : str - When status is 'incomplete', this is some whitespace to insert on - the next line of the prompt. - """ - status, nspaces = self.input_transformer_manager.check_complete(code) - return status, ' ' * (nspaces or 0) - - #------------------------------------------------------------------------- - # Things related to GUI support and pylab - #------------------------------------------------------------------------- - - active_eventloop = None - - def enable_gui(self, gui=None): - raise NotImplementedError('Implement enable_gui in a subclass') - - def enable_matplotlib(self, gui=None): - """Enable interactive matplotlib and inline figure support. - - This takes the following steps: - - 1. select the appropriate eventloop and matplotlib backend - 2. set up matplotlib for interactive use with that backend - 3. configure formatters for inline figure display - 4. enable the selected gui eventloop - - Parameters - ---------- - gui : optional, string - If given, dictates the choice of matplotlib GUI backend to use - (should be one of IPython's supported backends, 'qt', 'osx', 'tk', - 'gtk', 'wx' or 'inline'), otherwise we use the default chosen by - matplotlib (as dictated by the matplotlib build-time options plus the - user's matplotlibrc configuration file). Note that not all backends - make sense in all contexts, for example a terminal ipython can't - display figures inline. - """ - from matplotlib_inline.backend_inline import configure_inline_support - - from IPython.core import pylabtools as pt - gui, backend = pt.find_gui_and_backend(gui, self.pylab_gui_select) - - if gui != 'inline': - # If we have our first gui selection, store it - if self.pylab_gui_select is None: - self.pylab_gui_select = gui - # Otherwise if they are different - elif gui != self.pylab_gui_select: - print('Warning: Cannot change to a different GUI toolkit: %s.' - ' Using %s instead.' 
% (gui, self.pylab_gui_select)) - gui, backend = pt.find_gui_and_backend(self.pylab_gui_select) - - pt.activate_matplotlib(backend) - configure_inline_support(self, backend) - - # Now we must activate the gui pylab wants to use, and fix %run to take - # plot updates into account - self.enable_gui(gui) - self.magics_manager.registry['ExecutionMagics'].default_runner = \ - pt.mpl_runner(self.safe_execfile) - - return gui, backend - - def enable_pylab(self, gui=None, import_all=True, welcome_message=False): - """Activate pylab support at runtime. - - This turns on support for matplotlib, preloads into the interactive - namespace all of numpy and pylab, and configures IPython to correctly - interact with the GUI event loop. The GUI backend to be used can be - optionally selected with the optional ``gui`` argument. - - This method only adds preloading the namespace to InteractiveShell.enable_matplotlib. - - Parameters - ---------- - gui : optional, string - If given, dictates the choice of matplotlib GUI backend to use - (should be one of IPython's supported backends, 'qt', 'osx', 'tk', - 'gtk', 'wx' or 'inline'), otherwise we use the default chosen by - matplotlib (as dictated by the matplotlib build-time options plus the - user's matplotlibrc configuration file). Note that not all backends - make sense in all contexts, for example a terminal ipython can't - display figures inline. - import_all : optional, bool, default: True - Whether to do `from numpy import *` and `from pylab import *` - in addition to module imports. - welcome_message : deprecated - This argument is ignored, no welcome message will be displayed. - """ - from IPython.core.pylabtools import import_pylab - - gui, backend = self.enable_matplotlib(gui) - - # We want to prevent the loading of pylab to pollute the user's - # namespace as shown by the %who* magics, so we execute the activation - # code in an empty namespace, and we update *both* user_ns and - # user_ns_hidden with this information. - ns = {} - import_pylab(ns, import_all) - # warn about clobbered names - ignored = {"__builtins__"} - both = set(ns).intersection(self.user_ns).difference(ignored) - clobbered = [ name for name in both if self.user_ns[name] is not ns[name] ] - self.user_ns.update(ns) - self.user_ns_hidden.update(ns) - return gui, backend, clobbered - - #------------------------------------------------------------------------- - # Utilities - #------------------------------------------------------------------------- - - def var_expand(self, cmd, depth=0, formatter=DollarFormatter()): - """Expand python variables in a string. - - The depth argument indicates how many frames above the caller should - be walked to look for the local namespace where to expand variables. - - The global namespace for expansion is always the user's interactive - namespace. - """ - ns = self.user_ns.copy() - try: - frame = sys._getframe(depth+1) - except ValueError: - # This is thrown if there aren't that many frames on the stack, - # e.g. if a script called run_line_magic() directly. - pass - else: - ns.update(frame.f_locals) - - try: - # We have to use .vformat() here, because 'self' is a valid and common - # name, and expanding **ns for .format() would make it collide with - # the 'self' argument of the method. - cmd = formatter.vformat(cmd, args=[], kwargs=ns) - except Exception: - # if formatter couldn't format, just let it go untransformed - pass - return cmd - - def mktempfile(self, data=None, prefix='ipython_edit_'): - """Make a new tempfile and return its filename. 
- - This makes a call to tempfile.mkstemp (created in a tempfile.mkdtemp), - but it registers the created filename internally so ipython cleans it up - at exit time. - - Optional inputs: - - - data(None): if data is given, it gets written out to the temp file - immediately, and the file is closed again.""" - - dir_path = Path(tempfile.mkdtemp(prefix=prefix)) - self.tempdirs.append(dir_path) - - handle, filename = tempfile.mkstemp(".py", prefix, dir=str(dir_path)) - os.close(handle) # On Windows, there can only be one open handle on a file - - file_path = Path(filename) - self.tempfiles.append(file_path) - - if data: - file_path.write_text(data, encoding="utf-8") - return filename - - def ask_yes_no(self, prompt, default=None, interrupt=None): - if self.quiet: - return True - return ask_yes_no(prompt,default,interrupt) - - def show_usage(self): - """Show a usage message""" - page.page(IPython.core.usage.interactive_usage) - - def extract_input_lines(self, range_str, raw=False): - """Return as a string a set of input history slices. - - Parameters - ---------- - range_str : str - The set of slices is given as a string, like "~5/6-~4/2 4:8 9", - since this function is for use by magic functions which get their - arguments as strings. The number before the / is the session - number: ~n goes n back from the current session. - - If empty string is given, returns history of current session - without the last input. - - raw : bool, optional - By default, the processed input is used. If this is true, the raw - input history is used instead. - - Notes - ----- - Slices can be described with two notations: - - * ``N:M`` -> standard python form, means including items N...(M-1). - * ``N-M`` -> include items N..M (closed endpoint). - """ - lines = self.history_manager.get_range_by_str(range_str, raw=raw) - text = "\n".join(x for _, _, x in lines) - - # Skip the last line, as it's probably the magic that called this - if not range_str: - if "\n" not in text: - text = "" - else: - text = text[: text.rfind("\n")] - - return text - - def find_user_code(self, target, raw=True, py_only=False, skip_encoding_cookie=True, search_ns=False): - """Get a code string from history, file, url, or a string or macro. - - This is mainly used by magic functions. - - Parameters - ---------- - target : str - A string specifying code to retrieve. This will be tried respectively - as: ranges of input history (see %history for syntax), url, - corresponding .py file, filename, or an expression evaluating to a - string or Macro in the user namespace. - - If empty string is given, returns complete history of current - session, without the last line. - - raw : bool - If true (default), retrieve raw history. Has no effect on the other - retrieval mechanisms. - - py_only : bool (default False) - Only try to fetch python code, do not try alternative methods to decode file - if unicode fails. - - Returns - ------- - A string of code. - ValueError is raised if nothing is found, and TypeError if it evaluates - to an object of another type. In each case, .args[0] is a printable - message. 
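-
-        A few illustrative calls (targets hypothetical)::
-
-            ip.find_user_code('1-5')           # input history lines 1..5
-            ip.find_user_code('myscript.py')   # contents of a file
-            ip.find_user_code('mymacro')       # a Macro in the user namespace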
- """ - code = self.extract_input_lines(target, raw=raw) # Grab history - if code: - return code - try: - if target.startswith(('http://', 'https://')): - return openpy.read_py_url(target, skip_encoding_cookie=skip_encoding_cookie) - except UnicodeDecodeError as e: - if not py_only : - # Deferred import - from urllib.request import urlopen - response = urlopen(target) - return response.read().decode('latin1') - raise ValueError(("'%s' seem to be unreadable.") % target) from e - - potential_target = [target] - try : - potential_target.insert(0,get_py_filename(target)) - except IOError: - pass - - for tgt in potential_target : - if os.path.isfile(tgt): # Read file - try : - return openpy.read_py_file(tgt, skip_encoding_cookie=skip_encoding_cookie) - except UnicodeDecodeError as e: - if not py_only : - with io_open(tgt,'r', encoding='latin1') as f : - return f.read() - raise ValueError(("'%s' seem to be unreadable.") % target) from e - elif os.path.isdir(os.path.expanduser(tgt)): - raise ValueError("'%s' is a directory, not a regular file." % target) - - if search_ns: - # Inspect namespace to load object source - object_info = self.object_inspect(target, detail_level=1) - if object_info['found'] and object_info['source']: - return object_info['source'] - - try: # User namespace - codeobj = eval(target, self.user_ns) - except Exception as e: - raise ValueError(("'%s' was not found in history, as a file, url, " - "nor in the user namespace.") % target) from e - - if isinstance(codeobj, str): - return codeobj - elif isinstance(codeobj, Macro): - return codeobj.value - - raise TypeError("%s is neither a string nor a macro." % target, - codeobj) - - def _atexit_once(self): - """ - At exist operation that need to be called at most once. - Second call to this function per instance will do nothing. - """ - - if not getattr(self, "_atexit_once_called", False): - self._atexit_once_called = True - # Clear all user namespaces to release all references cleanly. - self.reset(new_session=False) - # Close the history session (this stores the end time and line count) - # this must be *before* the tempfile cleanup, in case of temporary - # history db - self.history_manager.end_session() - self.history_manager = None - - #------------------------------------------------------------------------- - # Things related to IPython exiting - #------------------------------------------------------------------------- - def atexit_operations(self): - """This will be executed at the time of exit. - - Cleanup operations and saving of persistent data that is done - unconditionally by IPython should be performed here. 
- - For things that may depend on startup flags or platform specifics (such - as having readline or not), register a separate atexit function in the - code that has the appropriate information, rather than trying to - clutter - """ - self._atexit_once() - - # Cleanup all tempfiles and folders left around - for tfile in self.tempfiles: - try: - tfile.unlink() - self.tempfiles.remove(tfile) - except FileNotFoundError: - pass - del self.tempfiles - for tdir in self.tempdirs: - try: - tdir.rmdir() - self.tempdirs.remove(tdir) - except FileNotFoundError: - pass - del self.tempdirs - - # Restore user's cursor - if hasattr(self, "editing_mode") and self.editing_mode == "vi": - sys.stdout.write("\x1b[0 q") - sys.stdout.flush() - - def cleanup(self): - self.restore_sys_module_state() - - - # Overridden in terminal subclass to change prompts - def switch_doctest_mode(self, mode): - pass - - -class InteractiveShellABC(metaclass=abc.ABCMeta): - """An abstract base class for InteractiveShell.""" - -InteractiveShellABC.register(InteractiveShell) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/certifi/core.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/certifi/core.py deleted file mode 100644 index de028981b97e1fcc8ef4ab2c817cc8731b9c8738..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/certifi/core.py +++ /dev/null @@ -1,108 +0,0 @@ -""" -certifi.py -~~~~~~~~~~ - -This module returns the installation location of cacert.pem or its contents. -""" -import sys - - -if sys.version_info >= (3, 11): - - from importlib.resources import as_file, files - - _CACERT_CTX = None - _CACERT_PATH = None - - def where() -> str: - # This is slightly terrible, but we want to delay extracting the file - # in cases where we're inside of a zipimport situation until someone - # actually calls where(), but we don't want to re-extract the file - # on every call of where(), so we'll do it once then store it in a - # global variable. - global _CACERT_CTX - global _CACERT_PATH - if _CACERT_PATH is None: - # This is slightly janky, the importlib.resources API wants you to - # manage the cleanup of this file, so it doesn't actually return a - # path, it returns a context manager that will give you the path - # when you enter it and will do any cleanup when you leave it. In - # the common case of not needing a temporary file, it will just - # return the file system location and the __exit__() is a no-op. - # - # We also have to hold onto the actual context manager, because - # it will do the cleanup whenever it gets garbage collected, so - # we will also store that at the global level as well. - _CACERT_CTX = as_file(files("certifi").joinpath("cacert.pem")) - _CACERT_PATH = str(_CACERT_CTX.__enter__()) - - return _CACERT_PATH - - def contents() -> str: - return files("certifi").joinpath("cacert.pem").read_text(encoding="ascii") - -elif sys.version_info >= (3, 7): - - from importlib.resources import path as get_path, read_text - - _CACERT_CTX = None - _CACERT_PATH = None - - def where() -> str: - # This is slightly terrible, but we want to delay extracting the - # file in cases where we're inside of a zipimport situation until - # someone actually calls where(), but we don't want to re-extract - # the file on every call of where(), so we'll do it once then store - # it in a global variable. 
- global _CACERT_CTX - global _CACERT_PATH - if _CACERT_PATH is None: - # This is slightly janky, the importlib.resources API wants you - # to manage the cleanup of this file, so it doesn't actually - # return a path, it returns a context manager that will give - # you the path when you enter it and will do any cleanup when - # you leave it. In the common case of not needing a temporary - # file, it will just return the file system location and the - # __exit__() is a no-op. - # - # We also have to hold onto the actual context manager, because - # it will do the cleanup whenever it gets garbage collected, so - # we will also store that at the global level as well. - _CACERT_CTX = get_path("certifi", "cacert.pem") - _CACERT_PATH = str(_CACERT_CTX.__enter__()) - - return _CACERT_PATH - - def contents() -> str: - return read_text("certifi", "cacert.pem", encoding="ascii") - -else: - import os - import types - from typing import Union - - Package = Union[types.ModuleType, str] - Resource = Union[str, "os.PathLike"] - - # This fallback will work for Python versions prior to 3.7 that lack the - # importlib.resources module but relies on the existing `where` function - # so won't address issues with environments like PyOxidizer that don't set - # __file__ on modules. - def read_text( - package: Package, - resource: Resource, - encoding: str = 'utf-8', - errors: str = 'strict' - ) -> str: - with open(where(), encoding=encoding) as data: - return data.read() - - # If we don't have importlib.resources, then we will just do the old logic - # of assuming we're on the filesystem and munge the path directly. - def where() -> str: - f = os.path.dirname(__file__) - - return os.path.join(f, "cacert.pem") - - def contents() -> str: - return read_text("certifi", "cacert.pem", encoding="ascii") diff --git a/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/activations/__init__.py b/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/activations/__init__.py deleted file mode 100644 index 813421a743ffc33b8eb53ebf62dd4a03d831b654..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/activations/__init__.py +++ /dev/null @@ -1,137 +0,0 @@ -from geffnet import config -from geffnet.activations.activations_me import * -from geffnet.activations.activations_jit import * -from geffnet.activations.activations import * -import torch - -_has_silu = 'silu' in dir(torch.nn.functional) - -_ACT_FN_DEFAULT = dict( - silu=F.silu if _has_silu else swish, - swish=F.silu if _has_silu else swish, - mish=mish, - relu=F.relu, - relu6=F.relu6, - sigmoid=sigmoid, - tanh=tanh, - hard_sigmoid=hard_sigmoid, - hard_swish=hard_swish, -) - -_ACT_FN_JIT = dict( - silu=F.silu if _has_silu else swish_jit, - swish=F.silu if _has_silu else swish_jit, - mish=mish_jit, -) - -_ACT_FN_ME = dict( - silu=F.silu if _has_silu else swish_me, - swish=F.silu if _has_silu else swish_me, - mish=mish_me, - hard_swish=hard_swish_me, - hard_sigmoid_jit=hard_sigmoid_me, -) - -_ACT_LAYER_DEFAULT = dict( - silu=nn.SiLU if _has_silu else Swish, - swish=nn.SiLU if _has_silu else Swish, - mish=Mish, - relu=nn.ReLU, - relu6=nn.ReLU6, - sigmoid=Sigmoid, - tanh=Tanh, - hard_sigmoid=HardSigmoid, - hard_swish=HardSwish, -) - -_ACT_LAYER_JIT = dict( - silu=nn.SiLU if _has_silu else SwishJit, - swish=nn.SiLU if _has_silu else SwishJit, - mish=MishJit, -) - -_ACT_LAYER_ME = dict( - silu=nn.SiLU if 
_has_silu else SwishMe, - swish=nn.SiLU if _has_silu else SwishMe, - mish=MishMe, - hard_swish=HardSwishMe, - hard_sigmoid=HardSigmoidMe -) - -_OVERRIDE_FN = dict() -_OVERRIDE_LAYER = dict() - - -def add_override_act_fn(name, fn): - global _OVERRIDE_FN - _OVERRIDE_FN[name] = fn - - -def update_override_act_fn(overrides): - assert isinstance(overrides, dict) - global _OVERRIDE_FN - _OVERRIDE_FN.update(overrides) - - -def clear_override_act_fn(): - global _OVERRIDE_FN - _OVERRIDE_FN = dict() - - -def add_override_act_layer(name, fn): - _OVERRIDE_LAYER[name] = fn - - -def update_override_act_layer(overrides): - assert isinstance(overrides, dict) - global _OVERRIDE_LAYER - _OVERRIDE_LAYER.update(overrides) - - -def clear_override_act_layer(): - global _OVERRIDE_LAYER - _OVERRIDE_LAYER = dict() - - -def get_act_fn(name='relu'): - """ Activation Function Factory - Fetching activation fns by name with this function allows export or torch script friendly - functions to be returned dynamically based on current config. - """ - if name in _OVERRIDE_FN: - return _OVERRIDE_FN[name] - use_me = not (config.is_exportable() or config.is_scriptable() or config.is_no_jit()) - if use_me and name in _ACT_FN_ME: - # If not exporting or scripting the model, first look for a memory optimized version - # activation with custom autograd, then fallback to jit scripted, then a Python or Torch builtin - return _ACT_FN_ME[name] - if config.is_exportable() and name in ('silu', 'swish'): - # FIXME PyTorch SiLU doesn't ONNX export, this is a temp hack - return swish - use_jit = not (config.is_exportable() or config.is_no_jit()) - # NOTE: export tracing should work with jit scripted components, but I keep running into issues - if use_jit and name in _ACT_FN_JIT: # jit scripted models should be okay for export/scripting - return _ACT_FN_JIT[name] - return _ACT_FN_DEFAULT[name] - - -def get_act_layer(name='relu'): - """ Activation Layer Factory - Fetching activation layers by name with this function allows export or torch script friendly - functions to be returned dynamically based on current config. - """ - if name in _OVERRIDE_LAYER: - return _OVERRIDE_LAYER[name] - use_me = not (config.is_exportable() or config.is_scriptable() or config.is_no_jit()) - if use_me and name in _ACT_LAYER_ME: - return _ACT_LAYER_ME[name] - if config.is_exportable() and name in ('silu', 'swish'): - # FIXME PyTorch SiLU doesn't ONNX export, this is a temp hack - return Swish - use_jit = not (config.is_exportable() or config.is_no_jit()) - # NOTE: export tracing should work with jit scripted components, but I keep running into issues - if use_jit and name in _ACT_FN_JIT: # jit scripted models should be okay for export/scripting - return _ACT_LAYER_JIT[name] - return _ACT_LAYER_DEFAULT[name] - - diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/roi_heads/fast_rcnn.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/roi_heads/fast_rcnn.py deleted file mode 100644 index a81c58ea863f32a24ed7d5ad3b2e4e4416c6a0ab..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/roi_heads/fast_rcnn.py +++ /dev/null @@ -1,569 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
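As an aside before the detection-head code, here is a minimal usage sketch for the activation factory defined in the geffnet `activations/__init__.py` above. It is not part of the original file; the import path is shortened to the upstream `geffnet` package (the vendored copy lives under `annotator.normalbae...`), and `leaky02` is a made-up override name.

```python
import torch
import torch.nn.functional as F

from geffnet.activations import add_override_act_fn, get_act_fn, get_act_layer

# Function-form lookup: the factory prefers the memory-efficient ("me")
# variant, falls back to the jit-scripted one, then to the default table,
# depending on the exportable/scriptable/no-jit config flags.
act_fn = get_act_fn('hard_swish')
x = torch.randn(2, 8)
print(act_fn(x).shape)  # torch.Size([2, 8])

# Layer-form lookup returns a class, ready to instantiate inside a model.
act_layer = get_act_layer('relu6')
layer = act_layer()

# User overrides are consulted before any built-in table.
add_override_act_fn('leaky02', lambda t: F.leaky_relu(t, 0.2))
assert get_act_fn('leaky02')(torch.tensor([-1.0])).item() < 0
```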
-import logging -from typing import Callable, Dict, List, Optional, Tuple, Union -import torch -from torch import nn -from torch.nn import functional as F - -from annotator.oneformer.detectron2.config import configurable -from annotator.oneformer.detectron2.data.detection_utils import get_fed_loss_cls_weights -from annotator.oneformer.detectron2.layers import ShapeSpec, batched_nms, cat, cross_entropy, nonzero_tuple -from annotator.oneformer.detectron2.modeling.box_regression import Box2BoxTransform, _dense_box_regression_loss -from annotator.oneformer.detectron2.structures import Boxes, Instances -from annotator.oneformer.detectron2.utils.events import get_event_storage - -__all__ = ["fast_rcnn_inference", "FastRCNNOutputLayers"] - - -logger = logging.getLogger(__name__) - -""" -Shape shorthand in this module: - - N: number of images in the minibatch - R: number of ROIs, combined over all images, in the minibatch - Ri: number of ROIs in image i - K: number of foreground classes. E.g.,there are 80 foreground classes in COCO. - -Naming convention: - - deltas: refers to the 4-d (dx, dy, dw, dh) deltas that parameterize the box2box - transform (see :class:`box_regression.Box2BoxTransform`). - - pred_class_logits: predicted class scores in [-inf, +inf]; use - softmax(pred_class_logits) to estimate P(class). - - gt_classes: ground-truth classification labels in [0, K], where [0, K) represent - foreground object classes and K represents the background class. - - pred_proposal_deltas: predicted box2box transform deltas for transforming proposals - to detection box predictions. - - gt_proposal_deltas: ground-truth box2box transform deltas -""" - - -def fast_rcnn_inference( - boxes: List[torch.Tensor], - scores: List[torch.Tensor], - image_shapes: List[Tuple[int, int]], - score_thresh: float, - nms_thresh: float, - topk_per_image: int, -): - """ - Call `fast_rcnn_inference_single_image` for all images. - - Args: - boxes (list[Tensor]): A list of Tensors of predicted class-specific or class-agnostic - boxes for each image. Element i has shape (Ri, K * 4) if doing - class-specific regression, or (Ri, 4) if doing class-agnostic - regression, where Ri is the number of predicted objects for image i. - This is compatible with the output of :meth:`FastRCNNOutputLayers.predict_boxes`. - scores (list[Tensor]): A list of Tensors of predicted class scores for each image. - Element i has shape (Ri, K + 1), where Ri is the number of predicted objects - for image i. Compatible with the output of :meth:`FastRCNNOutputLayers.predict_probs`. - image_shapes (list[tuple]): A list of (width, height) tuples for each image in the batch. - score_thresh (float): Only return detections with a confidence score exceeding this - threshold. - nms_thresh (float): The threshold to use for box non-maximum suppression. Value in [0, 1]. - topk_per_image (int): The number of top scoring detections to return. Set < 0 to return - all detections. - - Returns: - instances: (list[Instances]): A list of N instances, one for each image in the batch, - that stores the topk most confidence detections. - kept_indices: (list[Tensor]): A list of 1D tensor of length of N, each element indicates - the corresponding boxes/scores index in [0, Ri) from the input, for image i. 
- """ - result_per_image = [ - fast_rcnn_inference_single_image( - boxes_per_image, scores_per_image, image_shape, score_thresh, nms_thresh, topk_per_image - ) - for scores_per_image, boxes_per_image, image_shape in zip(scores, boxes, image_shapes) - ] - return [x[0] for x in result_per_image], [x[1] for x in result_per_image] - - -def _log_classification_stats(pred_logits, gt_classes, prefix="fast_rcnn"): - """ - Log the classification metrics to EventStorage. - - Args: - pred_logits: Rx(K+1) logits. The last column is for background class. - gt_classes: R labels - """ - num_instances = gt_classes.numel() - if num_instances == 0: - return - pred_classes = pred_logits.argmax(dim=1) - bg_class_ind = pred_logits.shape[1] - 1 - - fg_inds = (gt_classes >= 0) & (gt_classes < bg_class_ind) - num_fg = fg_inds.nonzero().numel() - fg_gt_classes = gt_classes[fg_inds] - fg_pred_classes = pred_classes[fg_inds] - - num_false_negative = (fg_pred_classes == bg_class_ind).nonzero().numel() - num_accurate = (pred_classes == gt_classes).nonzero().numel() - fg_num_accurate = (fg_pred_classes == fg_gt_classes).nonzero().numel() - - storage = get_event_storage() - storage.put_scalar(f"{prefix}/cls_accuracy", num_accurate / num_instances) - if num_fg > 0: - storage.put_scalar(f"{prefix}/fg_cls_accuracy", fg_num_accurate / num_fg) - storage.put_scalar(f"{prefix}/false_negative", num_false_negative / num_fg) - - -def fast_rcnn_inference_single_image( - boxes, - scores, - image_shape: Tuple[int, int], - score_thresh: float, - nms_thresh: float, - topk_per_image: int, -): - """ - Single-image inference. Return bounding-box detection results by thresholding - on scores and applying non-maximum suppression (NMS). - - Args: - Same as `fast_rcnn_inference`, but with boxes, scores, and image shapes - per image. - - Returns: - Same as `fast_rcnn_inference`, but for only one image. - """ - valid_mask = torch.isfinite(boxes).all(dim=1) & torch.isfinite(scores).all(dim=1) - if not valid_mask.all(): - boxes = boxes[valid_mask] - scores = scores[valid_mask] - - scores = scores[:, :-1] - num_bbox_reg_classes = boxes.shape[1] // 4 - # Convert to Boxes to use the `clip` function ... - boxes = Boxes(boxes.reshape(-1, 4)) - boxes.clip(image_shape) - boxes = boxes.tensor.view(-1, num_bbox_reg_classes, 4) # R x C x 4 - - # 1. Filter results based on detection scores. It can make NMS more efficient - # by filtering out low-confidence detections. - filter_mask = scores > score_thresh # R x K - # R' x 2. First column contains indices of the R predictions; - # Second column contains indices of classes. - filter_inds = filter_mask.nonzero() - if num_bbox_reg_classes == 1: - boxes = boxes[filter_inds[:, 0], 0] - else: - boxes = boxes[filter_mask] - scores = scores[filter_mask] - - # 2. Apply NMS for each class independently. - keep = batched_nms(boxes, scores, filter_inds[:, 1], nms_thresh) - if topk_per_image >= 0: - keep = keep[:topk_per_image] - boxes, scores, filter_inds = boxes[keep], scores[keep], filter_inds[keep] - - result = Instances(image_shape) - result.pred_boxes = Boxes(boxes) - result.scores = scores - result.pred_classes = filter_inds[:, 1] - return result, filter_inds[:, 0] - - -class FastRCNNOutputLayers(nn.Module): - """ - Two linear layers for predicting Fast R-CNN outputs: - - 1. proposal-to-detection box regression deltas - 2. 
classification scores - """ - - @configurable - def __init__( - self, - input_shape: ShapeSpec, - *, - box2box_transform, - num_classes: int, - test_score_thresh: float = 0.0, - test_nms_thresh: float = 0.5, - test_topk_per_image: int = 100, - cls_agnostic_bbox_reg: bool = False, - smooth_l1_beta: float = 0.0, - box_reg_loss_type: str = "smooth_l1", - loss_weight: Union[float, Dict[str, float]] = 1.0, - use_fed_loss: bool = False, - use_sigmoid_ce: bool = False, - get_fed_loss_cls_weights: Optional[Callable] = None, - fed_loss_num_classes: int = 50, - ): - """ - NOTE: this interface is experimental. - - Args: - input_shape (ShapeSpec): shape of the input feature to this module - box2box_transform (Box2BoxTransform or Box2BoxTransformRotated): - num_classes (int): number of foreground classes - test_score_thresh (float): threshold to filter predictions results. - test_nms_thresh (float): NMS threshold for prediction results. - test_topk_per_image (int): number of top predictions to produce per image. - cls_agnostic_bbox_reg (bool): whether to use class agnostic for bbox regression - smooth_l1_beta (float): transition point from L1 to L2 loss. Only used if - `box_reg_loss_type` is "smooth_l1" - box_reg_loss_type (str): Box regression loss type. One of: "smooth_l1", "giou", - "diou", "ciou" - loss_weight (float|dict): weights to use for losses. Can be single float for weighting - all losses, or a dict of individual weightings. Valid dict keys are: - * "loss_cls": applied to classification loss - * "loss_box_reg": applied to box regression loss - use_fed_loss (bool): whether to use federated loss which samples additional negative - classes to calculate the loss - use_sigmoid_ce (bool): whether to calculate the loss using weighted average of binary - cross entropy with logits. This could be used together with federated loss - get_fed_loss_cls_weights (Callable): a callable which takes dataset name and frequency - weight power, and returns the probabilities to sample negative classes for - federated loss. 
The implementation can be found in - detectron2/data/detection_utils.py - fed_loss_num_classes (int): number of federated classes to keep in total - """ - super().__init__() - if isinstance(input_shape, int): # some backward compatibility - input_shape = ShapeSpec(channels=input_shape) - self.num_classes = num_classes - input_size = input_shape.channels * (input_shape.width or 1) * (input_shape.height or 1) - # prediction layer for num_classes foreground classes and one background class (hence + 1) - self.cls_score = nn.Linear(input_size, num_classes + 1) - num_bbox_reg_classes = 1 if cls_agnostic_bbox_reg else num_classes - box_dim = len(box2box_transform.weights) - self.bbox_pred = nn.Linear(input_size, num_bbox_reg_classes * box_dim) - - nn.init.normal_(self.cls_score.weight, std=0.01) - nn.init.normal_(self.bbox_pred.weight, std=0.001) - for l in [self.cls_score, self.bbox_pred]: - nn.init.constant_(l.bias, 0) - - self.box2box_transform = box2box_transform - self.smooth_l1_beta = smooth_l1_beta - self.test_score_thresh = test_score_thresh - self.test_nms_thresh = test_nms_thresh - self.test_topk_per_image = test_topk_per_image - self.box_reg_loss_type = box_reg_loss_type - if isinstance(loss_weight, float): - loss_weight = {"loss_cls": loss_weight, "loss_box_reg": loss_weight} - self.loss_weight = loss_weight - self.use_fed_loss = use_fed_loss - self.use_sigmoid_ce = use_sigmoid_ce - self.fed_loss_num_classes = fed_loss_num_classes - - if self.use_fed_loss: - assert self.use_sigmoid_ce, "Please use sigmoid cross entropy loss with federated loss" - fed_loss_cls_weights = get_fed_loss_cls_weights() - assert ( - len(fed_loss_cls_weights) == self.num_classes - ), "Please check the provided fed_loss_cls_weights. Their size should match num_classes" - self.register_buffer("fed_loss_cls_weights", fed_loss_cls_weights) - - @classmethod - def from_config(cls, cfg, input_shape): - return { - "input_shape": input_shape, - "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS), - # fmt: off - "num_classes" : cfg.MODEL.ROI_HEADS.NUM_CLASSES, - "cls_agnostic_bbox_reg" : cfg.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG, - "smooth_l1_beta" : cfg.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA, - "test_score_thresh" : cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST, - "test_nms_thresh" : cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST, - "test_topk_per_image" : cfg.TEST.DETECTIONS_PER_IMAGE, - "box_reg_loss_type" : cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_TYPE, - "loss_weight" : {"loss_box_reg": cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_WEIGHT}, # noqa - "use_fed_loss" : cfg.MODEL.ROI_BOX_HEAD.USE_FED_LOSS, - "use_sigmoid_ce" : cfg.MODEL.ROI_BOX_HEAD.USE_SIGMOID_CE, - "get_fed_loss_cls_weights" : lambda: get_fed_loss_cls_weights(dataset_names=cfg.DATASETS.TRAIN, freq_weight_power=cfg.MODEL.ROI_BOX_HEAD.FED_LOSS_FREQ_WEIGHT_POWER), # noqa - "fed_loss_num_classes" : cfg.MODEL.ROI_BOX_HEAD.FED_LOSS_NUM_CLASSES, - # fmt: on - } - - def forward(self, x): - """ - Args: - x: per-region features of shape (N, ...) for N bounding boxes to predict. - - Returns: - (Tensor, Tensor): - First tensor: shape (N,K+1), scores for each of the N box. Each row contains the - scores for K object categories and 1 background class. - - Second tensor: bounding box regression deltas for each box. Shape is shape (N,Kx4), - or (N,4) for class-agnostic regression. 
- """ - if x.dim() > 2: - x = torch.flatten(x, start_dim=1) - scores = self.cls_score(x) - proposal_deltas = self.bbox_pred(x) - return scores, proposal_deltas - - def losses(self, predictions, proposals): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were used - to compute predictions. The fields ``proposal_boxes``, ``gt_boxes``, - ``gt_classes`` are expected. - - Returns: - Dict[str, Tensor]: dict of losses - """ - scores, proposal_deltas = predictions - - # parse classification outputs - gt_classes = ( - cat([p.gt_classes for p in proposals], dim=0) if len(proposals) else torch.empty(0) - ) - _log_classification_stats(scores, gt_classes) - - # parse box regression outputs - if len(proposals): - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) # Nx4 - assert not proposal_boxes.requires_grad, "Proposals should not require gradients!" - # If "gt_boxes" does not exist, the proposals must be all negative and - # should not be included in regression loss computation. - # Here we just use proposal_boxes as an arbitrary placeholder because its - # value won't be used in self.box_reg_loss(). - gt_boxes = cat( - [(p.gt_boxes if p.has("gt_boxes") else p.proposal_boxes).tensor for p in proposals], - dim=0, - ) - else: - proposal_boxes = gt_boxes = torch.empty((0, 4), device=proposal_deltas.device) - - if self.use_sigmoid_ce: - loss_cls = self.sigmoid_cross_entropy_loss(scores, gt_classes) - else: - loss_cls = cross_entropy(scores, gt_classes, reduction="mean") - - losses = { - "loss_cls": loss_cls, - "loss_box_reg": self.box_reg_loss( - proposal_boxes, gt_boxes, proposal_deltas, gt_classes - ), - } - return {k: v * self.loss_weight.get(k, 1.0) for k, v in losses.items()} - - # Implementation from https://github.com/xingyizhou/CenterNet2/blob/master/projects/CenterNet2/centernet/modeling/roi_heads/fed_loss.py # noqa - # with slight modifications - def get_fed_loss_classes(self, gt_classes, num_fed_loss_classes, num_classes, weight): - """ - Args: - gt_classes: a long tensor of shape R that contains the gt class label of each proposal. - num_fed_loss_classes: minimum number of classes to keep when calculating federated loss. - Will sample negative classes if number of unique gt_classes is smaller than this value. - num_classes: number of foreground classes - weight: probabilities used to sample negative classes - - Returns: - Tensor: - classes to keep when calculating the federated loss, including both unique gt - classes and sampled negative classes. - """ - unique_gt_classes = torch.unique(gt_classes) - prob = unique_gt_classes.new_ones(num_classes + 1).float() - prob[-1] = 0 - if len(unique_gt_classes) < num_fed_loss_classes: - prob[:num_classes] = weight.float().clone() - prob[unique_gt_classes] = 0 - sampled_negative_classes = torch.multinomial( - prob, num_fed_loss_classes - len(unique_gt_classes), replacement=False - ) - fed_loss_classes = torch.cat([unique_gt_classes, sampled_negative_classes]) - else: - fed_loss_classes = unique_gt_classes - return fed_loss_classes - - # Implementation from https://github.com/xingyizhou/CenterNet2/blob/master/projects/CenterNet2/centernet/modeling/roi_heads/custom_fast_rcnn.py#L113 # noqa - # with slight modifications - def sigmoid_cross_entropy_loss(self, pred_class_logits, gt_classes): - """ - Args: - pred_class_logits: shape (N, K+1), scores for each of the N box. 
Each row contains the - scores for K object categories and 1 background class - gt_classes: a long tensor of shape R that contains the gt class label of each proposal. - """ - if pred_class_logits.numel() == 0: - return pred_class_logits.new_zeros([1])[0] - - N = pred_class_logits.shape[0] - K = pred_class_logits.shape[1] - 1 - - target = pred_class_logits.new_zeros(N, K + 1) - target[range(len(gt_classes)), gt_classes] = 1 - target = target[:, :K] - - cls_loss = F.binary_cross_entropy_with_logits( - pred_class_logits[:, :-1], target, reduction="none" - ) - - if self.use_fed_loss: - fed_loss_classes = self.get_fed_loss_classes( - gt_classes, - num_fed_loss_classes=self.fed_loss_num_classes, - num_classes=K, - weight=self.fed_loss_cls_weights, - ) - fed_loss_classes_mask = fed_loss_classes.new_zeros(K + 1) - fed_loss_classes_mask[fed_loss_classes] = 1 - fed_loss_classes_mask = fed_loss_classes_mask[:K] - weight = fed_loss_classes_mask.view(1, K).expand(N, K).float() - else: - weight = 1 - - loss = torch.sum(cls_loss * weight) / N - return loss - - def box_reg_loss(self, proposal_boxes, gt_boxes, pred_deltas, gt_classes): - """ - Args: - proposal_boxes/gt_boxes are tensors with the same shape (R, 4 or 5). - pred_deltas has shape (R, 4 or 5), or (R, num_classes * (4 or 5)). - gt_classes is a long tensor of shape R, the gt class label of each proposal. - R shall be the number of proposals. - """ - box_dim = proposal_boxes.shape[1] # 4 or 5 - # Regression loss is only computed for foreground proposals (those matched to a GT) - fg_inds = nonzero_tuple((gt_classes >= 0) & (gt_classes < self.num_classes))[0] - if pred_deltas.shape[1] == box_dim: # cls-agnostic regression - fg_pred_deltas = pred_deltas[fg_inds] - else: - fg_pred_deltas = pred_deltas.view(-1, self.num_classes, box_dim)[ - fg_inds, gt_classes[fg_inds] - ] - - loss_box_reg = _dense_box_regression_loss( - [proposal_boxes[fg_inds]], - self.box2box_transform, - [fg_pred_deltas.unsqueeze(0)], - [gt_boxes[fg_inds]], - ..., - self.box_reg_loss_type, - self.smooth_l1_beta, - ) - - # The reg loss is normalized using the total number of regions (R), not the number - # of foreground regions even though the box regression loss is only defined on - # foreground regions. Why? Because doing so gives equal training influence to - # each foreground example. To see how, consider two different minibatches: - # (1) Contains a single foreground region - # (2) Contains 100 foreground regions - # If we normalize by the number of foreground regions, the single example in - # minibatch (1) will be given 100 times as much influence as each foreground - # example in minibatch (2). Normalizing by the total number of regions, R, - # means that the single example in minibatch (1) and each of the 100 examples - # in minibatch (2) are given equal influence. - return loss_box_reg / max(gt_classes.numel(), 1.0) # return 0 if empty - - def inference(self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances]): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were - used to compute predictions. The ``proposal_boxes`` field is expected. - - Returns: - list[Instances]: same as `fast_rcnn_inference`. - list[Tensor]: same as `fast_rcnn_inference`. 
- """ - boxes = self.predict_boxes(predictions, proposals) - scores = self.predict_probs(predictions, proposals) - image_shapes = [x.image_size for x in proposals] - return fast_rcnn_inference( - boxes, - scores, - image_shapes, - self.test_score_thresh, - self.test_nms_thresh, - self.test_topk_per_image, - ) - - def predict_boxes_for_gt_classes(self, predictions, proposals): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were used - to compute predictions. The fields ``proposal_boxes``, ``gt_classes`` are expected. - - Returns: - list[Tensor]: - A list of Tensors of predicted boxes for GT classes in case of - class-specific box head. Element i of the list has shape (Ri, B), where Ri is - the number of proposals for image i and B is the box dimension (4 or 5) - """ - if not len(proposals): - return [] - scores, proposal_deltas = predictions - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) - N, B = proposal_boxes.shape - predict_boxes = self.box2box_transform.apply_deltas( - proposal_deltas, proposal_boxes - ) # Nx(KxB) - - K = predict_boxes.shape[1] // B - if K > 1: - gt_classes = torch.cat([p.gt_classes for p in proposals], dim=0) - # Some proposals are ignored or have a background class. Their gt_classes - # cannot be used as index. - gt_classes = gt_classes.clamp_(0, K - 1) - - predict_boxes = predict_boxes.view(N, K, B)[ - torch.arange(N, dtype=torch.long, device=predict_boxes.device), gt_classes - ] - num_prop_per_image = [len(p) for p in proposals] - return predict_boxes.split(num_prop_per_image) - - def predict_boxes( - self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances] - ): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were - used to compute predictions. The ``proposal_boxes`` field is expected. - - Returns: - list[Tensor]: - A list of Tensors of predicted class-specific or class-agnostic boxes - for each image. Element i has shape (Ri, K * B) or (Ri, B), where Ri is - the number of proposals for image i and B is the box dimension (4 or 5) - """ - if not len(proposals): - return [] - _, proposal_deltas = predictions - num_prop_per_image = [len(p) for p in proposals] - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) - predict_boxes = self.box2box_transform.apply_deltas( - proposal_deltas, - proposal_boxes, - ) # Nx(KxB) - return predict_boxes.split(num_prop_per_image) - - def predict_probs( - self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances] - ): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were - used to compute predictions. - - Returns: - list[Tensor]: - A list of Tensors of predicted class probabilities for each image. - Element i has shape (Ri, K + 1), where Ri is the number of proposals for image i. 
- """ - scores, _ = predictions - num_inst_per_image = [len(p) for p in proposals] - if self.use_sigmoid_ce: - probs = scores.sigmoid() - else: - probs = F.softmax(scores, dim=-1) - return probs.split(num_inst_per_image, dim=0) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/deeplab/loss.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/deeplab/loss.py deleted file mode 100644 index 3a43087b7c1a2b4d2b249fad117724dbd0f14fdd..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/deeplab/loss.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import torch -import torch.nn as nn - - -class DeepLabCE(nn.Module): - """ - Hard pixel mining with cross entropy loss, for semantic segmentation. - This is used in TensorFlow DeepLab frameworks. - Paper: DeeperLab: Single-Shot Image Parser - Reference: https://github.com/tensorflow/models/blob/bd488858d610e44df69da6f89277e9de8a03722c/research/deeplab/utils/train_utils.py#L33 # noqa - Arguments: - ignore_label: Integer, label to ignore. - top_k_percent_pixels: Float, the value lies in [0.0, 1.0]. When its - value < 1.0, only compute the loss for the top k percent pixels - (e.g., the top 20% pixels). This is useful for hard pixel mining. - weight: Tensor, a manual rescaling weight given to each class. - """ - - def __init__(self, ignore_label=-1, top_k_percent_pixels=1.0, weight=None): - super(DeepLabCE, self).__init__() - self.top_k_percent_pixels = top_k_percent_pixels - self.ignore_label = ignore_label - self.criterion = nn.CrossEntropyLoss( - weight=weight, ignore_index=ignore_label, reduction="none" - ) - - def forward(self, logits, labels, weights=None): - if weights is None: - pixel_losses = self.criterion(logits, labels).contiguous().view(-1) - else: - # Apply per-pixel loss weights. - pixel_losses = self.criterion(logits, labels) * weights - pixel_losses = pixel_losses.contiguous().view(-1) - if self.top_k_percent_pixels == 1.0: - return pixel_losses.mean() - - top_k_pixels = int(self.top_k_percent_pixels * pixel_losses.numel()) - pixel_losses, _ = torch.topk(pixel_losses, top_k_pixels) - return pixel_losses.mean() diff --git a/spaces/Theivaprakasham/yolov6/yolov6/utils/config.py b/spaces/Theivaprakasham/yolov6/yolov6/utils/config.py deleted file mode 100644 index 7f9c13a3085e0738a3547fc35c5106defed4c489..0000000000000000000000000000000000000000 --- a/spaces/Theivaprakasham/yolov6/yolov6/utils/config.py +++ /dev/null @@ -1,101 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# The code is based on -# https://github.com/open-mmlab/mmcv/blob/master/mmcv/utils/config.py -# Copyright (c) OpenMMLab. 
- -import os.path as osp -import shutil -import sys -import tempfile -from importlib import import_module -from addict import Dict - - -class ConfigDict(Dict): - - def __missing__(self, name): - raise KeyError(name) - - def __getattr__(self, name): - try: - value = super(ConfigDict, self).__getattr__(name) - except KeyError: - ex = AttributeError("'{}' object has no attribute '{}'".format( - self.__class__.__name__, name)) - except Exception as e: - ex = e - else: - return value - raise ex - - -class Config(object): - - @staticmethod - def _file2dict(filename): - filename = str(filename) - if filename.endswith('.py'): - with tempfile.TemporaryDirectory() as temp_config_dir: - shutil.copyfile(filename, - osp.join(temp_config_dir, '_tempconfig.py')) - sys.path.insert(0, temp_config_dir) - mod = import_module('_tempconfig') - sys.path.pop(0) - cfg_dict = { - name: value - for name, value in mod.__dict__.items() - if not name.startswith('__') - } - # delete imported module - del sys.modules['_tempconfig'] - else: - raise IOError('Only .py type are supported now!') - cfg_text = filename + '\n' - with open(filename, 'r') as f: - cfg_text += f.read() - - return cfg_dict, cfg_text - - @staticmethod - def fromfile(filename): - cfg_dict, cfg_text = Config._file2dict(filename) - return Config(cfg_dict, cfg_text=cfg_text, filename=filename) - - def __init__(self, cfg_dict=None, cfg_text=None, filename=None): - if cfg_dict is None: - cfg_dict = dict() - elif not isinstance(cfg_dict, dict): - raise TypeError('cfg_dict must be a dict, but got {}'.format( - type(cfg_dict))) - - super(Config, self).__setattr__('_cfg_dict', ConfigDict(cfg_dict)) - super(Config, self).__setattr__('_filename', filename) - if cfg_text: - text = cfg_text - elif filename: - with open(filename, 'r') as f: - text = f.read() - else: - text = '' - super(Config, self).__setattr__('_text', text) - - @property - def filename(self): - return self._filename - - @property - def text(self): - return self._text - - def __repr__(self): - return 'Config (path: {}): {}'.format(self.filename, - self._cfg_dict.__repr__()) - - def __getattr__(self, name): - return getattr(self._cfg_dict, name) - - def __setattr__(self, name, value): - if isinstance(value, dict): - value = ConfigDict(value) - self._cfg_dict.__setattr__(name, value) diff --git a/spaces/UMich-siads699-fa22-spotamood/spotamood/pages/2_Model.py b/spaces/UMich-siads699-fa22-spotamood/spotamood/pages/2_Model.py deleted file mode 100644 index 284e6f37117d0a65432a33d41acfba6011d39b41..0000000000000000000000000000000000000000 --- a/spaces/UMich-siads699-fa22-spotamood/spotamood/pages/2_Model.py +++ /dev/null @@ -1,19 +0,0 @@ -import os -import streamlit as st -from PIL import Image - -path = os.path.dirname(__file__) - -st.markdown("# Information Retrieval") -st.sidebar.markdown("# Architect") -#TODO: Methodology: your report explains how you attempted to solve the problem and justifies your methodological approach. -#TODO: Technical depth: your report demonstrates mastery of learning objectives from multiple MADS courses. -#TODO: Context: your report cites at least 3 studies, blogs, academic articles, or other sources that are relevant to your project. All references in your report are correctly formatted with a consistent citation style (such as MLA or APA). 
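For context, a minimal sketch of how the `Config` helper above is typically consumed. `'configs/yolov6s.py'` is a hypothetical path; any importable `.py` file whose top-level names are plain values will load, since `_file2dict` imports it from a temporary directory and collects the non-dunder module attributes:

```python
from yolov6.utils.config import Config

cfg = Config.fromfile('configs/yolov6s.py')
print(cfg.filename)            # path the config was read from
print(cfg.text[:80])           # the original source text is kept alongside

# Attribute access proxies to the inner ConfigDict; assigned dicts are
# auto-wrapped so nested attribute access keeps working.
cfg.solver = dict(lr0=0.01)
print(cfg.solver.lr0)          # 0.01
```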
- -# open image file -pipeline = Image.open(os.path.join(path,'..','assets','Spot-A-Mood-Pipeline.png')) -st.image(pipeline, caption='Information Retrieval Architecture') - -with open(os.path.join(path,'model.md')) as f: - model = f.read() -st.markdown(model) \ No newline at end of file diff --git a/spaces/UltimateAICourse/Prompt-Engineering/index.html b/spaces/UltimateAICourse/Prompt-Engineering/index.html deleted file mode 100644 index 7ef80c6220483b5efd0f51fa7ecdcb09bc66d2e0..0000000000000000000000000000000000000000 --- a/spaces/UltimateAICourse/Prompt-Engineering/index.html +++ /dev/null @@ -1,19 +0,0 @@ - - - - - - Prompt Engineering - - - -
      -

Welcome to the Prompt Engineering training guide, brought to you by The Ultimate AI Course by Mark Fulton.

      -

      Here you will learn advanced techniques for prompt engineering to improve your results with ChatGPT or any LLM.

      -

      - Also don't forget to visit - The Ultimate AI Course. -

      -
      - - diff --git a/spaces/Uncleming/AIGPT/README.md b/spaces/Uncleming/AIGPT/README.md deleted file mode 100644 index 7164a72974354d2bc86c260733958819078e6630..0000000000000000000000000000000000000000 --- a/spaces/Uncleming/AIGPT/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AIGPT -emoji: 🐢 -colorFrom: gray -colorTo: yellow -sdk: docker -pinned: false -license: mit -app_port: 8081 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/datasets/datasets/cc_sbu_dataset.py b/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/datasets/datasets/cc_sbu_dataset.py deleted file mode 100644 index 80b658d97ad47052653cecf25daeb512793bfc7b..0000000000000000000000000000000000000000 --- a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/datasets/datasets/cc_sbu_dataset.py +++ /dev/null @@ -1,47 +0,0 @@ -import os -from PIL import Image -import webdataset as wds -from minigpt4.datasets.datasets.base_dataset import BaseDataset -from minigpt4.datasets.datasets.caption_datasets import CaptionDataset - - -class CCSBUDataset(BaseDataset): - def __init__(self, vis_processor, text_processor, location): - super().__init__(vis_processor=vis_processor, text_processor=text_processor) - - self.inner_dataset = wds.DataPipeline( - wds.ResampledShards(location), - wds.tarfile_to_samples(handler=wds.warn_and_continue), - wds.shuffle(1000, handler=wds.warn_and_continue), - wds.decode("pilrgb", handler=wds.warn_and_continue), - wds.to_tuple("jpg", "json", handler=wds.warn_and_continue), - wds.map_tuple(self.vis_processor, handler=wds.warn_and_continue), - wds.map(self.to_dict, handler=wds.warn_and_continue), - ) - - def to_dict(self, sample): - return { - "image": sample[0], - "answer": self.text_processor(sample[1]["caption"]), - } - - -class CCSBUAlignDataset(CaptionDataset): - - def __getitem__(self, index): - - # TODO this assumes image input, not general enough - ann = self.annotation[index] - - img_file = '{}.jpg'.format(ann["image_id"]) - image_path = os.path.join(self.vis_root, img_file) - image = Image.open(image_path).convert("RGB") - - image = self.vis_processor(image) - caption = ann["caption"] - - return { - "image": image, - "answer": caption, - "image_id": self.img_ids[ann["image_id"]], - } \ No newline at end of file diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/common_utils/wav_utils.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/common_utils/wav_utils.py deleted file mode 100644 index d3a563ee1749a58217ece55c9a08b8d93c0fc386..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/common_utils/wav_utils.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
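A hedged sketch of streaming the `CCSBUDataset` defined above. The shard pattern and both processors are placeholder assumptions; the inner webdataset pipeline yields the dicts built by `to_dict`:

```python
from torchvision import transforms

from minigpt4.datasets.datasets.cc_sbu_dataset import CCSBUDataset

vis_processor = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def text_processor(caption: str) -> str:
    # Placeholder caption cleaning.
    return caption.strip().lower()

dataset = CCSBUDataset(
    vis_processor=vis_processor,
    text_processor=text_processor,
    location='data/cc_sbu/{00000..00999}.tar',  # brace-expanded shard list
)

# ResampledShards makes the stream effectively infinite, so break early.
for sample in dataset.inner_dataset:
    print(sample['image'].shape, sample['answer'][:40])
    break
```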
- -from pathlib import Path -import typing as tp - -import torch -import torchaudio - - -def get_white_noise(chs: int = 1, num_frames: int = 1): - wav = torch.randn(chs, num_frames) - return wav - - -def get_batch_white_noise(bs: int = 1, chs: int = 1, num_frames: int = 1): - wav = torch.randn(bs, chs, num_frames) - return wav - - -def save_wav(path: str, wav: torch.Tensor, sample_rate: int): - fp = Path(path) - kwargs: tp.Dict[str, tp.Any] = {} - if fp.suffix == '.wav': - kwargs['encoding'] = 'PCM_S' - kwargs['bits_per_sample'] = 16 - elif fp.suffix == '.mp3': - kwargs['compression'] = 320 - torchaudio.save(str(fp), wav, sample_rate, **kwargs) diff --git a/spaces/XAI/CHM-Corr/model/base/correlation.py b/spaces/XAI/CHM-Corr/model/base/correlation.py deleted file mode 100644 index 024fc9eb717f2564562dcc0e776eec1ed7d6667d..0000000000000000000000000000000000000000 --- a/spaces/XAI/CHM-Corr/model/base/correlation.py +++ /dev/null @@ -1,68 +0,0 @@ -r""" Provides functions that creates/manipulates correlation matrices """ - -import math - -from torch.nn.functional import interpolate as resize -import torch - -from .geometry import Geometry - - -class Correlation: - - @classmethod - def mutual_nn_filter(cls, correlation_matrix, eps=1e-30): - r""" Mutual nearest neighbor filtering (Rocco et al. NeurIPS'18 )""" - corr_src_max = torch.max(correlation_matrix, dim=2, keepdim=True)[0] - corr_trg_max = torch.max(correlation_matrix, dim=1, keepdim=True)[0] - corr_src_max[corr_src_max == 0] += eps - corr_trg_max[corr_trg_max == 0] += eps - - corr_src = correlation_matrix / corr_src_max - corr_trg = correlation_matrix / corr_trg_max - - return correlation_matrix * (corr_src * corr_trg) - - @classmethod - def build_correlation6d(self, src_feat, trg_feat, scales, conv2ds): - r""" Build 6-dimensional correlation tensor """ - - bsz, _, side, side = src_feat.size() - - # Construct feature pairs with multiple scales - _src_feats = [] - _trg_feats = [] - for scale, conv in zip(scales, conv2ds): - s = (round(side * math.sqrt(scale)),) * 2 - _src_feat = conv(resize(src_feat, s, mode='bilinear', align_corners=True)) - _trg_feat = conv(resize(trg_feat, s, mode='bilinear', align_corners=True)) - _src_feats.append(_src_feat) - _trg_feats.append(_trg_feat) - - # Build multiple 4-dimensional correlation tensor - corr6d = [] - for src_feat in _src_feats: - ch = src_feat.size(1) - - src_side = src_feat.size(-1) - src_feat = src_feat.view(bsz, ch, -1).transpose(1, 2) - src_norm = src_feat.norm(p=2, dim=2, keepdim=True) - - for trg_feat in _trg_feats: - trg_side = trg_feat.size(-1) - trg_feat = trg_feat.view(bsz, ch, -1) - trg_norm = trg_feat.norm(p=2, dim=1, keepdim=True) - - correlation = torch.bmm(src_feat, trg_feat) / torch.bmm(src_norm, trg_norm) - correlation = correlation.view(bsz, src_side, src_side, trg_side, trg_side).contiguous() - corr6d.append(correlation) - - # Resize the spatial sizes of the 4D tensors to the same size - for idx, correlation in enumerate(corr6d): - corr6d[idx] = Geometry.interpolate4d(correlation, [side, side]) - - # Build 6-dimensional correlation tensor - corr6d = torch.stack(corr6d).view(len(scales), len(scales), - bsz, side, side, side, side).permute(2, 0, 1, 3, 4, 5, 6) - return corr6d.clamp(min=0) - diff --git a/spaces/Xule/ChuanhuChatGPT/modules/__init__.py b/spaces/Xule/ChuanhuChatGPT/modules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/XzJosh/Azuma-Bert-VITS2/text/japanese.py 
b/spaces/XzJosh/Azuma-Bert-VITS2/text/japanese.py deleted file mode 100644 index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Azuma-Bert-VITS2/text/japanese.py +++ /dev/null @@ -1,104 +0,0 @@ -# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py -import re -import sys - -import pyopenjtalk - -from text import symbols - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def preprocess_jap(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = [] - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - p = pyopenjtalk.g2p(sentence) - text += p.split(" ") - - if i < len(marks): - text += [marks[i].replace(' ', '')] - return text - -def text_normalize(text): - # todo: jap text normalize - return text - -def g2p(norm_text): - phones = preprocess_jap(norm_text) - phones = [post_replace_ph(i) for i in phones] - # todo: implement tones and word2ph - tones = [0 for i in phones] - word2ph = [1 for i in phones] - return phones, tones, word2ph - - -if __name__ == '__main__': - for line in open("../../../Downloads/transcript_utf8.txt").readlines(): - text = line.split(":")[1] - phones, tones, word2ph = g2p(text) - for p in phones: - if p == "z": - print(text, phones) - sys.exit(0) diff --git a/spaces/XzJosh/ShanBao-Bert-VITS2/modules.py b/spaces/XzJosh/ShanBao-Bert-VITS2/modules.py deleted file mode 100644 index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/ShanBao-Bert-VITS2/modules.py +++ /dev/null @@ -1,452 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import 
piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = 
torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - 
padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) 
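-    # The projection below emits the unnormalized spline parameters for each of
-    # the half_channels transformed channels: num_bins widths, num_bins heights,
-    # and num_bins - 1 derivatives at the interior knots (the 'linear' tails fix
-    # the boundary derivatives), hence num_bins * 3 - 1 values per channel.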
-    self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
-    self.proj.weight.data.zero_()
-    self.proj.bias.data.zero_()
-
-  def forward(self, x, x_mask, g=None, reverse=False):
-    x0, x1 = torch.split(x, [self.half_channels]*2, 1)
-    h = self.pre(x0)
-    h = self.convs(h, x_mask, g=g)
-    h = self.proj(h) * x_mask
-
-    b, c, t = x0.shape
-    h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
-    unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
-    unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
-    unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
-    x1, logabsdet = piecewise_rational_quadratic_transform(x1,
-        unnormalized_widths,
-        unnormalized_heights,
-        unnormalized_derivatives,
-        inverse=reverse,
-        tails='linear',
-        tail_bound=self.tail_bound
-    )
-
-    x = torch.cat([x0, x1], 1) * x_mask
-    logdet = torch.sum(logabsdet * x_mask, [1,2])
-    if not reverse:
-      return x, logdet
-    else:
-      return x
-
-
-class TransformerCouplingLayer(nn.Module):
-  def __init__(self,
-      channels,
-      hidden_channels,
-      kernel_size,
-      n_layers,
-      n_heads,
-      p_dropout=0,
-      filter_channels=0,
-      mean_only=False,
-      wn_sharing_parameter=None,
-      gin_channels = 0
-      ):
-    assert channels % 2 == 0, "channels should be divisible by 2"
-    super().__init__()
-    self.channels = channels
-    self.hidden_channels = hidden_channels
-    self.kernel_size = kernel_size
-    self.n_layers = n_layers
-    self.half_channels = channels // 2
-    self.mean_only = mean_only
-
-    self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
-    self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter
-    self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
-    self.post.weight.data.zero_()
-    self.post.bias.data.zero_()
-
-  def forward(self, x, x_mask, g=None, reverse=False):
-    x0, x1 = torch.split(x, [self.half_channels]*2, 1)
-    h = self.pre(x0) * x_mask
-    h = self.enc(h, x_mask, g=g)
-    stats = self.post(h) * x_mask
-    if not self.mean_only:
-      m, logs = torch.split(stats, [self.half_channels]*2, 1)
-    else:
-      m = stats
-      logs = torch.zeros_like(m)
-
-    if not reverse:
-      x1 = m + x1 * torch.exp(logs) * x_mask
-      x = torch.cat([x0, x1], 1)
-      logdet = torch.sum(logs, [1,2])
-      return x, logdet
-    else:
-      x1 = (x1 - m) * torch.exp(-logs) * x_mask
-      x = torch.cat([x0, x1], 1)
-      return x
diff --git a/spaces/XzJosh/ranran-Bert-VITS2/monotonic_align/__init__.py b/spaces/XzJosh/ranran-Bert-VITS2/monotonic_align/__init__.py
deleted file mode 100644
index 75603d26cf2b8d6196f5a68a89f9e49d8e519bc8..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/ranran-Bert-VITS2/monotonic_align/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from numpy import zeros, int32, float32
-from torch import from_numpy
-
-from .core import maximum_path_jit
-
-def maximum_path(neg_cent, mask):
-  device = neg_cent.device
-  dtype = neg_cent.dtype
-  neg_cent = neg_cent.data.cpu().numpy().astype(float32)
-  path = zeros(neg_cent.shape, dtype=int32)
-
-  t_t_max = mask.sum(1)[:, 
0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/YONG627/456123/yolov5-code-main/utils/loggers/clearml/clearml_utils.py b/spaces/YONG627/456123/yolov5-code-main/utils/loggers/clearml/clearml_utils.py deleted file mode 100644 index 2764abe90da80a7b270bca9c0fd89b99ec25af3b..0000000000000000000000000000000000000000 --- a/spaces/YONG627/456123/yolov5-code-main/utils/loggers/clearml/clearml_utils.py +++ /dev/null @@ -1,164 +0,0 @@ -"""Main Logger class for ClearML experiment tracking.""" -import glob -import re -from pathlib import Path - -import numpy as np -import yaml - -from utils.plots import Annotator, colors - -try: - import clearml - from clearml import Dataset, Task - - assert hasattr(clearml, '__version__') # verify package import not local dir -except (ImportError, AssertionError): - clearml = None - - -def construct_dataset(clearml_info_string): - """Load in a clearml dataset and fill the internal data_dict with its contents. - """ - dataset_id = clearml_info_string.replace('clearml://', '') - dataset = Dataset.get(dataset_id=dataset_id) - dataset_root_path = Path(dataset.get_local_copy()) - - # We'll search for the yaml file definition in the dataset - yaml_filenames = list(glob.glob(str(dataset_root_path / '*.yaml')) + glob.glob(str(dataset_root_path / '*.yml'))) - if len(yaml_filenames) > 1: - raise ValueError('More than one yaml file was found in the dataset root, cannot determine which one contains ' - 'the dataset definition this way.') - elif len(yaml_filenames) == 0: - raise ValueError('No yaml definition found in dataset root path, check that there is a correct yaml file ' - 'inside the dataset root path.') - with open(yaml_filenames[0]) as f: - dataset_definition = yaml.safe_load(f) - - assert set(dataset_definition.keys()).issuperset( - {'train', 'test', 'val', 'nc', 'names'} - ), "The right keys were not found in the yaml file, make sure it at least has the following keys: ('train', 'test', 'val', 'nc', 'names')" - - data_dict = dict() - data_dict['train'] = str( - (dataset_root_path / dataset_definition['train']).resolve()) if dataset_definition['train'] else None - data_dict['test'] = str( - (dataset_root_path / dataset_definition['test']).resolve()) if dataset_definition['test'] else None - data_dict['val'] = str( - (dataset_root_path / dataset_definition['val']).resolve()) if dataset_definition['val'] else None - data_dict['nc'] = dataset_definition['nc'] - data_dict['names'] = dataset_definition['names'] - - return data_dict - - -class ClearmlLogger: - """Log training runs, datasets, models, and predictions to ClearML. - - This logger sends information to ClearML at app.clear.ml or to your own hosted server. By default, - this information includes hyperparameters, system configuration and metrics, model metrics, code information and - basic data metrics and analyses. - - By providing additional command line arguments to train.py, datasets, - models and predictions can also be logged. 
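-
-    A minimal usage sketch (assuming the standard ``train.py`` ``opt`` namespace, ``hyp`` dict
-    and ``save_dir`` path; in practice the logger is constructed for you inside
-    ``utils/loggers/__init__.py``)::
-
-        logger = ClearmlLogger(opt, hyp)
-        if logger.clearml:
-            logger.log_debug_samples(sorted(save_dir.glob('train_batch*.jpg')), title='Mosaics')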
-    """
-
-    def __init__(self, opt, hyp):
-        """
-        - Initialize ClearML Task, this object will capture the experiment
-        - Upload dataset version to ClearML Data if opt.upload_dataset is True
-
-        arguments:
-        opt (namespace) -- Commandline arguments for this run
-        hyp (dict) -- Hyperparameters for this run
-
-        """
-        self.current_epoch = 0
-        # Keep track of the number of logged images to enforce a limit
-        self.current_epoch_logged_images = set()
-        # Maximum number of images to log to ClearML per epoch
-        self.max_imgs_to_log_per_epoch = 16
-        # Get the interval of epochs when bounding box images should be logged
-        self.bbox_interval = opt.bbox_interval
-        self.clearml = clearml
-        self.task = None
-        self.data_dict = None
-        if self.clearml:
-            self.task = Task.init(
-                project_name=opt.project if opt.project != 'runs/train' else 'YOLOv5',
-                task_name=opt.name if opt.name != 'exp' else 'Training',
-                tags=['YOLOv5'],
-                output_uri=True,
-                reuse_last_task_id=opt.exist_ok,
-                auto_connect_frameworks={'pytorch': False}
-                # We disconnect pytorch auto-detection, because we added manual model save points in the code
-            )
-            # ClearML's hooks will already grab all general parameters
-            # Only the hyperparameters coming from the yaml config file
-            # will have to be added manually!
-            self.task.connect(hyp, name='Hyperparameters')
-            self.task.connect(opt, name='Args')
-
-            # Make sure the code is easily remotely runnable by setting the docker image to use by the remote agent
-            self.task.set_base_docker('ultralytics/yolov5:latest',
-                                      docker_arguments='--ipc=host -e="CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1"',
-                                      docker_setup_bash_script='pip install clearml')
-
-            # Get ClearML Dataset Version if requested
-            if opt.data.startswith('clearml://'):
-                # data_dict should have the following keys:
-                # names, nc (number of classes), test, train, val (all three relative paths to ../datasets)
-                self.data_dict = construct_dataset(opt.data)
-                # Set data to data_dict because wandb will crash without this information and opt is the best way
-                # to give it to them
-                opt.data = self.data_dict
-
-    def log_debug_samples(self, files, title='Debug Samples'):
-        """
-        Log files (images) as debug samples in the ClearML task.
-
-        arguments:
-        files (List(PosixPath)): a list of file paths in PosixPath format
-        title (str): A title that groups together images with the same values
-        """
-        for f in files:
-            if f.exists():
-                it = re.search(r'_batch(\d+)', f.name)
-                iteration = int(it.groups()[0]) if it else 0
-                self.task.get_logger().report_image(title=title,
-                                                    series=f.name.replace(it.group(), '') if it else f.name,
-                                                    local_path=str(f),
-                                                    iteration=iteration)
-
-    def log_image_with_boxes(self, image_path, boxes, class_names, image, conf_threshold=0.25):
-        """
-        Draw the bounding boxes on a single image and report the result as a ClearML debug sample.
-
-        arguments:
-        image_path (PosixPath) the path to the original image file
-        boxes (list): list of scaled predictions in the format
-            [xmin, ymin, xmax, ymax, confidence, class]
-        class_names (dict): dict containing mapping of class int to class name
-        image (Tensor): A torch tensor containing the actual image data
-        """
-        if len(self.current_epoch_logged_images) < self.max_imgs_to_log_per_epoch and self.current_epoch >= 0:
-            # Log every bbox_interval times and deduplicate for any intermittent extra eval runs
-            if self.current_epoch % self.bbox_interval == 0 and image_path not in self.current_epoch_logged_images:
-                im = np.ascontiguousarray(np.moveaxis(image.mul(255).clamp(0, 255).byte().cpu().numpy(), 0, 2))
-                annotator = Annotator(im=im, pil=True)
-                for i, (conf, class_nr, box) in enumerate(zip(boxes[:, 4], boxes[:, 5], boxes[:, :4])):
-                    color = colors(i)
-
-                    class_name = class_names[int(class_nr)]
-                    confidence_percentage = round(float(conf) * 100, 2)
-                    label = f'{class_name}: {confidence_percentage}%'
-
-                    if conf > conf_threshold:
-                        annotator.rectangle(box.cpu().numpy(), outline=color)
-                        annotator.box_label(box.cpu().numpy(), label=label, color=color)
-
-                annotated_image = annotator.result()
-                self.task.get_logger().report_image(title='Bounding Boxes',
-                                                    series=image_path.name,
-                                                    iteration=self.current_epoch,
-                                                    image=annotated_image)
-                self.current_epoch_logged_images.add(image_path)
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/datasets/README.md b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/datasets/README.md
deleted file mode 100644
index 0eb44cc3b23beeb1755ab8d12002d26f13434235..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/datasets/README.md
+++ /dev/null
@@ -1,140 +0,0 @@
-# Use Builtin Datasets
-
-A dataset can be used by accessing [DatasetCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.DatasetCatalog)
-for its data, or [MetadataCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.MetadataCatalog) for its metadata (class names, etc).
-This document explains how to set up the builtin datasets so they can be used by the above APIs.
-[Use Custom Datasets](https://detectron2.readthedocs.io/tutorials/datasets.html) gives a deeper dive on how to use `DatasetCatalog` and `MetadataCatalog`,
-and how to add new datasets to them.
-
-Detectron2 has builtin support for a few datasets.
-The datasets are assumed to exist in a directory specified by the environment variable
-`DETECTRON2_DATASETS`.
-Under this directory, detectron2 will look for datasets in the structure described below, if needed.
-```
-$DETECTRON2_DATASETS/
-  coco/
-  lvis/
-  cityscapes/
-  VOC20{07,12}/
-```
-
-You can set the location for builtin datasets by `export DETECTRON2_DATASETS=/path/to/datasets`.
-If left unset, the default is `./datasets` relative to your current working directory.
-
-The [model zoo](https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md)
-contains configs and models that use these builtin datasets.
-
-## Expected dataset structure for [COCO instance/keypoint detection](https://cocodataset.org/#download):
-
-```
-coco/
-  annotations/
-    instances_{train,val}2017.json
-    person_keypoints_{train,val}2017.json
-  {train,val}2017/
-    # image files that are mentioned in the corresponding json
-```
-
-You can use the 2014 version of the dataset as well.
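-
-Once this layout is in place, a quick way to verify it is to load the registered dataset through the
-catalog APIs mentioned at the top of this page. A minimal sketch (it assumes the COCO 2017 val files
-above have been downloaded, and that `DETECTRON2_DATASETS` is set before detectron2 is imported):
-
-```python
-import os
-os.environ.setdefault("DETECTRON2_DATASETS", "/path/to/datasets")  # set before importing detectron2
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-
-dataset_dicts = DatasetCatalog.get("coco_2017_val")  # list of per-image dicts
-metadata = MetadataCatalog.get("coco_2017_val")      # class names, etc.
-print(len(dataset_dicts), metadata.thing_classes[:5])
-```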
-
-Some of the builtin tests (`dev/run_*_tests.sh`) use a tiny version of the COCO dataset,
-which you can download with `./datasets/prepare_for_tests.sh`.
-
-## Expected dataset structure for PanopticFPN:
-
-Extract panoptic annotations from [COCO website](https://cocodataset.org/#download)
-into the following structure:
-```
-coco/
-  annotations/
-    panoptic_{train,val}2017.json
-  panoptic_{train,val}2017/  # png annotations
-  panoptic_stuff_{train,val}2017/  # generated by the script mentioned below
-```
-
-Install panopticapi by:
-```
-pip install git+https://github.com/cocodataset/panopticapi.git
-```
-Then, run `python datasets/prepare_panoptic_fpn.py`, to extract semantic annotations from panoptic annotations.
-
-## Expected dataset structure for [LVIS instance segmentation](https://www.lvisdataset.org/dataset):
-```
-coco/
-  {train,val,test}2017/
-lvis/
-  lvis_v0.5_{train,val}.json
-  lvis_v0.5_image_info_test.json
-  lvis_v1_{train,val}.json
-  lvis_v1_image_info_test{,_challenge}.json
-```
-
-Install lvis-api by:
-```
-pip install git+https://github.com/lvis-dataset/lvis-api.git
-```
-
-To evaluate models trained on the COCO dataset using LVIS annotations,
-run `python datasets/prepare_cocofied_lvis.py` to prepare "cocofied" LVIS annotations.
-
-## Expected dataset structure for [cityscapes](https://www.cityscapes-dataset.com/downloads/):
-```
-cityscapes/
-  gtFine/
-    train/
-      aachen/
-        color.png, instanceIds.png, labelIds.png, polygons.json,
-        labelTrainIds.png
-      ...
-    val/
-    test/
-    # below are generated Cityscapes panoptic annotation
-    cityscapes_panoptic_train.json
-    cityscapes_panoptic_train/
-    cityscapes_panoptic_val.json
-    cityscapes_panoptic_val/
-    cityscapes_panoptic_test.json
-    cityscapes_panoptic_test/
-  leftImg8bit/
-    train/
-    val/
-    test/
-```
-Install cityscapes scripts by:
-```
-pip install git+https://github.com/mcordts/cityscapesScripts.git
-```
-
-Note: to create labelTrainIds.png, first prepare the above structure, then run the cityscapes script with:
-```
-CITYSCAPES_DATASET=/path/to/abovementioned/cityscapes python cityscapesscripts/preparation/createTrainIdLabelImgs.py
-```
-These files are not needed for instance segmentation.
-
-Note: to generate Cityscapes panoptic dataset, run the cityscapes script with:
-```
-CITYSCAPES_DATASET=/path/to/abovementioned/cityscapes python cityscapesscripts/preparation/createPanopticImgs.py
-```
-These files are not needed for semantic and instance segmentation.
-
-## Expected dataset structure for [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/index.html):
-```
-VOC20{07,12}/
-  Annotations/
-  ImageSets/
-    Main/
-      trainval.txt
-      test.txt
-      # train.txt or val.txt, if you use these splits
-  JPEGImages/
-```
-
-## Expected dataset structure for [ADE20k Scene Parsing](http://sceneparsing.csail.mit.edu/):
-```
-ADEChallengeData2016/
-  annotations/
-  annotations_detectron2/
-  images/
-  objectInfo150.txt
-```
-The directory `annotations_detectron2` is generated by running `python datasets/prepare_ade20k_sem_seg.py`.
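-
-If none of the builtin layouts above fit your data, a dataset can also be registered in code instead;
-see the [Use Custom Datasets](https://detectron2.readthedocs.io/tutorials/datasets.html) tutorial linked
-at the top. A minimal sketch (the dataset name and loader below are hypothetical):
-
-```python
-from detectron2.data import DatasetCatalog, MetadataCatalog
-
-def my_dataset():
-    # return a list of dicts in detectron2's standard dataset format
-    return [{"file_name": "img.jpg", "image_id": 0, "height": 480, "width": 640, "annotations": []}]
-
-DatasetCatalog.register("my_dataset_train", my_dataset)
-MetadataCatalog.get("my_dataset_train").thing_classes = ["my_class"]
-```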
diff --git a/spaces/YlcldKlns/bing/src/components/ui/voice/index.tsx b/spaces/YlcldKlns/bing/src/components/ui/voice/index.tsx
deleted file mode 100644
index 4adcb632226bfced8b97092782811edf08b56569..0000000000000000000000000000000000000000
--- a/spaces/YlcldKlns/bing/src/components/ui/voice/index.tsx
+++ /dev/null
@@ -1,28 +0,0 @@
-import './index.scss'
-
-export interface VoiceProps extends CSSPropertyRule {
-  num?: number;
-  duration?: number;
-}
-export default function Voice({ duration = 400, num = 7, ...others }) {
-  return (
-    <div {...others}>
-      {Array.from({ length: num }).map((_, index) => {
-        const randomDuration = Math.random() * 100 + duration
-        const initialDelay = Math.random() * 2 * duration
-        const initialScale = Math.sin((index + 1) * Math.PI / num)
-        return (
-          <div
-            key={index}
-            style={{
-              animationDelay: `${initialDelay}ms`,
-              animationDuration: `${randomDuration}ms`,
-              transform: `scale(${initialScale})`
-            }}
-          />
-        )
-      })}
-    </div>
      - ) -} diff --git a/spaces/Yudha515/Rvc-Models/audiocraft/data/zip.py b/spaces/Yudha515/Rvc-Models/audiocraft/data/zip.py deleted file mode 100644 index 1f1154231da321dd38d151ff285dbcff5e38a6e0..0000000000000000000000000000000000000000 --- a/spaces/Yudha515/Rvc-Models/audiocraft/data/zip.py +++ /dev/null @@ -1,74 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing -import zipfile - -from dataclasses import dataclass -from functools import lru_cache -from typing_extensions import Literal - - -DEFAULT_SIZE = 32 -MODE = Literal['r', 'w', 'x', 'a'] - - -@dataclass(order=True) -class PathInZip: - """Class for holding a path of file within a zip file. - - Args: - path: The convention is : - Let's assume there is a zip file /some/location/foo.zip - and inside of it is a json file located at /data/file1.json, - Then we expect path = "/some/location/foo.zip:/data/file1.json" - """ - - INFO_PATH_SEP = ':' - zip_path: str - file_path: str - - def __init__(self, path: str) -> None: - split_path = path.split(self.INFO_PATH_SEP) - assert len(split_path) == 2 - self.zip_path, self.file_path = split_path - - @classmethod - def from_paths(cls, zip_path: str, file_path: str): - return cls(zip_path + cls.INFO_PATH_SEP + file_path) - - def __str__(self) -> str: - return self.zip_path + self.INFO_PATH_SEP + self.file_path - - -def _open_zip(path: str, mode: MODE = 'r'): - return zipfile.ZipFile(path, mode) - - -_cached_open_zip = lru_cache(DEFAULT_SIZE)(_open_zip) - - -def set_zip_cache_size(max_size: int): - """Sets the maximal LRU caching for zip file opening. - - Args: - max_size: the maximal LRU cache. - """ - global _cached_open_zip - _cached_open_zip = lru_cache(max_size)(_open_zip) - - -def open_file_in_zip(path_in_zip: PathInZip, mode: str = 'r') -> typing.IO: - """Opens a file stored inside a zip and returns a file-like object. - - Args: - path_in_zip: A PathInZip object representing the file to return a file-like object of. - mode: The mode in which to open the file with. - Returns: - A file-like object for PathInZip. - """ - zf = _cached_open_zip(path_in_zip.zip_path) - return zf.open(path_in_zip.file_path) diff --git a/spaces/Yudha515/Rvc-Models/audiocraft/models/builders.py b/spaces/Yudha515/Rvc-Models/audiocraft/models/builders.py deleted file mode 100644 index 77ee5f96fea2e3c9e475fe961bc1a5ee473ed8eb..0000000000000000000000000000000000000000 --- a/spaces/Yudha515/Rvc-Models/audiocraft/models/builders.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -All the functions to build the relevant models and modules -from the Hydra config. -""" - -import typing as tp -import warnings - -import audiocraft -import omegaconf -import torch - -from .encodec import CompressionModel, EncodecModel, FlattenedCompressionModel # noqa -from .lm import LMModel -from ..modules.codebooks_patterns import ( - CodebooksPatternProvider, - DelayedPatternProvider, - ParallelPatternProvider, - UnrolledPatternProvider, - VALLEPattern, - MusicLMPattern, -) -from ..modules.conditioners import ( - BaseConditioner, - ConditioningProvider, - LUTConditioner, - T5Conditioner, - ConditionFuser, - ChromaStemConditioner, -) -from .. 
import quantization as qt -from ..utils.utils import dict_from_config - - -def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer: - klass = { - 'no_quant': qt.DummyQuantizer, - 'rvq': qt.ResidualVectorQuantizer - }[quantizer] - kwargs = dict_from_config(getattr(cfg, quantizer)) - if quantizer != 'no_quant': - kwargs['dimension'] = dimension - return klass(**kwargs) - - -def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig): - if encoder_name == 'seanet': - kwargs = dict_from_config(getattr(cfg, 'seanet')) - encoder_override_kwargs = kwargs.pop('encoder') - decoder_override_kwargs = kwargs.pop('decoder') - encoder_kwargs = {**kwargs, **encoder_override_kwargs} - decoder_kwargs = {**kwargs, **decoder_override_kwargs} - encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs) - return encoder, decoder - else: - raise KeyError(f'Unexpected compression model {cfg.compression_model}') - - -def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel: - """Instantiate a compression model. - """ - if cfg.compression_model == 'encodec': - kwargs = dict_from_config(getattr(cfg, 'encodec')) - encoder_name = kwargs.pop('autoencoder') - quantizer_name = kwargs.pop('quantizer') - encoder, decoder = get_encodec_autoencoder(encoder_name, cfg) - quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension) - frame_rate = kwargs['sample_rate'] // encoder.hop_length - renormalize = kwargs.pop('renormalize', None) - renorm = kwargs.pop('renorm') - if renormalize is None: - renormalize = renorm is not None - warnings.warn("You are using a deprecated EnCodec model. Please migrate to new renormalization.") - return EncodecModel(encoder, decoder, quantizer, - frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device) - else: - raise KeyError(f'Unexpected compression model {cfg.compression_model}') - - -def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel: - """Instantiate a transformer LM. 
-    """
-    if cfg.lm_model == 'transformer_lm':
-        kwargs = dict_from_config(getattr(cfg, 'transformer_lm'))
-        n_q = kwargs['n_q']
-        q_modeling = kwargs.pop('q_modeling', None)
-        codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern')
-        attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout'))
-        cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance'))
-        cfg_prob, cfg_coef = cls_free_guidance["training_dropout"], cls_free_guidance["inference_coef"]
-        fuser = get_condition_fuser(cfg)
-        condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device)
-        if len(fuser.fuse2cond['cross']) > 0:  # enforce cross-attention programmatically
-            kwargs['cross_attention'] = True
-        if codebooks_pattern_cfg.modeling is None:
-            assert q_modeling is not None, \
-                'LM model should either have a codebook pattern defined or transformer_lm.q_modeling'
-            codebooks_pattern_cfg = omegaconf.OmegaConf.create(
-                {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}}
-            )
-        pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg)
-        return LMModel(
-            pattern_provider=pattern_provider,
-            condition_provider=condition_provider,
-            fuser=fuser,
-            cfg_dropout=cfg_prob,
-            cfg_coef=cfg_coef,
-            attribute_dropout=attribute_dropout,
-            dtype=getattr(torch, cfg.dtype),
-            device=cfg.device,
-            **kwargs
-        ).to(cfg.device)
-    else:
-        raise KeyError(f'Unexpected LM model {cfg.lm_model}')
-
-
-def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider:
-    """Instantiate a conditioning model.
-    """
-    device = cfg.device
-    duration = cfg.dataset.segment_duration
-    cfg = getattr(cfg, "conditioners")
-    cfg = omegaconf.OmegaConf.create({}) if cfg is None else cfg
-    conditioners: tp.Dict[str, BaseConditioner] = {}
-    with omegaconf.open_dict(cfg):
-        condition_provider_args = cfg.pop('args', {})
-    for cond, cond_cfg in cfg.items():
-        model_type = cond_cfg["model"]
-        model_args = cond_cfg[model_type]
-        if model_type == "t5":
-            conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args)
-        elif model_type == "lut":
-            conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args)
-        elif model_type == "chroma_stem":
-            model_args.pop('cache_path', None)
-            conditioners[str(cond)] = ChromaStemConditioner(
-                output_dim=output_dim,
-                duration=duration,
-                device=device,
-                **model_args
-            )
-        else:
-            raise ValueError(f"unrecognized conditioning model: {model_type}")
-    conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args)
-    return conditioner
-
-
-def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser:
-    """Instantiate a condition fuser object.
-    """
-    fuser_cfg = getattr(cfg, "fuser")
-    fuser_methods = ["sum", "cross", "prepend", "input_interpolate"]
-    fuse2cond = {k: fuser_cfg[k] for k in fuser_methods}
-    kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods}
-    fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs)
-    return fuser
-
-
-def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider:
-    """Instantiate a codebooks pattern provider object.
-    """
-    pattern_providers = {
-        'parallel': ParallelPatternProvider,
-        'delay': DelayedPatternProvider,
-        'unroll': UnrolledPatternProvider,
-        'valle': VALLEPattern,
-        'musiclm': MusicLMPattern,
-    }
-    name = cfg.modeling
-    kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {}
-    klass = pattern_providers[name]
-    return klass(n_q, **kwargs)
-
-
-def get_debug_compression_model(device='cpu'):
-    """Instantiate a debug compression model to be used for unit tests.
-    """
-    seanet_kwargs = {
-        'n_filters': 4,
-        'n_residual_layers': 1,
-        'dimension': 32,
-        'ratios': [10, 8, 16]  # 25 Hz at 32kHz
-    }
-    encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs)
-    decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs)
-    quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4)
-    init_x = torch.randn(8, 32, 128)
-    quantizer(init_x, 1)  # initialize kmeans etc.
-    compression_model = EncodecModel(
-        encoder, decoder, quantizer,
-        frame_rate=25, sample_rate=32000, channels=1).to(device)
-    return compression_model.eval()
-
-
-def get_debug_lm_model(device='cpu'):
-    """Instantiate a debug LM to be used for unit tests.
-    """
-    pattern = DelayedPatternProvider(n_q=4)
-    dim = 16
-    providers = {
-        'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"),
-    }
-    condition_provider = ConditioningProvider(providers)
-    fuser = ConditionFuser(
-        {'cross': ['description'], 'prepend': [],
-         'sum': [], 'input_interpolate': []})
-    lm = LMModel(
-        pattern, condition_provider, fuser,
-        n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2,
-        cross_attention=True, causal=True)
-    return lm.to(device).eval()
diff --git a/spaces/Yuliang/ICON/lib/pymaf/utils/streamer.py b/spaces/Yuliang/ICON/lib/pymaf/utils/streamer.py
deleted file mode 100644
index 1753677159f9550dc26c8b40c04b3713f90b959b..0000000000000000000000000000000000000000
--- a/spaces/Yuliang/ICON/lib/pymaf/utils/streamer.py
+++ /dev/null
@@ -1,142 +0,0 @@
-import cv2
-import torch
-import numpy as np
-import imageio
-
-
-def aug_matrix(w1, h1, w2, h2):
-    dx = (w2 - w1) / 2.0
-    dy = (h2 - h1) / 2.0
-
-    matrix_trans = np.array([[1.0, 0, dx],
-                             [0, 1.0, dy],
-                             [0, 0, 1.0]])
-
-    scale = np.min([float(w2)/w1, float(h2)/h1])
-
-    M = get_affine_matrix(
-        center=(w2 / 2.0, h2 / 2.0),
-        translate=(0, 0),
-        scale=scale)
-
-    M = np.array(M + [0., 0., 1.]).reshape(3, 3)
-    M = M.dot(matrix_trans)
-
-    return M
-
-
-def get_affine_matrix(center, translate, scale):
-    cx, cy = center
-    tx, ty = translate
-
-    M = [1, 0, 0,
-         0, 1, 0]
-    M = [x * scale for x in M]
-
-    # Apply inverse of center translation: RSS * C^-1
-    M[2] += M[0] * (-cx) + M[1] * (-cy)
-    M[5] += M[3] * (-cx) + M[4] * (-cy)
-
-    # Apply center translation and translation: T * C * RSS * C^-1
-    M[2] += cx + tx
-    M[5] += cy + ty
-    return M
-
-
-class BaseStreamer():
-    """This streamer will return images at 512x512 size.
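-
-    A subclass only needs to implement ``create_loader`` as a generator of RGB frames.
-    Intended usage is a sketch like the following (the file names are hypothetical)::
-
-        streamer = ImageListStreamer(['frame_000.png', 'frame_001.png'])
-        frames = torch.stack([streamer[i] for i in range(len(streamer))])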
-    """
-
-    def __init__(self,
-                 width=512, height=512, pad=True,
-                 mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5),
-                 **kwargs):
-        self.width = width
-        self.height = height
-        self.pad = pad
-        self.mean = np.array(mean)
-        self.std = np.array(std)
-
-        self.loader = self.create_loader()
-
-    def create_loader(self):
-        raise NotImplementedError
-        yield np.zeros((600, 400, 3))  # in RGB (0, 255); the yield makes this a generator
-
-    def __getitem__(self, index):
-        image = next(self.loader)
-        in_height, in_width, _ = image.shape
-        # NOTE: aug_matrix() only takes the source and target sizes
-        M = aug_matrix(in_width, in_height, self.width, self.height)
-        image = cv2.warpAffine(
-            image, M[0:2, :], (self.width, self.height), flags=cv2.INTER_CUBIC)
-
-        input = np.float32(image)
-        input = (input / 255.0 - self.mean) / self.std  # TO [-1.0, 1.0]
-        input = input.transpose(2, 0, 1)  # TO [3 x H x W]
-        return torch.from_numpy(input).float()
-
-    def __len__(self):
-        raise NotImplementedError
-
-
-class CaptureStreamer(BaseStreamer):
-    """This streamer takes webcam as input.
-    """
-
-    def __init__(self, id=0, width=512, height=512, pad=True, **kwargs):
-        super().__init__(width, height, pad, **kwargs)
-        self.capture = cv2.VideoCapture(id)
-
-    def create_loader(self):
-        while True:
-            _, image = self.capture.read()
-            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # RGB
-            yield image
-
-    def __len__(self):
-        return 100_000_000
-
-    def __del__(self):
-        self.capture.release()
-
-
-class VideoListStreamer(BaseStreamer):
-    """This streamer takes a list of video files as input.
-    """
-
-    def __init__(self, files, width=512, height=512, pad=True, **kwargs):
-        super().__init__(width, height, pad, **kwargs)
-        self.files = files
-        self.captures = [imageio.get_reader(f) for f in files]
-        self.nframes = sum([int(cap._meta["fps"] * cap._meta["duration"])
-                            for cap in self.captures])
-
-    def create_loader(self):
-        for capture in self.captures:
-            for image in capture:  # RGB
-                yield image
-
-    def __len__(self):
-        return self.nframes
-
-    def __del__(self):
-        for capture in self.captures:
-            capture.close()
-
-
-class ImageListStreamer(BaseStreamer):
-    """This streamer takes a list of image files as input.
-    """
-
-    def __init__(self, files, width=512, height=512, pad=True, **kwargs):
-        super().__init__(width, height, pad, **kwargs)
-        self.files = files
-
-    def create_loader(self):
-        for f in self.files:
-            image = cv2.imread(f, cv2.IMREAD_UNCHANGED)[:, :, 0:3]
-            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # RGB
-            yield image
-
-    def __len__(self):
-        return len(self.files)
diff --git a/spaces/abdvl/datahub_qa_bot/docs/how/add-custom-ingestion-source.md b/spaces/abdvl/datahub_qa_bot/docs/how/add-custom-ingestion-source.md
deleted file mode 100644
index 3678a54b2c349c9cd9f2a1e394ccfc88ba554596..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/how/add-custom-ingestion-source.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: "Using a Custom Ingestion Source"
----
-
-
-# How to use a custom ingestion source without forking Datahub?
-
-Adding a custom ingestion source is the easiest way to extend Datahub's ingestion framework to support source systems
-which are not yet officially supported by Datahub.
-
-## What you need to do
-
-The first thing to do is to build a custom source, as described in
-the [metadata-ingestion source guide](../../metadata-ingestion/adding-source.md), in your own project.
-
-### How to use this source?
-
-:::note
-[UI Based Ingestion](../ui-ingestion.md) currently does not support custom ingestion sources.
-:::
-
-To be able to use this source you just need to do a few things.
-
-1. Build a python package out of your project including the custom source class.
-2. Install this package in the working environment where you are using the Datahub CLI to ingest metadata.
-
-You can then reference your ingestion source class as the type in a YAML recipe by using its fully qualified
-package name. For example, if your project structure looks like `/src/my-source/custom_ingestion_source.py`
-with the custom source class named `MySourceClass`, your YAML recipe would look like the following:
-
-```yaml
-source:
-  type: my-source.custom_ingestion_source.MySourceClass
-  config:
-    # place for your custom config defined in the configModel
-```
-
-If you now execute the ingestion, the Datahub client will pick up your code, call the `get_workunits` method, and do
-the rest for you. That's it.
-
-### Example code?
-
-For examples of how this setup can look, and for a good starting point for building your first custom source, visit
-our [meta-world](https://github.com/acryldata/meta-world) example repository.
\ No newline at end of file
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/image/geometric.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/image/geometric.py
deleted file mode 100644
index cf97c201cb4e43796c911919d03fb26a07ed817d..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/image/geometric.py
+++ /dev/null
@@ -1,728 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numbers
-
-import cv2
-import numpy as np
-
-from ..utils import to_2tuple
-from .io import imread_backend
-
-try:
-    from PIL import Image
-except ImportError:
-    Image = None
-
-
-def _scale_size(size, scale):
-    """Rescale a size by a ratio.
-
-    Args:
-        size (tuple[int]): (w, h).
-        scale (float | tuple(float)): Scaling factor.
-
-    Returns:
-        tuple[int]: scaled size.
-    """
-    if isinstance(scale, (float, int)):
-        scale = (scale, scale)
-    w, h = size
-    return int(w * float(scale[0]) + 0.5), int(h * float(scale[1]) + 0.5)
-
-
-cv2_interp_codes = {
-    'nearest': cv2.INTER_NEAREST,
-    'bilinear': cv2.INTER_LINEAR,
-    'bicubic': cv2.INTER_CUBIC,
-    'area': cv2.INTER_AREA,
-    'lanczos': cv2.INTER_LANCZOS4
-}
-
-if Image is not None:
-    pillow_interp_codes = {
-        'nearest': Image.NEAREST,
-        'bilinear': Image.BILINEAR,
-        'bicubic': Image.BICUBIC,
-        'box': Image.BOX,
-        'lanczos': Image.LANCZOS,
-        'hamming': Image.HAMMING
-    }
-
-
-def imresize(img,
-             size,
-             return_scale=False,
-             interpolation='bilinear',
-             out=None,
-             backend=None):
-    """Resize image to a given size.
-
-    Args:
-        img (ndarray): The input image.
-        size (tuple[int]): Target size (w, h).
-        return_scale (bool): Whether to return `w_scale` and `h_scale`.
-        interpolation (str): Interpolation method, accepted values are
-            "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2'
-            backend, "nearest", "bilinear" for 'pillow' backend.
-        out (ndarray): The output destination.
-        backend (str | None): The image resize backend type. Options are `cv2`,
-            `pillow`, `None`. If backend is None, the global imread_backend
-            specified by ``mmcv.use_backend()`` will be used. Default: None.
-
-    Returns:
-        tuple | ndarray: (`resized_img`, `w_scale`, `h_scale`) or
-            `resized_img`.
-    """
-    h, w = img.shape[:2]
-    if backend is None:
-        backend = imread_backend
-    if backend not in ['cv2', 'pillow']:
-        raise ValueError(f'backend: {backend} is not supported for resize.'
-                         f"Supported backends are 'cv2', 'pillow'")
-
-    if backend == 'pillow':
-        assert img.dtype == np.uint8, 'Pillow backend only supports uint8 type'
-        pil_image = Image.fromarray(img)
-        pil_image = pil_image.resize(size, pillow_interp_codes[interpolation])
-        resized_img = np.array(pil_image)
-    else:
-        resized_img = cv2.resize(
-            img, size, dst=out, interpolation=cv2_interp_codes[interpolation])
-    if not return_scale:
-        return resized_img
-    else:
-        w_scale = size[0] / w
-        h_scale = size[1] / h
-        return resized_img, w_scale, h_scale
-
-
-def imresize_to_multiple(img,
-                         divisor,
-                         size=None,
-                         scale_factor=None,
-                         keep_ratio=False,
-                         return_scale=False,
-                         interpolation='bilinear',
-                         out=None,
-                         backend=None):
-    """Resize an image according to a given size or scale factor, then round
-    the resized or rescaled image size up to the nearest value that can be
-    divided by the divisor.
-
-    Args:
-        img (ndarray): The input image.
-        divisor (int | tuple): Resized image size will be a multiple of
-            divisor. If divisor is a tuple, divisor should be
-            (w_divisor, h_divisor).
-        size (None | int | tuple[int]): Target size (w, h). Default: None.
-        scale_factor (None | float | tuple[float]): Multiplier for spatial
-            size. Should match input size if it is a tuple and the 2D style is
-            (w_scale_factor, h_scale_factor). Default: None.
-        keep_ratio (bool): Whether to keep the aspect ratio when resizing the
-            image. Default: False.
-        return_scale (bool): Whether to return `w_scale` and `h_scale`.
-        interpolation (str): Interpolation method, accepted values are
-            "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2'
-            backend, "nearest", "bilinear" for 'pillow' backend.
-        out (ndarray): The output destination.
-        backend (str | None): The image resize backend type. Options are `cv2`,
-            `pillow`, `None`. If backend is None, the global imread_backend
-            specified by ``mmcv.use_backend()`` will be used. Default: None.
-
-    Returns:
-        tuple | ndarray: (`resized_img`, `w_scale`, `h_scale`) or
-            `resized_img`.
-    """
-    h, w = img.shape[:2]
-    if size is not None and scale_factor is not None:
-        raise ValueError('only one of size or scale_factor should be defined')
-    elif size is None and scale_factor is None:
-        raise ValueError('one of size or scale_factor should be defined')
-    elif size is not None:
-        size = to_2tuple(size)
-        if keep_ratio:
-            size = rescale_size((w, h), size, return_scale=False)
-    else:
-        size = _scale_size((w, h), scale_factor)
-
-    divisor = to_2tuple(divisor)
-    size = tuple([int(np.ceil(s / d)) * d for s, d in zip(size, divisor)])
-    resized_img, w_scale, h_scale = imresize(
-        img,
-        size,
-        return_scale=True,
-        interpolation=interpolation,
-        out=out,
-        backend=backend)
-    if return_scale:
-        return resized_img, w_scale, h_scale
-    else:
-        return resized_img
-
-
-def imresize_like(img,
-                  dst_img,
-                  return_scale=False,
-                  interpolation='bilinear',
-                  backend=None):
-    """Resize image to the same size of a given image.
-
-    Args:
-        img (ndarray): The input image.
-        dst_img (ndarray): The target image.
-        return_scale (bool): Whether to return `w_scale` and `h_scale`.
-        interpolation (str): Same as :func:`resize`.
-        backend (str | None): Same as :func:`resize`.
-
-    Returns:
-        tuple or ndarray: (`resized_img`, `w_scale`, `h_scale`) or
-            `resized_img`.
-    """
-    h, w = dst_img.shape[:2]
-    return imresize(img, (w, h), return_scale, interpolation, backend=backend)
-
-
-def rescale_size(old_size, scale, return_scale=False):
-    """Calculate the new size to be rescaled to.
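-
-    For example, ``rescale_size((1280, 720), (1333, 800))`` returns ``(1333, 750)``,
-    since the scale factor is ``min(1333 / 1280, 800 / 720) ≈ 1.0414``.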
- - Args: - old_size (tuple[int]): The old size (w, h) of image. - scale (float | tuple[int]): The scaling factor or maximum size. - If it is a float number, then the image will be rescaled by this - factor, else if it is a tuple of 2 integers, then the image will - be rescaled as large as possible within the scale. - return_scale (bool): Whether to return the scaling factor besides the - rescaled image size. - - Returns: - tuple[int]: The new rescaled image size. - """ - w, h = old_size - if isinstance(scale, (float, int)): - if scale <= 0: - raise ValueError(f'Invalid scale {scale}, must be positive.') - scale_factor = scale - elif isinstance(scale, tuple): - max_long_edge = max(scale) - max_short_edge = min(scale) - scale_factor = min(max_long_edge / max(h, w), - max_short_edge / min(h, w)) - else: - raise TypeError( - f'Scale must be a number or tuple of int, but got {type(scale)}') - - new_size = _scale_size((w, h), scale_factor) - - if return_scale: - return new_size, scale_factor - else: - return new_size - - -def imrescale(img, - scale, - return_scale=False, - interpolation='bilinear', - backend=None): - """Resize image while keeping the aspect ratio. - - Args: - img (ndarray): The input image. - scale (float | tuple[int]): The scaling factor or maximum size. - If it is a float number, then the image will be rescaled by this - factor, else if it is a tuple of 2 integers, then the image will - be rescaled as large as possible within the scale. - return_scale (bool): Whether to return the scaling factor besides the - rescaled image. - interpolation (str): Same as :func:`resize`. - backend (str | None): Same as :func:`resize`. - - Returns: - ndarray: The rescaled image. - """ - h, w = img.shape[:2] - new_size, scale_factor = rescale_size((w, h), scale, return_scale=True) - rescaled_img = imresize( - img, new_size, interpolation=interpolation, backend=backend) - if return_scale: - return rescaled_img, scale_factor - else: - return rescaled_img - - -def imflip(img, direction='horizontal'): - """Flip an image horizontally or vertically. - - Args: - img (ndarray): Image to be flipped. - direction (str): The flip direction, either "horizontal" or - "vertical" or "diagonal". - - Returns: - ndarray: The flipped image. - """ - assert direction in ['horizontal', 'vertical', 'diagonal'] - if direction == 'horizontal': - return np.flip(img, axis=1) - elif direction == 'vertical': - return np.flip(img, axis=0) - else: - return np.flip(img, axis=(0, 1)) - - -def imflip_(img, direction='horizontal'): - """Inplace flip an image horizontally or vertically. - - Args: - img (ndarray): Image to be flipped. - direction (str): The flip direction, either "horizontal" or - "vertical" or "diagonal". - - Returns: - ndarray: The flipped image (inplace). - """ - assert direction in ['horizontal', 'vertical', 'diagonal'] - if direction == 'horizontal': - return cv2.flip(img, 1, img) - elif direction == 'vertical': - return cv2.flip(img, 0, img) - else: - return cv2.flip(img, -1, img) - - -def imrotate(img, - angle, - center=None, - scale=1.0, - border_value=0, - interpolation='bilinear', - auto_bound=False): - """Rotate an image. - - Args: - img (ndarray): Image to be rotated. - angle (float): Rotation angle in degrees, positive values mean - clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the rotation in - the source image. If not specified, the center of the image will be - used. - scale (float): Isotropic scale factor. - border_value (int): Border value. 
- interpolation (str): Same as :func:`resize`. - auto_bound (bool): Whether to adjust the image size to cover the whole - rotated image. - - Returns: - ndarray: The rotated image. - """ - if center is not None and auto_bound: - raise ValueError('`auto_bound` conflicts with `center`') - h, w = img.shape[:2] - if center is None: - center = ((w - 1) * 0.5, (h - 1) * 0.5) - assert isinstance(center, tuple) - - matrix = cv2.getRotationMatrix2D(center, -angle, scale) - if auto_bound: - cos = np.abs(matrix[0, 0]) - sin = np.abs(matrix[0, 1]) - new_w = h * sin + w * cos - new_h = h * cos + w * sin - matrix[0, 2] += (new_w - w) * 0.5 - matrix[1, 2] += (new_h - h) * 0.5 - w = int(np.round(new_w)) - h = int(np.round(new_h)) - rotated = cv2.warpAffine( - img, - matrix, (w, h), - flags=cv2_interp_codes[interpolation], - borderValue=border_value) - return rotated - - -def bbox_clip(bboxes, img_shape): - """Clip bboxes to fit the image shape. - - Args: - bboxes (ndarray): Shape (..., 4*k) - img_shape (tuple[int]): (height, width) of the image. - - Returns: - ndarray: Clipped bboxes. - """ - assert bboxes.shape[-1] % 4 == 0 - cmin = np.empty(bboxes.shape[-1], dtype=bboxes.dtype) - cmin[0::2] = img_shape[1] - 1 - cmin[1::2] = img_shape[0] - 1 - clipped_bboxes = np.maximum(np.minimum(bboxes, cmin), 0) - return clipped_bboxes - - -def bbox_scaling(bboxes, scale, clip_shape=None): - """Scaling bboxes w.r.t the box center. - - Args: - bboxes (ndarray): Shape(..., 4). - scale (float): Scaling factor. - clip_shape (tuple[int], optional): If specified, bboxes that exceed the - boundary will be clipped according to the given shape (h, w). - - Returns: - ndarray: Scaled bboxes. - """ - if float(scale) == 1.0: - scaled_bboxes = bboxes.copy() - else: - w = bboxes[..., 2] - bboxes[..., 0] + 1 - h = bboxes[..., 3] - bboxes[..., 1] + 1 - dw = (w * (scale - 1)) * 0.5 - dh = (h * (scale - 1)) * 0.5 - scaled_bboxes = bboxes + np.stack((-dw, -dh, dw, dh), axis=-1) - if clip_shape is not None: - return bbox_clip(scaled_bboxes, clip_shape) - else: - return scaled_bboxes - - -def imcrop(img, bboxes, scale=1.0, pad_fill=None): - """Crop image patches. - - 3 steps: scale the bboxes -> clip bboxes -> crop and pad. - - Args: - img (ndarray): Image to be cropped. - bboxes (ndarray): Shape (k, 4) or (4, ), location of cropped bboxes. - scale (float, optional): Scale ratio of bboxes, the default value - 1.0 means no padding. - pad_fill (Number | list[Number]): Value to be filled for padding. - Default: None, which means no padding. - - Returns: - list[ndarray] | ndarray: The cropped image patches. - """ - chn = 1 if img.ndim == 2 else img.shape[2] - if pad_fill is not None: - if isinstance(pad_fill, (int, float)): - pad_fill = [pad_fill for _ in range(chn)] - assert len(pad_fill) == chn - - _bboxes = bboxes[None, ...] if bboxes.ndim == 1 else bboxes - scaled_bboxes = bbox_scaling(_bboxes, scale).astype(np.int32) - clipped_bbox = bbox_clip(scaled_bboxes, img.shape) - - patches = [] - for i in range(clipped_bbox.shape[0]): - x1, y1, x2, y2 = tuple(clipped_bbox[i, :]) - if pad_fill is None: - patch = img[y1:y2 + 1, x1:x2 + 1, ...] 
- else: - _x1, _y1, _x2, _y2 = tuple(scaled_bboxes[i, :]) - if chn == 1: - patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1) - else: - patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1, chn) - patch = np.array( - pad_fill, dtype=img.dtype) * np.ones( - patch_shape, dtype=img.dtype) - x_start = 0 if _x1 >= 0 else -_x1 - y_start = 0 if _y1 >= 0 else -_y1 - w = x2 - x1 + 1 - h = y2 - y1 + 1 - patch[y_start:y_start + h, x_start:x_start + w, - ...] = img[y1:y1 + h, x1:x1 + w, ...] - patches.append(patch) - - if bboxes.ndim == 1: - return patches[0] - else: - return patches - - -def impad(img, - *, - shape=None, - padding=None, - pad_val=0, - padding_mode='constant'): - """Pad the given image to a certain shape or pad on all sides with - specified padding mode and padding value. - - Args: - img (ndarray): Image to be padded. - shape (tuple[int]): Expected padding shape (h, w). Default: None. - padding (int or tuple[int]): Padding on each border. If a single int is - provided this is used to pad all borders. If tuple of length 2 is - provided this is the padding on left/right and top/bottom - respectively. If a tuple of length 4 is provided this is the - padding for the left, top, right and bottom borders respectively. - Default: None. Note that `shape` and `padding` can not be both - set. - pad_val (Number | Sequence[Number]): Values to be filled in padding - areas when padding_mode is 'constant'. Default: 0. - padding_mode (str): Type of padding. Should be: constant, edge, - reflect or symmetric. Default: constant. - - - constant: pads with a constant value, this value is specified - with pad_val. - - edge: pads with the last value at the edge of the image. - - reflect: pads with reflection of image without repeating the - last value on the edge. For example, padding [1, 2, 3, 4] - with 2 elements on both sides in reflect mode will result - in [3, 2, 1, 2, 3, 4, 3, 2]. - - symmetric: pads with reflection of image repeating the last - value on the edge. For example, padding [1, 2, 3, 4] with - 2 elements on both sides in symmetric mode will result in - [2, 1, 1, 2, 3, 4, 4, 3] - - Returns: - ndarray: The padded image. - """ - - assert (shape is not None) ^ (padding is not None) - if shape is not None: - padding = (0, 0, shape[1] - img.shape[1], shape[0] - img.shape[0]) - - # check pad_val - if isinstance(pad_val, tuple): - assert len(pad_val) == img.shape[-1] - elif not isinstance(pad_val, numbers.Number): - raise TypeError('pad_val must be a int or a tuple. ' - f'But received {type(pad_val)}') - - # check padding - if isinstance(padding, tuple) and len(padding) in [2, 4]: - if len(padding) == 2: - padding = (padding[0], padding[1], padding[0], padding[1]) - elif isinstance(padding, numbers.Number): - padding = (padding, padding, padding, padding) - else: - raise ValueError('Padding must be a int or a 2, or 4 element tuple.' - f'But received {padding}') - - # check padding mode - assert padding_mode in ['constant', 'edge', 'reflect', 'symmetric'] - - border_type = { - 'constant': cv2.BORDER_CONSTANT, - 'edge': cv2.BORDER_REPLICATE, - 'reflect': cv2.BORDER_REFLECT_101, - 'symmetric': cv2.BORDER_REFLECT - } - img = cv2.copyMakeBorder( - img, - padding[1], - padding[3], - padding[0], - padding[2], - border_type[padding_mode], - value=pad_val) - - return img - - -def impad_to_multiple(img, divisor, pad_val=0): - """Pad an image to ensure each edge to be multiple to some number. - - Args: - img (ndarray): Image to be padded. - divisor (int): Padded image edges will be multiple to divisor. 
- pad_val (Number | Sequence[Number]): Same as :func:`impad`. - - Returns: - ndarray: The padded image. - """ - pad_h = int(np.ceil(img.shape[0] / divisor)) * divisor - pad_w = int(np.ceil(img.shape[1] / divisor)) * divisor - return impad(img, shape=(pad_h, pad_w), pad_val=pad_val) - - -def cutout(img, shape, pad_val=0): - """Randomly cut out a rectangle from the original img. - - Args: - img (ndarray): Image to be cutout. - shape (int | tuple[int]): Expected cutout shape (h, w). If given as a - int, the value will be used for both h and w. - pad_val (int | float | tuple[int | float]): Values to be filled in the - cut area. Defaults to 0. - - Returns: - ndarray: The cutout image. - """ - - channels = 1 if img.ndim == 2 else img.shape[2] - if isinstance(shape, int): - cut_h, cut_w = shape, shape - else: - assert isinstance(shape, tuple) and len(shape) == 2, \ - f'shape must be a int or a tuple with length 2, but got type ' \ - f'{type(shape)} instead.' - cut_h, cut_w = shape - if isinstance(pad_val, (int, float)): - pad_val = tuple([pad_val] * channels) - elif isinstance(pad_val, tuple): - assert len(pad_val) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(pad_val), channels) - else: - raise TypeError(f'Invalid type {type(pad_val)} for `pad_val`') - - img_h, img_w = img.shape[:2] - y0 = np.random.uniform(img_h) - x0 = np.random.uniform(img_w) - - y1 = int(max(0, y0 - cut_h / 2.)) - x1 = int(max(0, x0 - cut_w / 2.)) - y2 = min(img_h, y1 + cut_h) - x2 = min(img_w, x1 + cut_w) - - if img.ndim == 2: - patch_shape = (y2 - y1, x2 - x1) - else: - patch_shape = (y2 - y1, x2 - x1, channels) - - img_cutout = img.copy() - patch = np.array( - pad_val, dtype=img.dtype) * np.ones( - patch_shape, dtype=img.dtype) - img_cutout[y1:y2, x1:x2, ...] = patch - - return img_cutout - - -def _get_shear_matrix(magnitude, direction='horizontal'): - """Generate the shear matrix for transformation. - - Args: - magnitude (int | float): The magnitude used for shear. - direction (str): The flip direction, either "horizontal" - or "vertical". - - Returns: - ndarray: The shear matrix with dtype float32. - """ - if direction == 'horizontal': - shear_matrix = np.float32([[1, magnitude, 0], [0, 1, 0]]) - elif direction == 'vertical': - shear_matrix = np.float32([[1, 0, 0], [magnitude, 1, 0]]) - return shear_matrix - - -def imshear(img, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear an image. - - Args: - img (ndarray): Image to be sheared with format (h, w) - or (h, w, c). - magnitude (int | float): The magnitude used for shear. - direction (str): The flip direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as :func:`resize`. - - Returns: - ndarray: The sheared image. - """ - assert direction in ['horizontal', - 'vertical'], f'Invalid direction: {direction}' - height, width = img.shape[:2] - if img.ndim == 2: - channels = 1 - elif img.ndim == 3: - channels = img.shape[-1] - if isinstance(border_value, int): - border_value = tuple([border_value] * channels) - elif isinstance(border_value, tuple): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. 
Found {} vs {}'.format( - len(border_value), channels) - else: - raise ValueError( - f'Invalid type {type(border_value)} for `border_value`') - shear_matrix = _get_shear_matrix(magnitude, direction) - sheared = cv2.warpAffine( - img, - shear_matrix, - (width, height), - # Note case when the number elements in `border_value` - # greater than 3 (e.g. shearing masks whose channels large - # than 3) will raise TypeError in `cv2.warpAffine`. - # Here simply slice the first 3 values in `border_value`. - borderValue=border_value[:3], - flags=cv2_interp_codes[interpolation]) - return sheared - - -def _get_translate_matrix(offset, direction='horizontal'): - """Generate the translate matrix. - - Args: - offset (int | float): The offset used for translate. - direction (str): The translate direction, either - "horizontal" or "vertical". - - Returns: - ndarray: The translate matrix with dtype float32. - """ - if direction == 'horizontal': - translate_matrix = np.float32([[1, 0, offset], [0, 1, 0]]) - elif direction == 'vertical': - translate_matrix = np.float32([[1, 0, 0], [0, 1, offset]]) - return translate_matrix - - -def imtranslate(img, - offset, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Translate an image. - - Args: - img (ndarray): Image to be translated with format - (h, w) or (h, w, c). - offset (int | float): The offset used for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as :func:`resize`. - - Returns: - ndarray: The translated image. - """ - assert direction in ['horizontal', - 'vertical'], f'Invalid direction: {direction}' - height, width = img.shape[:2] - if img.ndim == 2: - channels = 1 - elif img.ndim == 3: - channels = img.shape[-1] - if isinstance(border_value, int): - border_value = tuple([border_value] * channels) - elif isinstance(border_value, tuple): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(border_value), channels) - else: - raise ValueError( - f'Invalid type {type(border_value)} for `border_value`.') - translate_matrix = _get_translate_matrix(offset, direction) - translated = cv2.warpAffine( - img, - translate_matrix, - (width, height), - # Note case when the number elements in `border_value` - # greater than 3 (e.g. translating masks whose channels - # large than 3) will raise TypeError in `cv2.warpAffine`. - # Here simply slice the first 3 values in `border_value`. - borderValue=border_value[:3], - flags=cv2_interp_codes[interpolation]) - return translated diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/psp_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/psp_head.py deleted file mode 100644 index b5f1e71c70c3a20f4007c263ec471a87bb214a48..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/psp_head.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class PPM(nn.ModuleList): - """Pooling Pyramid Module used in PSPNet. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. 
- in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. - align_corners (bool): align_corners argument of F.interpolate. - """ - - def __init__(self, pool_scales, in_channels, channels, conv_cfg, norm_cfg, - act_cfg, align_corners): - super(PPM, self).__init__() - self.pool_scales = pool_scales - self.align_corners = align_corners - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - for pool_scale in pool_scales: - self.append( - nn.Sequential( - nn.AdaptiveAvgPool2d(pool_scale), - ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg))) - - def forward(self, x): - """Forward function.""" - ppm_outs = [] - for ppm in self: - ppm_out = ppm(x) - upsampled_ppm_out = resize( - ppm_out, - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - ppm_outs.append(upsampled_ppm_out) - return ppm_outs - - -@HEADS.register_module() -class PSPHead(BaseDecodeHead): - """Pyramid Scene Parsing Network. - - This head is the implementation of - `PSPNet `_. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. Default: (1, 2, 3, 6). - """ - - def __init__(self, pool_scales=(1, 2, 3, 6), **kwargs): - super(PSPHead, self).__init__(**kwargs) - assert isinstance(pool_scales, (list, tuple)) - self.pool_scales = pool_scales - self.psp_modules = PPM( - self.pool_scales, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - self.bottleneck = ConvModule( - self.in_channels + len(pool_scales) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - psp_outs = [x] - psp_outs.extend(self.psp_modules(x)) - psp_outs = torch.cat(psp_outs, dim=1) - output = self.bottleneck(psp_outs) - output = self.cls_seg(output) - return output diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/lib.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/lib.py deleted file mode 100644 index d264fcfd795741c42a65ad27ec39d88de30bbfc3..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/lib.py +++ /dev/null @@ -1,324 +0,0 @@ -"""Functions for loading dynamic libraries. - -These extend and correct ctypes functions. 
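-
-For orientation, a minimal sketch of the plain-ctypes loading that these
-helpers wrap and extend (an editor's illustration; the C math library is an
-arbitrary example, and find_library may return None on some platforms):
-
-    import ctypes
-    import ctypes.util
-
-    path = ctypes.util.find_library('m')    # e.g. 'libm.so.6' on Linux
-    libm = ctypes.CDLL(path)
-    libm.cos.restype = ctypes.c_double      # ctypes assumes int otherwise
-    libm.cos.argtypes = [ctypes.c_double]
-    assert libm.cos(0.0) == 1.0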
-""" - -import os -import re -import sys - -import ctypes -import ctypes.util - -import pyglet - -_debug_lib = pyglet.options['debug_lib'] -_debug_trace = pyglet.options['debug_trace'] - -_is_pyglet_doc_run = getattr(sys, "is_pyglet_doc_run", False) - -if pyglet.options['search_local_libs']: - script_path = pyglet.resource.get_script_home() - cwd = os.getcwd() - _local_lib_paths = [script_path, os.path.join(script_path, 'lib'), os.path.join(cwd, 'lib')] - if pyglet.compat_platform == 'win32': - os.environ["PATH"] += os.pathsep + os.pathsep.join(_local_lib_paths) -else: - _local_lib_paths = None - - -class _TraceFunction: - def __init__(self, func): - self.__dict__['_func'] = func - - def __str__(self): - return self._func.__name__ - - def __call__(self, *args, **kwargs): - return self._func(*args, **kwargs) - - def __getattr__(self, name): - return getattr(self._func, name) - - def __setattr__(self, name, value): - setattr(self._func, name, value) - - -class _TraceLibrary: - def __init__(self, library): - self._library = library - print(library) - - def __getattr__(self, name): - func = getattr(self._library, name) - f = _TraceFunction(func) - return f - - -if _is_pyglet_doc_run: - class LibraryMock: - """Mock library used when generating documentation.""" - def __getattr__(self, name): - return LibraryMock() - - def __setattr__(self, name, value): - pass - - def __call__(self, *args, **kwargs): - return LibraryMock() - - def __rshift__(self, other): - return 0 - - -class LibraryLoader: - - platform = pyglet.compat_platform - # this is only for library loading, don't include it in pyglet.platform - if platform == 'cygwin': - platform = 'win32' - - def load_library(self, *names, **kwargs): - """Find and load a library. - - More than one name can be specified, they will be tried in order. - Platform-specific library names (given as kwargs) are tried first. - - Raises ImportError if library is not found. 
- """ - if _is_pyglet_doc_run: - return LibraryMock() - - if 'framework' in kwargs and self.platform == 'darwin': - return self.load_framework(kwargs['framework']) - - if not names: - raise ImportError("No library name specified") - - platform_names = kwargs.get(self.platform, []) - if isinstance(platform_names, str): - platform_names = [platform_names] - elif type(platform_names) is tuple: - platform_names = list(platform_names) - - if self.platform.startswith('linux'): - for name in names: - libname = self.find_library(name) - platform_names.append(libname or f'lib{name}.so') - - platform_names.extend(names) - for name in platform_names: - try: - lib = ctypes.cdll.LoadLibrary(name) - if _debug_lib: - print(name, self.find_library(name)) - if _debug_trace: - lib = _TraceLibrary(lib) - return lib - except OSError as o: - path = self.find_library(name) - if path: - try: - lib = ctypes.cdll.LoadLibrary(path) - if _debug_lib: - print(path) - if _debug_trace: - lib = _TraceLibrary(lib) - return lib - except OSError: - pass - elif self.platform == "win32" and o.winerror != 126: - if _debug_lib: - print(f"Unexpected error loading library {name}: {str(o)}") - - raise ImportError(f'Library "{names[0]}" not found.') - - def find_library(self, name): - return ctypes.util.find_library(name) - - @staticmethod - def load_framework(name): - raise RuntimeError("Can't load framework on this platform.") - - -class MachOLibraryLoader(LibraryLoader): - def __init__(self): - if 'LD_LIBRARY_PATH' in os.environ: - self.ld_library_path = os.environ['LD_LIBRARY_PATH'].split(':') - else: - self.ld_library_path = [] - - if _local_lib_paths: - # search first for local libs - self.ld_library_path = _local_lib_paths + self.ld_library_path - os.environ['LD_LIBRARY_PATH'] = ':'.join(self.ld_library_path) - - if 'DYLD_LIBRARY_PATH' in os.environ: - self.dyld_library_path = os.environ['DYLD_LIBRARY_PATH'].split(':') - else: - self.dyld_library_path = [] - - if 'DYLD_FALLBACK_LIBRARY_PATH' in os.environ: - self.dyld_fallback_library_path = os.environ['DYLD_FALLBACK_LIBRARY_PATH'].split(':') - else: - self.dyld_fallback_library_path = [os.path.expanduser('~/lib'), '/usr/local/lib', '/usr/lib'] - - def find_library(self, path): - """Implements the dylib search as specified in Apple documentation: - - http://developer.apple.com/library/content/documentation/DeveloperTools/Conceptual/DynamicLibraries/100-Articles/DynamicLibraryUsageGuidelines.html - - Before commencing the standard search, the method first checks - the bundle's ``Frameworks`` directory if the application is running - within a bundle (OS X .app). 
- """ - - libname = os.path.basename(path) - search_path = [] - - if '.dylib' not in libname: - libname = 'lib' + libname + '.dylib' - - # py2app support - if getattr(sys, 'frozen', None) == 'macosx_app' and 'RESOURCEPATH' in os.environ: - search_path.append(os.path.join(os.environ['RESOURCEPATH'], - '..', - 'Frameworks', - libname)) - - # conda support - if os.environ.get('CONDA_PREFIX', False): - search_path.append(os.path.join(os.environ['CONDA_PREFIX'], 'lib', libname)) - - # pyinstaller.py sets sys.frozen to True, and puts dylibs in - # Contents/macOS, which path pyinstaller puts in sys._MEIPASS - if getattr(sys, 'frozen', False) and getattr(sys, '_MEIPASS', None): - meipass = getattr(sys, '_MEIPASS') - search_path.append(os.path.join(meipass, libname)) - - # conda support - if os.environ.get('CONDA_PREFIX', False): - search_path.append(os.path.join(os.environ['CONDA_PREFIX'], 'lib', libname)) - - if '/' in path: - search_path.extend([os.path.join(p, libname) for p in self.dyld_library_path]) - search_path.append(path) - search_path.extend([os.path.join(p, libname) for p in self.dyld_fallback_library_path]) - else: - search_path.extend([os.path.join(p, libname) for p in self.ld_library_path]) - search_path.extend([os.path.join(p, libname) for p in self.dyld_library_path]) - search_path.append(path) - search_path.extend([os.path.join(p, libname) for p in self.dyld_fallback_library_path]) - - for path in search_path: - if os.path.exists(path): - return path - - return None - - @staticmethod - def load_framework(name): - path = ctypes.util.find_library(name) - - # Hack for compatibility with macOS > 11.0 - if path is None: - frameworks = { - 'AGL': '/System/Library/Frameworks/AGL.framework/AGL', - 'IOKit': '/System/Library/Frameworks/IOKit.framework/IOKit', - 'OpenAL': '/System/Library/Frameworks/OpenAL.framework/OpenAL', - 'OpenGL': '/System/Library/Frameworks/OpenGL.framework/OpenGL' - } - path = frameworks.get(name) - - if path: - lib = ctypes.cdll.LoadLibrary(path) - if _debug_lib: - print(path) - if _debug_trace: - lib = _TraceLibrary(lib) - return lib - - raise ImportError(f"Can't find framework {name}.") - - -class LinuxLibraryLoader(LibraryLoader): - _ld_so_cache = None - _local_libs_cache = None - - @staticmethod - def _find_libs(directories): - cache = {} - lib_re = re.compile(r'lib(.*)\.so(?:$|\.)') - for directory in directories: - try: - for file in os.listdir(directory): - match = lib_re.match(file) - if match: - # Index by filename - path = os.path.join(directory, file) - if file not in cache: - cache[file] = path - # Index by library name - library = match.group(1) - if library not in cache: - cache[library] = path - except OSError: - pass - return cache - - def _create_ld_so_cache(self): - # Recreate search path followed by ld.so. This is going to be - # slow to build, and incorrect (ld.so uses ld.so.cache, which may - # not be up-to-date). Used only as fallback for distros without - # /sbin/ldconfig. - # - # We assume the DT_RPATH and DT_RUNPATH binary sections are omitted. 
- - directories = [] - try: - directories.extend(os.environ['LD_LIBRARY_PATH'].split(':')) - except KeyError: - pass - - try: - with open('/etc/ld.so.conf') as fid: - directories.extend([directory.strip() for directory in fid]) - except IOError: - pass - - directories.extend(['/lib', '/usr/lib']) - - self._ld_so_cache = self._find_libs(directories) - - def find_library(self, path): - - # search first for local libs - if _local_lib_paths: - if not self._local_libs_cache: - self._local_libs_cache = self._find_libs(_local_lib_paths) - if path in self._local_libs_cache: - return self._local_libs_cache[path] - - # ctypes tries ldconfig, gcc and objdump. If none of these are - # present, we implement the ld-linux.so search path as described in - # the man page. - - result = ctypes.util.find_library(path) - - if result: - return result - - if self._ld_so_cache is None: - self._create_ld_so_cache() - - return self._ld_so_cache.get(path) - - -if pyglet.compat_platform == 'darwin': - loader = MachOLibraryLoader() -elif pyglet.compat_platform.startswith('linux'): - loader = LinuxLibraryLoader() -else: - loader = LibraryLoader() - -load_library = loader.load_library diff --git a/spaces/afasdfas/cringe_model/README.md b/spaces/afasdfas/cringe_model/README.md deleted file mode 100644 index a04108af40529f07c56cc8be3c1b18dd0e751c4e..0000000000000000000000000000000000000000 --- a/spaces/afasdfas/cringe_model/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Cringe Model -emoji: 📈 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/akhaliq/deeplab2/tracker/__init__.py b/spaces/akhaliq/deeplab2/tracker/__init__.py deleted file mode 100644 index 35e4ce02ff422f3aa84ab644b88d65b13e0cbc03..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/tracker/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- diff --git a/spaces/ali-ghamdan/gfp-Gans/gfpgan/models/__init__.py b/spaces/ali-ghamdan/gfp-Gans/gfpgan/models/__init__.py deleted file mode 100644 index 6afad57a3794b867dabbdb617a16355a24d6a8b3..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/gfp-Gans/gfpgan/models/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -import importlib -from basicsr.utils import scandir -from os import path as osp - -# automatically scan and import model modules for registry -# scan all the files that end with '_model.py' under the model folder -model_folder = osp.dirname(osp.abspath(__file__)) -model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')] -# import all the model modules -_model_modules = [importlib.import_module(f'gfpgan.models.{file_name}') for file_name in model_filenames] diff --git a/spaces/amankishore/sjc/my/utils/tqdm.py b/spaces/amankishore/sjc/my/utils/tqdm.py deleted file mode 100644 index 774f2aff7dc4c2956a3b80daed52b0c6ad97d98b..0000000000000000000000000000000000000000 --- a/spaces/amankishore/sjc/my/utils/tqdm.py +++ /dev/null @@ -1,10 +0,0 @@ -import os -from tqdm import tqdm as orig_tqdm - - -def tqdm(*args, **kwargs): - is_remote = bool(os.environ.get("IS_REMOTE", False)) - if is_remote: - f = open(os.devnull, "w") - kwargs.update({"file": f}) - return orig_tqdm(*args, **kwargs) diff --git a/spaces/ardha27/rvc-models/infer_pack/commons.py b/spaces/ardha27/rvc-models/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/ardha27/rvc-models/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, 
min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/ardha27/rvc-models/infer_pack/models_onnx.py b/spaces/ardha27/rvc-models/infer_pack/models_onnx.py deleted file mode 100644 index 3cdae2f7f8591a1e43b1d8520baa37b7e9744d72..0000000000000000000000000000000000000000 --- a/spaces/ardha27/rvc-models/infer_pack/models_onnx.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack 
import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - 
gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - 
sine_amp = 0.1, noise_std = 0.003,
-            voiced_threshold = 0,
-            flag_for_pulse=False)
-    samp_rate: sampling rate in Hz
-    harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine-waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
-    Note: when flag_for_pulse is True, the first time step of a voiced
-        segment is always sin(np.pi) or cos(0)
-    """
-
-    def __init__(
-        self,
-        samp_rate,
-        harmonic_num=0,
-        sine_amp=0.1,
-        noise_std=0.003,
-        voiced_threshold=0,
-        flag_for_pulse=False,
-    ):
-        super(SineGen, self).__init__()
-        self.sine_amp = sine_amp
-        self.noise_std = noise_std
-        self.harmonic_num = harmonic_num
-        self.dim = self.harmonic_num + 1
-        self.sampling_rate = samp_rate
-        self.voiced_threshold = voiced_threshold
-
-    def _f02uv(self, f0):
-        # generate uv signal
-        uv = torch.ones_like(f0)
-        uv = uv * (f0 > self.voiced_threshold)
-        return uv
-
-    def forward(self, f0, upp):
-        """sine_tensor, uv = forward(f0)
-        input F0: tensor(batchsize=1, length, dim=1)
-                  f0 for unvoiced steps should be 0
-        output sine_tensor: tensor(batchsize=1, length, dim)
-        output uv: tensor(batchsize=1, length, 1)
-        """
-        with torch.no_grad():
-            f0 = f0[:, None].transpose(1, 2)
-            f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
-            # fundamental component
-            f0_buf[:, :, 0] = f0[:, :, 0]
-            for idx in np.arange(self.harmonic_num):
-                f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
-                    idx + 2
-                )  # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            # the "% 1" keeps only the fractional phase; it means the products
-            # with the harmonic numbers cannot be optimized away afterwards
-            rad_values = (f0_buf / self.sampling_rate) % 1
-            rand_ini = torch.rand(
-                f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
-            )
-            rand_ini[:, 0] = 0
-            rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            # applying "% 1" here would make the cumsum below impossible to
-            # optimize any further
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1
-            tmp_over_one *= upp
-            tmp_over_one = F.interpolate(
-                tmp_over_one.transpose(2, 1),
-                scale_factor=upp,
-                mode="linear",
-                align_corners=True,
-            ).transpose(2, 1)
-            rad_values = F.interpolate(
-                rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
-            ).transpose(
-                2, 1
-            )
-            tmp_over_one %= 1
-            tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
-            cumsum_shift = torch.zeros_like(rad_values)
-            cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
-            sine_waves = torch.sin(
-                torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
-            )
-            sine_waves = sine_waves * self.sine_amp
-            uv = self._f02uv(f0)
-            uv = F.interpolate(
-                uv.transpose(2, 1), scale_factor=upp, mode="nearest"
-            ).transpose(2, 1)
-            noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
-            noise = noise_amp * torch.randn_like(sine_waves)
-            sine_waves = sine_waves * uv + noise
-        return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
-    """SourceModule for hn-nsf
-    SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
-                 add_noise_std=0.003, voiced_threshod=0)
-    sampling_rate: sampling_rate in Hz
-    harmonic_num: number of harmonic above F0 (default: 0)
-    sine_amp: amplitude of sine source signal (default: 0.1)
-    add_noise_std: std of additive Gaussian noise (default: 0.003)
-        note that amplitude of noise in unvoiced is decided
-        by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
-    Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
-    F0_sampled (batchsize, length, 1)
-    Sine_source (batchsize, length, 1)
noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - 
"48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - 
self.gin_channels = gin_channels
-        # self.hop_length = hop_length#
-        self.spk_embed_dim = spk_embed_dim
-        self.enc_p = TextEncoder256Sim(
-            inter_channels,
-            hidden_channels,
-            filter_channels,
-            n_heads,
-            n_layers,
-            kernel_size,
-            p_dropout,
-        )
-        self.dec = GeneratorNSF(
-            inter_channels,
-            resblock,
-            resblock_kernel_sizes,
-            resblock_dilation_sizes,
-            upsample_rates,
-            upsample_initial_channel,
-            upsample_kernel_sizes,
-            gin_channels=gin_channels,
-            is_half=kwargs["is_half"],
-        )
-
-        self.flow = ResidualCouplingBlock(
-            inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
-        )
-        self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
-        print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
-    def remove_weight_norm(self):
-        self.dec.remove_weight_norm()
-        self.flow.remove_weight_norm()
-        self.enc_q.remove_weight_norm()
-
-    def forward(
-        self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
-    ):  # y (the spec) is not needed anymore
-        g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1)  # [b, 256, 1]; the 1 is t, broadcast
-        x, x_mask = self.enc_p(phone, pitch, phone_lengths)
-        x = self.flow(x, x_mask, g=g, reverse=True)
-        o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
-        return o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
-    def __init__(self, use_spectral_norm=False):
-        super(MultiPeriodDiscriminator, self).__init__()
-        periods = [2, 3, 5, 7, 11, 17]
-        # periods = [3, 5, 7, 11, 17, 23, 37]
-
-        discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
-        discs = discs + [
-            DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
-        ]
-        self.discriminators = nn.ModuleList(discs)
-
-    def forward(self, y, y_hat):
-        y_d_rs = []
-        y_d_gs = []
-        fmap_rs = []
-        fmap_gs = []
-        for i, d in enumerate(self.discriminators):
-            y_d_r, fmap_r = d(y)
-            y_d_g, fmap_g = d(y_hat)
-            # for j in range(len(fmap_r)):
-            #     print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
-            y_d_rs.append(y_d_r)
-            y_d_gs.append(y_d_g)
-            fmap_rs.append(fmap_r)
-            fmap_gs.append(fmap_g)
-
-        return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
-    def __init__(self, use_spectral_norm=False):
-        super(DiscriminatorS, self).__init__()
-        norm_f = weight_norm if use_spectral_norm == False else spectral_norm
-        self.convs = nn.ModuleList(
-            [
-                norm_f(Conv1d(1, 16, 15, 1, padding=7)),
-                norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
-                norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
-                norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
-                norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
-                norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
-            ]
-        )
-        self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
-    def forward(self, x):
-        fmap = []
-
-        for l in self.convs:
-            x = l(x)
-            x = F.leaky_relu(x, modules.LRELU_SLOPE)
-            fmap.append(x)
-        x = self.conv_post(x)
-        fmap.append(x)
-        x = torch.flatten(x, 1, -1)
-
-        return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
-    def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
-        super(DiscriminatorP, self).__init__()
-        self.period = period
-        self.use_spectral_norm = use_spectral_norm
-        norm_f = weight_norm if use_spectral_norm == False else spectral_norm
-        self.convs = nn.ModuleList(
-            [
-                norm_f(
-                    Conv2d(
-                        1,
-                        32,
-                        (kernel_size, 1),
-                        (stride, 1),
-                        padding=(get_padding(kernel_size, 1), 0),
-                    )
-                ),
-                norm_f(
-                    Conv2d(
-                        32,
-                        128,
-                        (kernel_size, 1),
-                        (stride, 1),
-                        padding=(get_padding(kernel_size, 1), 0),
-                    )
-                ),
-                norm_f(
-                    Conv2d(
-                        128,
-                        512,
(kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/data_tests/test_loader.py b/spaces/artificialguybr/video-dubbing/TTS/tests/data_tests/test_loader.py deleted file mode 100644 index cbd98fc0c5cd27344699a5166bf67998d44886ae..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/tests/data_tests/test_loader.py +++ /dev/null @@ -1,242 +0,0 @@ -import os -import shutil -import unittest - -import numpy as np -import torch -from torch.utils.data import DataLoader - -from tests import get_tests_data_path, get_tests_output_path -from TTS.tts.configs.shared_configs import BaseDatasetConfig, BaseTTSConfig -from TTS.tts.datasets import TTSDataset, load_tts_samples -from TTS.tts.utils.text.tokenizer import TTSTokenizer -from TTS.utils.audio import AudioProcessor - -# pylint: disable=unused-variable - -OUTPATH = os.path.join(get_tests_output_path(), "loader_tests/") -os.makedirs(OUTPATH, exist_ok=True) - -# create a dummy config for testing data loaders. 
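-# (c.r set below is the decoder reduction factor; the tests forward it to
-# TTSDataset as outputs_per_step.)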
-c = BaseTTSConfig(text_cleaner="english_cleaners", num_loader_workers=0, batch_size=2, use_noise_augment=False) -c.r = 5 -c.data_path = os.path.join(get_tests_data_path(), "ljspeech/") -ok_ljspeech = os.path.exists(c.data_path) - -dataset_config = BaseDatasetConfig( - formatter="ljspeech_test", # ljspeech_test to multi-speaker - meta_file_train="metadata.csv", - meta_file_val=None, - path=c.data_path, - language="en", -) - -DATA_EXIST = True -if not os.path.exists(c.data_path): - DATA_EXIST = False - -print(" > Dynamic data loader test: {}".format(DATA_EXIST)) - - -class TestTTSDataset(unittest.TestCase): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.max_loader_iter = 4 - self.ap = AudioProcessor(**c.audio) - - def _create_dataloader(self, batch_size, r, bgs, start_by_longest=False): - # load dataset - meta_data_train, meta_data_eval = load_tts_samples(dataset_config, eval_split=True, eval_split_size=0.2) - items = meta_data_train + meta_data_eval - - tokenizer, _ = TTSTokenizer.init_from_config(c) - dataset = TTSDataset( - outputs_per_step=r, - compute_linear_spec=True, - return_wav=True, - tokenizer=tokenizer, - ap=self.ap, - samples=items, - batch_group_size=bgs, - min_text_len=c.min_text_len, - max_text_len=c.max_text_len, - min_audio_len=c.min_audio_len, - max_audio_len=c.max_audio_len, - start_by_longest=start_by_longest, - ) - dataloader = DataLoader( - dataset, - batch_size=batch_size, - shuffle=False, - collate_fn=dataset.collate_fn, - drop_last=True, - num_workers=c.num_loader_workers, - ) - return dataloader, dataset - - def test_loader(self): - if ok_ljspeech: - dataloader, dataset = self._create_dataloader(1, 1, 0) - - for i, data in enumerate(dataloader): - if i == self.max_loader_iter: - break - text_input = data["token_id"] - _ = data["token_id_lengths"] - speaker_name = data["speaker_names"] - linear_input = data["linear"] - mel_input = data["mel"] - mel_lengths = data["mel_lengths"] - _ = data["stop_targets"] - _ = data["item_idxs"] - wavs = data["waveform"] - - neg_values = text_input[text_input < 0] - check_count = len(neg_values) - - # check basic conditions - self.assertEqual(check_count, 0) - self.assertEqual(linear_input.shape[0], mel_input.shape[0], c.batch_size) - self.assertEqual(linear_input.shape[2], self.ap.fft_size // 2 + 1) - self.assertEqual(mel_input.shape[2], c.audio["num_mels"]) - self.assertEqual(wavs.shape[1], mel_input.shape[1] * c.audio.hop_length) - self.assertIsInstance(speaker_name[0], str) - - # make sure that the computed mels and the waveform match and correctly computed - mel_new = self.ap.melspectrogram(wavs[0].squeeze().numpy()) - # remove padding in mel-spectrogram - mel_dataloader = mel_input[0].T.numpy()[:, : mel_lengths[0]] - # guarantee that both mel-spectrograms have the same size and that we will remove waveform padding - mel_new = mel_new[:, : mel_lengths[0]] - ignore_seg = -(1 + c.audio.win_length // c.audio.hop_length) - mel_diff = (mel_new[:, : mel_input.shape[1]] - mel_input[0].T.numpy())[:, 0:ignore_seg] - self.assertLess(abs(mel_diff.sum()), 1e-5) - - # check normalization ranges - if self.ap.symmetric_norm: - self.assertLessEqual(mel_input.max(), self.ap.max_norm) - self.assertGreaterEqual( - mel_input.min(), -self.ap.max_norm # pylint: disable=invalid-unary-operand-type - ) - self.assertLess(mel_input.min(), 0) - else: - self.assertLessEqual(mel_input.max(), self.ap.max_norm) - self.assertGreaterEqual(mel_input.min(), 0) - - def test_batch_group_shuffle(self): - if ok_ljspeech: - 
dataloader, dataset = self._create_dataloader(2, c.r, 16)
-            last_length = 0
-            frames = dataset.samples
-            for i, data in enumerate(dataloader):
-                if i == self.max_loader_iter:
-                    break
-                mel_lengths = data["mel_lengths"]
-                avg_length = mel_lengths.numpy().mean()
-            dataloader.dataset.preprocess_samples()
-            is_items_reordered = False
-            for idx, item in enumerate(dataloader.dataset.samples):
-                if item != frames[idx]:
-                    is_items_reordered = True
-                    break
-            self.assertGreaterEqual(avg_length, last_length)
-            self.assertTrue(is_items_reordered)
-
-    def test_start_by_longest(self):
-        """Test start_by_longest option.
-
-        The first item of the first batch must be longer than all the other items.
-        """
-        if ok_ljspeech:
-            dataloader, _ = self._create_dataloader(2, c.r, 0, True)
-            dataloader.dataset.preprocess_samples()
-            for i, data in enumerate(dataloader):
-                if i == self.max_loader_iter:
-                    break
-                mel_lengths = data["mel_lengths"]
-                if i == 0:
-                    max_len = mel_lengths[0]
-                print(mel_lengths)
-                self.assertTrue(all(max_len >= mel_lengths))
-
-    def test_padding_and_spectrograms(self):
-        def check_conditions(idx, linear_input, mel_input, stop_target, mel_lengths):
-            self.assertNotEqual(linear_input[idx, -1].sum(), 0)  # check padding
-            self.assertNotEqual(linear_input[idx, -2].sum(), 0)
-            self.assertNotEqual(mel_input[idx, -1].sum(), 0)
-            self.assertNotEqual(mel_input[idx, -2].sum(), 0)
-            self.assertEqual(stop_target[idx, -1], 1)
-            self.assertEqual(stop_target[idx, -2], 0)
-            self.assertEqual(stop_target[idx].sum(), 1)
-            self.assertEqual(len(mel_lengths.shape), 1)
-            self.assertEqual(mel_lengths[idx], linear_input[idx].shape[0])
-            self.assertEqual(mel_lengths[idx], mel_input[idx].shape[0])
-
-        if ok_ljspeech:
-            dataloader, _ = self._create_dataloader(1, 1, 0)
-
-            for i, data in enumerate(dataloader):
-                if i == self.max_loader_iter:
-                    break
-                linear_input = data["linear"]
-                mel_input = data["mel"]
-                mel_lengths = data["mel_lengths"]
-                stop_target = data["stop_targets"]
-                item_idx = data["item_idxs"]
-
-                # check mel_spec consistency
-                wav = np.asarray(self.ap.load_wav(item_idx[0]), dtype=np.float32)
-                mel = self.ap.melspectrogram(wav).astype("float32")
-                mel = torch.FloatTensor(mel).contiguous()
-                mel_dl = mel_input[0]
-                # NOTE: The check below should be == 0, but for an unknown
-                # reason there is a slight difference between the two matrices.
-                # TODO: Check this assert cond more in detail.
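-                # (the 1e-5 bound below is what absorbs that small float32
-                # round-off)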
- self.assertLess(abs(mel.T - mel_dl).max(), 1e-5) - - # check mel-spec correctness - mel_spec = mel_input[0].cpu().numpy() - wav = self.ap.inv_melspectrogram(mel_spec.T) - self.ap.save_wav(wav, OUTPATH + "/mel_inv_dataloader.wav") - shutil.copy(item_idx[0], OUTPATH + "/mel_target_dataloader.wav") - - # check linear-spec - linear_spec = linear_input[0].cpu().numpy() - wav = self.ap.inv_spectrogram(linear_spec.T) - self.ap.save_wav(wav, OUTPATH + "/linear_inv_dataloader.wav") - shutil.copy(item_idx[0], OUTPATH + "/linear_target_dataloader.wav") - - # check the outputs - check_conditions(0, linear_input, mel_input, stop_target, mel_lengths) - - # Test for batch size 2 - dataloader, _ = self._create_dataloader(2, 1, 0) - - for i, data in enumerate(dataloader): - if i == self.max_loader_iter: - break - linear_input = data["linear"] - mel_input = data["mel"] - mel_lengths = data["mel_lengths"] - stop_target = data["stop_targets"] - item_idx = data["item_idxs"] - - # set id to the longest sequence in the batch - if mel_lengths[0] > mel_lengths[1]: - idx = 0 - else: - idx = 1 - - # check the longer item in the batch - check_conditions(idx, linear_input, mel_input, stop_target, mel_lengths) - - # check the other item in the batch - self.assertEqual(linear_input[1 - idx, -1].sum(), 0) - self.assertEqual(mel_input[1 - idx, -1].sum(), 0) - self.assertEqual(stop_target[1, mel_lengths[1] - 1], 1) - self.assertEqual(stop_target[1, mel_lengths[1] :].sum(), stop_target.shape[1] - mel_lengths[1]) - self.assertEqual(len(mel_lengths.shape), 1) - - # check batch zero-frame conditions (zero-frame disabled) - # assert (linear_input * stop_target.unsqueeze(2)).sum() == 0 - # assert (mel_input * stop_target.unsqueeze(2)).sum() == 0 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_SHA3_384.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_SHA3_384.py deleted file mode 100644 index b0ba1bfee010476d653d9ae0788251a8ded2c552..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_SHA3_384.py +++ /dev/null @@ -1,79 +0,0 @@ -# -*- coding: utf-8 -*- -# -# SelfTest/Hash/test_SHA3_384.py: Self-test for the SHA-3/384 hash function -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
-# =================================================================== - -"""Self-test suite for Crypto.Hash.SHA3_384""" - -import unittest -from binascii import hexlify - -from Crypto.SelfTest.loader import load_test_vectors -from Crypto.SelfTest.st_common import list_test_cases -from Crypto.Hash import SHA3_384 as SHA3 -from Crypto.Util.py3compat import b - - -class APITest(unittest.TestCase): - - def test_update_after_digest(self): - msg=b("rrrrttt") - - # Normally, update() cannot be done after digest() - h = SHA3.new(data=msg[:4]) - dig1 = h.digest() - self.assertRaises(TypeError, h.update, msg[4:]) - dig2 = SHA3.new(data=msg).digest() - - # With the proper flag, it is allowed - h = SHA3.new(data=msg[:4], update_after_digest=True) - self.assertEqual(h.digest(), dig1) - # ... and the subsequent digest applies to the entire message - # up to that point - h.update(msg[4:]) - self.assertEqual(h.digest(), dig2) - - -def get_tests(config={}): - from .common import make_hash_tests - - tests = [] - - test_vectors = load_test_vectors(("Hash", "SHA3"), - "ShortMsgKAT_SHA3-384.txt", - "KAT SHA-3 384", - { "len" : lambda x: int(x) } ) or [] - - test_data = [] - for tv in test_vectors: - if tv.len == 0: - tv.msg = b("") - test_data.append((hexlify(tv.md), tv.msg, tv.desc)) - - tests += make_hash_tests(SHA3, "SHA3_384", test_data, - digest_size=SHA3.digest_size, - oid="2.16.840.1.101.3.4.2.9") - tests += list_test_cases(APITest) - return tests - -if __name__ == '__main__': - import unittest - suite = lambda: unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Debugger/libpython.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Debugger/libpython.py deleted file mode 100644 index fea626dd730f73f925b7b110ce8ca59c50d1209d..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Debugger/libpython.py +++ /dev/null @@ -1,2760 +0,0 @@ -#!/usr/bin/python - -# NOTE: this file is taken from the Python source distribution -# It can be found under Tools/gdb/libpython.py. It is shipped with Cython -# because it's not installed as a python module, and because changes are only -# merged into new python versions (v3.2+). - -''' -From gdb 7 onwards, gdb's build can be configured --with-python, allowing gdb -to be extended with Python code e.g. for library-specific data visualizations, -such as for the C++ STL types. Documentation on this API can be seen at: -http://sourceware.org/gdb/current/onlinedocs/gdb/Python-API.html - - -This python module deals with the case when the process being debugged (the -"inferior process" in gdb parlance) is itself python, or more specifically, -linked against libpython. In this situation, almost every item of data is a -(PyObject*), and having the debugger merely print their addresses is not very -enlightening. - -This module embeds knowledge about the implementation details of libpython so -that we can emit useful visualizations e.g. a string, a list, a dict, a frame -giving file/line information and the state of local variables - -In particular, given a gdb.Value corresponding to a PyObject* in the inferior -process, we can generate a "proxy value" within the gdb process. 
For example, -given a PyObject* in the inferior process that is in fact a PyListObject* -holding three PyObject* that turn out to be PyBytesObject* instances, we can -generate a proxy value within the gdb process that is a list of bytes -instances: - [b"foo", b"bar", b"baz"] - -Doing so can be expensive for complicated graphs of objects, and could take -some time, so we also have a "write_repr" method that writes a representation -of the data to a file-like object. This allows us to stop the traversal by -having the file-like object raise an exception if it gets too much data. - -With both "proxyval" and "write_repr" we keep track of the set of all addresses -visited so far in the traversal, to avoid infinite recursion due to cycles in -the graph of object references. - -We try to defer gdb.lookup_type() invocations for python types until as late as -possible: for a dynamically linked python binary, when the process starts in -the debugger, the libpython.so hasn't been dynamically loaded yet, so none of -the type names are known to the debugger - -The module also extends gdb with some python-specific commands. -''' - -# NOTE: some gdbs are linked with Python 3, so this file should be dual-syntax -# compatible (2.6+ and 3.0+). See #19308. - -from __future__ import print_function -import gdb -import os -import locale -import sys - -if sys.version_info[0] >= 3: - unichr = chr - xrange = range - long = int - -# Look up the gdb.Type for some standard types: -# Those need to be refreshed as types (pointer sizes) may change when -# gdb loads different executables - -def _type_char_ptr(): - return gdb.lookup_type('char').pointer() # char* - - -def _type_unsigned_char_ptr(): - return gdb.lookup_type('unsigned char').pointer() # unsigned char* - - -def _type_unsigned_short_ptr(): - return gdb.lookup_type('unsigned short').pointer() - - -def _type_unsigned_int_ptr(): - return gdb.lookup_type('unsigned int').pointer() - - -def _sizeof_void_p(): - return gdb.lookup_type('void').pointer().sizeof - - -# value computed later, see PyUnicodeObjectPtr.proxy() -_is_pep393 = None - -Py_TPFLAGS_HEAPTYPE = (1 << 9) -Py_TPFLAGS_LONG_SUBCLASS = (1 << 24) -Py_TPFLAGS_LIST_SUBCLASS = (1 << 25) -Py_TPFLAGS_TUPLE_SUBCLASS = (1 << 26) -Py_TPFLAGS_BYTES_SUBCLASS = (1 << 27) -Py_TPFLAGS_UNICODE_SUBCLASS = (1 << 28) -Py_TPFLAGS_DICT_SUBCLASS = (1 << 29) -Py_TPFLAGS_BASE_EXC_SUBCLASS = (1 << 30) -Py_TPFLAGS_TYPE_SUBCLASS = (1 << 31) - - -MAX_OUTPUT_LEN=1024 - -hexdigits = "0123456789abcdef" - -ENCODING = locale.getpreferredencoding() - -EVALFRAME = '_PyEval_EvalFrameDefault' - -class NullPyObjectPtr(RuntimeError): - pass - - -def safety_limit(val): - # Given an integer value from the process being debugged, limit it to some - # safety threshold so that arbitrary breakage within said process doesn't - # break the gdb process too much (e.g. sizes of iterations, sizes of lists) - return min(val, 1000) - - -def safe_range(val): - # As per range, but don't trust the value too much: cap it to a safety - # threshold in case the data was corrupted - return xrange(safety_limit(int(val))) - -if sys.version_info[0] >= 3: - def write_unicode(file, text): - file.write(text) -else: - def write_unicode(file, text): - # Write a byte or unicode string to file. Unicode strings are encoded to - # ENCODING encoding with 'backslashreplace' error handler to avoid - # UnicodeEncodeError. 
-        if isinstance(text, unicode):
-            text = text.encode(ENCODING, 'backslashreplace')
-        file.write(text)
-
-try:
-    os_fsencode = os.fsencode
-except AttributeError:
-    def os_fsencode(filename):
-        if not isinstance(filename, unicode):
-            return filename
-        encoding = sys.getfilesystemencoding()
-        if encoding == 'mbcs':
-            # mbcs doesn't support surrogateescape
-            return filename.encode(encoding)
-        encoded = []
-        for char in filename:
-            # surrogateescape error handler
-            if 0xDC80 <= ord(char) <= 0xDCFF:
-                byte = chr(ord(char) - 0xDC00)
-            else:
-                byte = char.encode(encoding)
-            encoded.append(byte)
-        return ''.join(encoded)
-
-class StringTruncated(RuntimeError):
-    pass
-
-class TruncatedStringIO(object):
-    '''Similar to io.StringIO, but can truncate the output by raising a
-    StringTruncated exception'''
-    def __init__(self, maxlen=None):
-        self._val = ''
-        self.maxlen = maxlen
-
-    def write(self, data):
-        if self.maxlen:
-            if len(data) + len(self._val) > self.maxlen:
-                # Truncation:
-                self._val += data[0:self.maxlen - len(self._val)]
-                raise StringTruncated()
-
-        self._val += data
-
-    def getvalue(self):
-        return self._val
-
-class PyObjectPtr(object):
-    """
-    Class wrapping a gdb.Value that's either a (PyObject*) within the
-    inferior process, or some subclass pointer e.g. (PyBytesObject*)
-
-    There will be a subclass for every refined PyObject type that we care
-    about.
-
-    Note that at every stage the underlying pointer could be NULL, point
-    to corrupt data, etc; this is the debugger, after all.
-    """
-    _typename = 'PyObject'
-
-    def __init__(self, gdbval, cast_to=None):
-        if cast_to:
-            self._gdbval = gdbval.cast(cast_to)
-        else:
-            self._gdbval = gdbval
-
-    def field(self, name):
-        '''
-        Get the gdb.Value for the given field within the PyObject, coping with
-        some python 2 versus python 3 differences.
-
-        Various libpython types are defined using the "PyObject_HEAD" and
-        "PyObject_VAR_HEAD" macros.
-
-        In Python 2, these are defined so that "ob_type" and (for a var
-        object) "ob_size" are fields of the type in question.
-
-        In Python 3, this is defined as an embedded PyVarObject type thus:
-           PyVarObject ob_base;
-        so that the "ob_size" field is located inside the "ob_base" field, and
-        the "ob_type" is most easily accessed by casting back to a (PyObject*).
-        '''
-        if self.is_null():
-            raise NullPyObjectPtr(self)
-
-        if name == 'ob_type':
-            pyo_ptr = self._gdbval.cast(PyObjectPtr.get_gdb_type())
-            return pyo_ptr.dereference()[name]
-
-        if name == 'ob_size':
-            pyo_ptr = self._gdbval.cast(PyVarObjectPtr.get_gdb_type())
-            return pyo_ptr.dereference()[name]
-
-        # General case: look it up inside the object:
-        return self._gdbval.dereference()[name]
-
-    def pyop_field(self, name):
-        '''
-        Get a PyObjectPtr for the given PyObject* field within this PyObject,
-        coping with some python 2 versus python 3 differences.
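-
-        For example, self.pyop_field('args') on an exception object wraps its
-        "args" tuple as an appropriately-typed PyObjectPtr subclass.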
- ''' - return PyObjectPtr.from_pyobject_ptr(self.field(name)) - - def write_field_repr(self, name, out, visited): - ''' - Extract the PyObject* field named "name", and write its representation - to file-like object "out" - ''' - field_obj = self.pyop_field(name) - field_obj.write_repr(out, visited) - - def get_truncated_repr(self, maxlen): - ''' - Get a repr-like string for the data, but truncate it at "maxlen" bytes - (ending the object graph traversal as soon as you do) - ''' - out = TruncatedStringIO(maxlen) - try: - self.write_repr(out, set()) - except StringTruncated: - # Truncation occurred: - return out.getvalue() + '...(truncated)' - - # No truncation occurred: - return out.getvalue() - - def type(self): - return PyTypeObjectPtr(self.field('ob_type')) - - def is_null(self): - return 0 == long(self._gdbval) - - def is_optimized_out(self): - ''' - Is the value of the underlying PyObject* visible to the debugger? - - This can vary with the precise version of the compiler used to build - Python, and the precise version of gdb. - - See e.g. https://bugzilla.redhat.com/show_bug.cgi?id=556975 with - PyEval_EvalFrameEx's "f" - ''' - return self._gdbval.is_optimized_out - - def safe_tp_name(self): - try: - return self.type().field('tp_name').string() - except NullPyObjectPtr: - # NULL tp_name? - return 'unknown' - except RuntimeError: - # Can't even read the object at all? - return 'unknown' - - def proxyval(self, visited): - ''' - Scrape a value from the inferior process, and try to represent it - within the gdb process, whilst (hopefully) avoiding crashes when - the remote data is corrupt. - - Derived classes will override this. - - For example, a PyIntObject* with ob_ival 42 in the inferior process - should result in an int(42) in this process. - - visited: a set of all gdb.Value pyobject pointers already visited - whilst generating this value (to guard against infinite recursion when - visiting object graphs with loops). Analogous to Py_ReprEnter and - Py_ReprLeave - ''' - - class FakeRepr(object): - """ - Class representing a non-descript PyObject* value in the inferior - process for when we don't have a custom scraper, intended to have - a sane repr(). - """ - - def __init__(self, tp_name, address): - self.tp_name = tp_name - self.address = address - - def __repr__(self): - # For the NULL pointer, we have no way of knowing a type, so - # special-case it as per - # http://bugs.python.org/issue8032#msg100882 - if self.address == 0: - return '0x0' - return '<%s at remote 0x%x>' % (self.tp_name, self.address) - - return FakeRepr(self.safe_tp_name(), - long(self._gdbval)) - - def write_repr(self, out, visited): - ''' - Write a string representation of the value scraped from the inferior - process to "out", a file-like object. - ''' - # Default implementation: generate a proxy value and write its repr - # However, this could involve a lot of work for complicated objects, - # so for derived classes we specialize this - return out.write(repr(self.proxyval(visited))) - - @classmethod - def subclass_from_type(cls, t): - ''' - Given a PyTypeObjectPtr instance wrapping a gdb.Value that's a - (PyTypeObject*), determine the corresponding subclass of PyObjectPtr - to use - - Ideally, we would look up the symbols for the global types, but that - isn't working yet: - (gdb) python print gdb.lookup_symbol('PyList_Type')[0].value - Traceback (most recent call last): - File "", line 1, in - NotImplementedError: Symbol type not yet supported in Python scripts. - Error while executing Python code. 
- - For now, we use tp_flags, after doing some string comparisons on the - tp_name for some special-cases that don't seem to be visible through - flags - ''' - try: - tp_name = t.field('tp_name').string() - tp_flags = int(t.field('tp_flags')) - except RuntimeError: - # Handle any kind of error e.g. NULL ptrs by simply using the base - # class - return cls - - #print('tp_flags = 0x%08x' % tp_flags) - #print('tp_name = %r' % tp_name) - - name_map = {'bool': PyBoolObjectPtr, - 'classobj': PyClassObjectPtr, - 'NoneType': PyNoneStructPtr, - 'frame': PyFrameObjectPtr, - 'set' : PySetObjectPtr, - 'frozenset' : PySetObjectPtr, - 'builtin_function_or_method' : PyCFunctionObjectPtr, - 'method-wrapper': wrapperobject, - } - if tp_name in name_map: - return name_map[tp_name] - - if tp_flags & Py_TPFLAGS_HEAPTYPE: - return HeapTypeObjectPtr - - if tp_flags & Py_TPFLAGS_LONG_SUBCLASS: - return PyLongObjectPtr - if tp_flags & Py_TPFLAGS_LIST_SUBCLASS: - return PyListObjectPtr - if tp_flags & Py_TPFLAGS_TUPLE_SUBCLASS: - return PyTupleObjectPtr - if tp_flags & Py_TPFLAGS_BYTES_SUBCLASS: - return PyBytesObjectPtr - if tp_flags & Py_TPFLAGS_UNICODE_SUBCLASS: - return PyUnicodeObjectPtr - if tp_flags & Py_TPFLAGS_DICT_SUBCLASS: - return PyDictObjectPtr - if tp_flags & Py_TPFLAGS_BASE_EXC_SUBCLASS: - return PyBaseExceptionObjectPtr - #if tp_flags & Py_TPFLAGS_TYPE_SUBCLASS: - # return PyTypeObjectPtr - - # Use the base class: - return cls - - @classmethod - def from_pyobject_ptr(cls, gdbval): - ''' - Try to locate the appropriate derived class dynamically, and cast - the pointer accordingly. - ''' - try: - p = PyObjectPtr(gdbval) - cls = cls.subclass_from_type(p.type()) - return cls(gdbval, cast_to=cls.get_gdb_type()) - except RuntimeError: - # Handle any kind of error e.g. NULL ptrs by simply using the base - # class - pass - return cls(gdbval) - - @classmethod - def get_gdb_type(cls): - return gdb.lookup_type(cls._typename).pointer() - - def as_address(self): - return long(self._gdbval) - -class PyVarObjectPtr(PyObjectPtr): - _typename = 'PyVarObject' - -class ProxyAlreadyVisited(object): - ''' - Placeholder proxy to use when protecting against infinite recursion due to - loops in the object graph. 
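-    For example, a list that contains itself is rendered as "[...]" rather
-    than recursing forever.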
- - Analogous to the values emitted by the users of Py_ReprEnter and Py_ReprLeave - ''' - def __init__(self, rep): - self._rep = rep - - def __repr__(self): - return self._rep - - -def _write_instance_repr(out, visited, name, pyop_attrdict, address): - '''Shared code for use by all classes: - write a representation to file-like object "out"''' - out.write('<') - out.write(name) - - # Write dictionary of instance attributes: - if isinstance(pyop_attrdict, PyDictObjectPtr): - out.write('(') - first = True - for pyop_arg, pyop_val in pyop_attrdict.iteritems(): - if not first: - out.write(', ') - first = False - out.write(pyop_arg.proxyval(visited)) - out.write('=') - pyop_val.write_repr(out, visited) - out.write(')') - out.write(' at remote 0x%x>' % address) - - -class InstanceProxy(object): - - def __init__(self, cl_name, attrdict, address): - self.cl_name = cl_name - self.attrdict = attrdict - self.address = address - - def __repr__(self): - if isinstance(self.attrdict, dict): - kwargs = ', '.join(["%s=%r" % (arg, val) - for arg, val in self.attrdict.iteritems()]) - return '<%s(%s) at remote 0x%x>' % (self.cl_name, - kwargs, self.address) - else: - return '<%s at remote 0x%x>' % (self.cl_name, - self.address) - -def _PyObject_VAR_SIZE(typeobj, nitems): - if _PyObject_VAR_SIZE._type_size_t is None: - _PyObject_VAR_SIZE._type_size_t = gdb.lookup_type('size_t') - - return ( ( typeobj.field('tp_basicsize') + - nitems * typeobj.field('tp_itemsize') + - (_sizeof_void_p() - 1) - ) & ~(_sizeof_void_p() - 1) - ).cast(_PyObject_VAR_SIZE._type_size_t) -_PyObject_VAR_SIZE._type_size_t = None - -class HeapTypeObjectPtr(PyObjectPtr): - _typename = 'PyObject' - - def get_attr_dict(self): - ''' - Get the PyDictObject ptr representing the attribute dictionary - (or None if there's a problem) - ''' - try: - typeobj = self.type() - dictoffset = int_from_int(typeobj.field('tp_dictoffset')) - if dictoffset != 0: - if dictoffset < 0: - type_PyVarObject_ptr = gdb.lookup_type('PyVarObject').pointer() - tsize = int_from_int(self._gdbval.cast(type_PyVarObject_ptr)['ob_size']) - if tsize < 0: - tsize = -tsize - size = _PyObject_VAR_SIZE(typeobj, tsize) - dictoffset += size - assert dictoffset > 0 - assert dictoffset % _sizeof_void_p() == 0 - - dictptr = self._gdbval.cast(_type_char_ptr()) + dictoffset - PyObjectPtrPtr = PyObjectPtr.get_gdb_type().pointer() - dictptr = dictptr.cast(PyObjectPtrPtr) - return PyObjectPtr.from_pyobject_ptr(dictptr.dereference()) - except RuntimeError: - # Corrupt data somewhere; fail safe - pass - - # Not found, or some kind of error: - return None - - def proxyval(self, visited): - ''' - Support for classes. 
-
-        Currently we just locate the dictionary using a transliteration to
-        python of _PyObject_GetDictPtr, ignoring descriptors
-        '''
-        # Guard against infinite loops:
-        if self.as_address() in visited:
-            return ProxyAlreadyVisited('<...>')
-        visited.add(self.as_address())
-
-        pyop_attr_dict = self.get_attr_dict()
-        if pyop_attr_dict:
-            attr_dict = pyop_attr_dict.proxyval(visited)
-        else:
-            attr_dict = {}
-        tp_name = self.safe_tp_name()
-
-        # Class:
-        return InstanceProxy(tp_name, attr_dict, long(self._gdbval))
-
-    def write_repr(self, out, visited):
-        # Guard against infinite loops:
-        if self.as_address() in visited:
-            out.write('<...>')
-            return
-        visited.add(self.as_address())
-
-        pyop_attrdict = self.get_attr_dict()
-        _write_instance_repr(out, visited,
-                             self.safe_tp_name(), pyop_attrdict, self.as_address())
-
-class ProxyException(Exception):
-    def __init__(self, tp_name, args):
-        self.tp_name = tp_name
-        self.args = args
-
-    def __repr__(self):
-        return '%s%r' % (self.tp_name, self.args)
-
-class PyBaseExceptionObjectPtr(PyObjectPtr):
-    """
-    Class wrapping a gdb.Value that's a PyBaseExceptionObject* i.e. an exception
-    within the process being debugged.
-    """
-    _typename = 'PyBaseExceptionObject'
-
-    def proxyval(self, visited):
-        # Guard against infinite loops:
-        if self.as_address() in visited:
-            return ProxyAlreadyVisited('(...)')
-        visited.add(self.as_address())
-        arg_proxy = self.pyop_field('args').proxyval(visited)
-        return ProxyException(self.safe_tp_name(),
-                              arg_proxy)
-
-    def write_repr(self, out, visited):
-        # Guard against infinite loops:
-        if self.as_address() in visited:
-            out.write('(...)')
-            return
-        visited.add(self.as_address())
-
-        out.write(self.safe_tp_name())
-        self.write_field_repr('args', out, visited)
-
-class PyClassObjectPtr(PyObjectPtr):
-    """
-    Class wrapping a gdb.Value that's a PyClassObject* i.e. a <classobj>
-    instance within the process being debugged.
-    """
-    _typename = 'PyClassObject'
-
-
-class BuiltInFunctionProxy(object):
-    def __init__(self, ml_name):
-        self.ml_name = ml_name
-
-    def __repr__(self):
-        return "<built-in function %s>" % self.ml_name
-
-class BuiltInMethodProxy(object):
-    def __init__(self, ml_name, pyop_m_self):
-        self.ml_name = ml_name
-        self.pyop_m_self = pyop_m_self
-
-    def __repr__(self):
-        return ('<built-in method %s of %s object at remote 0x%x>'
-                % (self.ml_name,
-                   self.pyop_m_self.safe_tp_name(),
-                   self.pyop_m_self.as_address())
-                )
-
-class PyCFunctionObjectPtr(PyObjectPtr):
-    """
-    Class wrapping a gdb.Value that's a PyCFunctionObject*
-    (see Include/methodobject.h and Objects/methodobject.c)
-    """
-    _typename = 'PyCFunctionObject'
-
-    def proxyval(self, visited):
-        m_ml = self.field('m_ml') # m_ml is a (PyMethodDef*)
-        ml_name = m_ml['ml_name'].string()
-
-        pyop_m_self = self.pyop_field('m_self')
-        if pyop_m_self.is_null():
-            return BuiltInFunctionProxy(ml_name)
-        else:
-            return BuiltInMethodProxy(ml_name, pyop_m_self)
-
-
-class PyCodeObjectPtr(PyObjectPtr):
-    """
-    Class wrapping a gdb.Value that's a PyCodeObject* i.e. a <code> instance
-    within the process being debugged.
- """ - _typename = 'PyCodeObject' - - def addr2line(self, addrq): - ''' - Get the line number for a given bytecode offset - - Analogous to PyCode_Addr2Line; translated from pseudocode in - Objects/lnotab_notes.txt - ''' - co_lnotab = self.pyop_field('co_lnotab').proxyval(set()) - - # Initialize lineno to co_firstlineno as per PyCode_Addr2Line - # not 0, as lnotab_notes.txt has it: - lineno = int_from_int(self.field('co_firstlineno')) - - addr = 0 - for addr_incr, line_incr in zip(co_lnotab[::2], co_lnotab[1::2]): - addr += ord(addr_incr) - if addr > addrq: - return lineno - lineno += ord(line_incr) - return lineno - - -class PyDictObjectPtr(PyObjectPtr): - """ - Class wrapping a gdb.Value that's a PyDictObject* i.e. a dict instance - within the process being debugged. - """ - _typename = 'PyDictObject' - - def iteritems(self): - ''' - Yields a sequence of (PyObjectPtr key, PyObjectPtr value) pairs, - analogous to dict.iteritems() - ''' - keys = self.field('ma_keys') - values = self.field('ma_values') - entries, nentries = self._get_entries(keys) - for i in safe_range(nentries): - ep = entries[i] - if long(values): - pyop_value = PyObjectPtr.from_pyobject_ptr(values[i]) - else: - pyop_value = PyObjectPtr.from_pyobject_ptr(ep['me_value']) - if not pyop_value.is_null(): - pyop_key = PyObjectPtr.from_pyobject_ptr(ep['me_key']) - yield (pyop_key, pyop_value) - - def proxyval(self, visited): - # Guard against infinite loops: - if self.as_address() in visited: - return ProxyAlreadyVisited('{...}') - visited.add(self.as_address()) - - result = {} - for pyop_key, pyop_value in self.iteritems(): - proxy_key = pyop_key.proxyval(visited) - proxy_value = pyop_value.proxyval(visited) - result[proxy_key] = proxy_value - return result - - def write_repr(self, out, visited): - # Guard against infinite loops: - if self.as_address() in visited: - out.write('{...}') - return - visited.add(self.as_address()) - - out.write('{') - first = True - for pyop_key, pyop_value in self.iteritems(): - if not first: - out.write(', ') - first = False - pyop_key.write_repr(out, visited) - out.write(': ') - pyop_value.write_repr(out, visited) - out.write('}') - - def _get_entries(self, keys): - dk_nentries = int(keys['dk_nentries']) - dk_size = int(keys['dk_size']) - try: - # <= Python 3.5 - return keys['dk_entries'], dk_size - except RuntimeError: - # >= Python 3.6 - pass - - if dk_size <= 0xFF: - offset = dk_size - elif dk_size <= 0xFFFF: - offset = 2 * dk_size - elif dk_size <= 0xFFFFFFFF: - offset = 4 * dk_size - else: - offset = 8 * dk_size - - ent_addr = keys['dk_indices']['as_1'].address - ent_addr = ent_addr.cast(_type_unsigned_char_ptr()) + offset - ent_ptr_t = gdb.lookup_type('PyDictKeyEntry').pointer() - ent_addr = ent_addr.cast(ent_ptr_t) - - return ent_addr, dk_nentries - - -class PyListObjectPtr(PyObjectPtr): - _typename = 'PyListObject' - - def __getitem__(self, i): - # Get the gdb.Value for the (PyObject*) with the given index: - field_ob_item = self.field('ob_item') - return field_ob_item[i] - - def proxyval(self, visited): - # Guard against infinite loops: - if self.as_address() in visited: - return ProxyAlreadyVisited('[...]') - visited.add(self.as_address()) - - result = [PyObjectPtr.from_pyobject_ptr(self[i]).proxyval(visited) - for i in safe_range(int_from_int(self.field('ob_size')))] - return result - - def write_repr(self, out, visited): - # Guard against infinite loops: - if self.as_address() in visited: - out.write('[...]') - return - visited.add(self.as_address()) - - out.write('[') - for i in 
safe_range(int_from_int(self.field('ob_size'))): - if i > 0: - out.write(', ') - element = PyObjectPtr.from_pyobject_ptr(self[i]) - element.write_repr(out, visited) - out.write(']') - -class PyLongObjectPtr(PyObjectPtr): - _typename = 'PyLongObject' - - def proxyval(self, visited): - ''' - Python's Include/longobjrep.h has this declaration: - struct _longobject { - PyObject_VAR_HEAD - digit ob_digit[1]; - }; - - with this description: - The absolute value of a number is equal to - SUM(for i=0 through abs(ob_size)-1) ob_digit[i] * 2**(SHIFT*i) - Negative numbers are represented with ob_size < 0; - zero is represented by ob_size == 0. - - where SHIFT can be either: - #define PyLong_SHIFT 30 - #define PyLong_SHIFT 15 - ''' - ob_size = long(self.field('ob_size')) - if ob_size == 0: - return 0 - - ob_digit = self.field('ob_digit') - - if gdb.lookup_type('digit').sizeof == 2: - SHIFT = 15 - else: - SHIFT = 30 - - digits = [long(ob_digit[i]) * 2**(SHIFT*i) - for i in safe_range(abs(ob_size))] - result = sum(digits) - if ob_size < 0: - result = -result - return result - - def write_repr(self, out, visited): - # Write this out as a Python 3 int literal, i.e. without the "L" suffix - proxy = self.proxyval(visited) - out.write("%s" % proxy) - - -class PyBoolObjectPtr(PyLongObjectPtr): - """ - Class wrapping a gdb.Value that's a PyBoolObject* i.e. one of the two - instances (Py_True/Py_False) within the process being debugged. - """ - def proxyval(self, visited): - if PyLongObjectPtr.proxyval(self, visited): - return True - else: - return False - -class PyNoneStructPtr(PyObjectPtr): - """ - Class wrapping a gdb.Value that's a PyObject* pointing to the - singleton (we hope) _Py_NoneStruct with ob_type PyNone_Type - """ - _typename = 'PyObject' - - def proxyval(self, visited): - return None - - -class PyFrameObjectPtr(PyObjectPtr): - _typename = 'PyFrameObject' - - def __init__(self, gdbval, cast_to=None): - PyObjectPtr.__init__(self, gdbval, cast_to) - - if not self.is_optimized_out(): - self.co = PyCodeObjectPtr.from_pyobject_ptr(self.field('f_code')) - self.co_name = self.co.pyop_field('co_name') - self.co_filename = self.co.pyop_field('co_filename') - - self.f_lineno = int_from_int(self.field('f_lineno')) - self.f_lasti = int_from_int(self.field('f_lasti')) - self.co_nlocals = int_from_int(self.co.field('co_nlocals')) - self.co_varnames = PyTupleObjectPtr.from_pyobject_ptr(self.co.field('co_varnames')) - - def iter_locals(self): - ''' - Yield a sequence of (name,value) pairs of PyObjectPtr instances, for - the local variables of this frame - ''' - if self.is_optimized_out(): - return - - f_localsplus = self.field('f_localsplus') - for i in safe_range(self.co_nlocals): - pyop_value = PyObjectPtr.from_pyobject_ptr(f_localsplus[i]) - if not pyop_value.is_null(): - pyop_name = PyObjectPtr.from_pyobject_ptr(self.co_varnames[i]) - yield (pyop_name, pyop_value) - - def iter_globals(self): - ''' - Yield a sequence of (name,value) pairs of PyObjectPtr instances, for - the global variables of this frame - ''' - if self.is_optimized_out(): - return () - - pyop_globals = self.pyop_field('f_globals') - return pyop_globals.iteritems() - - def iter_builtins(self): - ''' - Yield a sequence of (name,value) pairs of PyObjectPtr instances, for - the builtin variables - ''' - if self.is_optimized_out(): - return () - - pyop_builtins = self.pyop_field('f_builtins') - return pyop_builtins.iteritems() - - def get_var_by_name(self, name): - ''' - Look for the named local variable, returning a (PyObjectPtr, scope) pair - 
where scope is a string 'local', 'global', 'builtin' - - If not found, return (None, None) - ''' - for pyop_name, pyop_value in self.iter_locals(): - if name == pyop_name.proxyval(set()): - return pyop_value, 'local' - for pyop_name, pyop_value in self.iter_globals(): - if name == pyop_name.proxyval(set()): - return pyop_value, 'global' - for pyop_name, pyop_value in self.iter_builtins(): - if name == pyop_name.proxyval(set()): - return pyop_value, 'builtin' - return None, None - - def filename(self): - '''Get the path of the current Python source file, as a string''' - if self.is_optimized_out(): - return '(frame information optimized out)' - return self.co_filename.proxyval(set()) - - def current_line_num(self): - '''Get current line number as an integer (1-based) - - Translated from PyFrame_GetLineNumber and PyCode_Addr2Line - - See Objects/lnotab_notes.txt - ''' - if self.is_optimized_out(): - return None - f_trace = self.field('f_trace') - if long(f_trace) != 0: - # we have a non-NULL f_trace: - return self.f_lineno - else: - #try: - return self.co.addr2line(self.f_lasti) - #except ValueError: - # return self.f_lineno - - def current_line(self): - '''Get the text of the current source line as a string, with a trailing - newline character''' - if self.is_optimized_out(): - return '(frame information optimized out)' - filename = self.filename() - try: - f = open(os_fsencode(filename), 'r') - except IOError: - return None - with f: - all_lines = f.readlines() - # Convert from 1-based current_line_num to 0-based list offset: - return all_lines[self.current_line_num()-1] - - def write_repr(self, out, visited): - if self.is_optimized_out(): - out.write('(frame information optimized out)') - return - out.write('Frame 0x%x, for file %s, line %i, in %s (' - % (self.as_address(), - self.co_filename.proxyval(visited), - self.current_line_num(), - self.co_name.proxyval(visited))) - first = True - for pyop_name, pyop_value in self.iter_locals(): - if not first: - out.write(', ') - first = False - - out.write(pyop_name.proxyval(visited)) - out.write('=') - pyop_value.write_repr(out, visited) - - out.write(')') - - def print_traceback(self): - if self.is_optimized_out(): - sys.stdout.write(' (frame information optimized out)\n') - return - visited = set() - sys.stdout.write(' File "%s", line %i, in %s\n' - % (self.co_filename.proxyval(visited), - self.current_line_num(), - self.co_name.proxyval(visited))) - -class PySetObjectPtr(PyObjectPtr): - _typename = 'PySetObject' - - @classmethod - def _dummy_key(self): - return gdb.lookup_global_symbol('_PySet_Dummy').value() - - def __iter__(self): - dummy_ptr = self._dummy_key() - table = self.field('table') - for i in safe_range(self.field('mask') + 1): - setentry = table[i] - key = setentry['key'] - if key != 0 and key != dummy_ptr: - yield PyObjectPtr.from_pyobject_ptr(key) - - def proxyval(self, visited): - # Guard against infinite loops: - if self.as_address() in visited: - return ProxyAlreadyVisited('%s(...)' % self.safe_tp_name()) - visited.add(self.as_address()) - - members = (key.proxyval(visited) for key in self) - if self.safe_tp_name() == 'frozenset': - return frozenset(members) - else: - return set(members) - - def write_repr(self, out, visited): - # Emulate Python 3's set_repr - tp_name = self.safe_tp_name() - - # Guard against infinite loops: - if self.as_address() in visited: - out.write('(...)') - return - visited.add(self.as_address()) - - # Python 3's set_repr special-cases the empty set: - if not self.field('used'): - out.write(tp_name) 
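-            # e.g. an empty frozenset is emitted as "frozenset()"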
- out.write('()') - return - - # Python 3 uses {} for set literals: - if tp_name != 'set': - out.write(tp_name) - out.write('(') - - out.write('{') - first = True - for key in self: - if not first: - out.write(', ') - first = False - key.write_repr(out, visited) - out.write('}') - - if tp_name != 'set': - out.write(')') - - -class PyBytesObjectPtr(PyObjectPtr): - _typename = 'PyBytesObject' - - def __str__(self): - field_ob_size = self.field('ob_size') - field_ob_sval = self.field('ob_sval') - char_ptr = field_ob_sval.address.cast(_type_unsigned_char_ptr()) - return ''.join([chr(char_ptr[i]) for i in safe_range(field_ob_size)]) - - def proxyval(self, visited): - return str(self) - - def write_repr(self, out, visited): - # Write this out as a Python 3 bytes literal, i.e. with a "b" prefix - - # Get a PyStringObject* within the Python 2 gdb process: - proxy = self.proxyval(visited) - - # Transliteration of Python 3's Objects/bytesobject.c:PyBytes_Repr - # to Python 2 code: - quote = "'" - if "'" in proxy and not '"' in proxy: - quote = '"' - out.write('b') - out.write(quote) - for byte in proxy: - if byte == quote or byte == '\\': - out.write('\\') - out.write(byte) - elif byte == '\t': - out.write('\\t') - elif byte == '\n': - out.write('\\n') - elif byte == '\r': - out.write('\\r') - elif byte < ' ' or ord(byte) >= 0x7f: - out.write('\\x') - out.write(hexdigits[(ord(byte) & 0xf0) >> 4]) - out.write(hexdigits[ord(byte) & 0xf]) - else: - out.write(byte) - out.write(quote) - - -class PyStringObjectPtr(PyBytesObjectPtr): - _typename = 'PyStringObject' - - -class PyTupleObjectPtr(PyObjectPtr): - _typename = 'PyTupleObject' - - def __getitem__(self, i): - # Get the gdb.Value for the (PyObject*) with the given index: - field_ob_item = self.field('ob_item') - return field_ob_item[i] - - def proxyval(self, visited): - # Guard against infinite loops: - if self.as_address() in visited: - return ProxyAlreadyVisited('(...)') - visited.add(self.as_address()) - - result = tuple(PyObjectPtr.from_pyobject_ptr(self[i]).proxyval(visited) - for i in safe_range(int_from_int(self.field('ob_size')))) - return result - - def write_repr(self, out, visited): - # Guard against infinite loops: - if self.as_address() in visited: - out.write('(...)') - return - visited.add(self.as_address()) - - out.write('(') - for i in safe_range(int_from_int(self.field('ob_size'))): - if i > 0: - out.write(', ') - element = PyObjectPtr.from_pyobject_ptr(self[i]) - element.write_repr(out, visited) - if self.field('ob_size') == 1: - out.write(',)') - else: - out.write(')') - -class PyTypeObjectPtr(PyObjectPtr): - _typename = 'PyTypeObject' - - -def _unichr_is_printable(char): - # Logic adapted from Python 3's Tools/unicode/makeunicodedata.py - if char == u" ": - return True - import unicodedata - return unicodedata.category(char) not in ("C", "Z") - -if sys.maxunicode >= 0x10000: - _unichr = unichr -else: - # Needed for proper surrogate support if sizeof(Py_UNICODE) is 2 in gdb - def _unichr(x): - if x < 0x10000: - return unichr(x) - x -= 0x10000 - ch1 = 0xD800 | (x >> 10) - ch2 = 0xDC00 | (x & 0x3FF) - return unichr(ch1) + unichr(ch2) - - -class PyUnicodeObjectPtr(PyObjectPtr): - _typename = 'PyUnicodeObject' - - def char_width(self): - _type_Py_UNICODE = gdb.lookup_type('Py_UNICODE') - return _type_Py_UNICODE.sizeof - - def proxyval(self, visited): - global _is_pep393 - if _is_pep393 is None: - fields = gdb.lookup_type('PyUnicodeObject').target().fields() - _is_pep393 = 'data' in [f.name for f in fields] - if _is_pep393: - # Python 
3.3 and newer - may_have_surrogates = False - compact = self.field('_base') - ascii = compact['_base'] - state = ascii['state'] - is_compact_ascii = (int(state['ascii']) and int(state['compact'])) - if not int(state['ready']): - # string is not ready - field_length = long(compact['wstr_length']) - may_have_surrogates = True - field_str = ascii['wstr'] - else: - field_length = long(ascii['length']) - if is_compact_ascii: - field_str = ascii.address + 1 - elif int(state['compact']): - field_str = compact.address + 1 - else: - field_str = self.field('data')['any'] - repr_kind = int(state['kind']) - if repr_kind == 1: - field_str = field_str.cast(_type_unsigned_char_ptr()) - elif repr_kind == 2: - field_str = field_str.cast(_type_unsigned_short_ptr()) - elif repr_kind == 4: - field_str = field_str.cast(_type_unsigned_int_ptr()) - else: - # Python 3.2 and earlier - field_length = long(self.field('length')) - field_str = self.field('str') - may_have_surrogates = self.char_width() == 2 - - # Gather a list of ints from the Py_UNICODE array; these are either - # UCS-1, UCS-2 or UCS-4 code points: - if not may_have_surrogates: - Py_UNICODEs = [int(field_str[i]) for i in safe_range(field_length)] - else: - # A more elaborate routine if sizeof(Py_UNICODE) is 2 in the - # inferior process: we must join surrogate pairs. - Py_UNICODEs = [] - i = 0 - limit = safety_limit(field_length) - while i < limit: - ucs = int(field_str[i]) - i += 1 - if ucs < 0xD800 or ucs >= 0xDC00 or i == field_length: - Py_UNICODEs.append(ucs) - continue - # This could be a surrogate pair. - ucs2 = int(field_str[i]) - if ucs2 < 0xDC00 or ucs2 > 0xDFFF: - continue - code = (ucs & 0x03FF) << 10 - code |= ucs2 & 0x03FF - code += 0x00010000 - Py_UNICODEs.append(code) - i += 1 - - # Convert the int code points to unicode characters, and generate a - # local unicode instance. - # This splits surrogate pairs if sizeof(Py_UNICODE) is 2 here (in gdb). - result = u''.join([ - (_unichr(ucs) if ucs <= 0x10ffff else '\ufffd') - for ucs in Py_UNICODEs]) - return result - - def write_repr(self, out, visited): - # Write this out as a Python 3 str literal, i.e. without a "u" prefix - - # Get a PyUnicodeObject* within the Python 2 gdb process: - proxy = self.proxyval(visited) - - # Transliteration of Python 3's Object/unicodeobject.c:unicode_repr - # to Python 2: - if "'" in proxy and '"' not in proxy: - quote = '"' - else: - quote = "'" - out.write(quote) - - i = 0 - while i < len(proxy): - ch = proxy[i] - i += 1 - - # Escape quotes and backslashes - if ch == quote or ch == '\\': - out.write('\\') - out.write(ch) - - # Map special whitespace to '\t', \n', '\r' - elif ch == '\t': - out.write('\\t') - elif ch == '\n': - out.write('\\n') - elif ch == '\r': - out.write('\\r') - - # Map non-printable US ASCII to '\xhh' */ - elif ch < ' ' or ch == 0x7F: - out.write('\\x') - out.write(hexdigits[(ord(ch) >> 4) & 0x000F]) - out.write(hexdigits[ord(ch) & 0x000F]) - - # Copy ASCII characters as-is - elif ord(ch) < 0x7F: - out.write(ch) - - # Non-ASCII characters - else: - ucs = ch - ch2 = None - if sys.maxunicode < 0x10000: - # If sizeof(Py_UNICODE) is 2 here (in gdb), join - # surrogate pairs before calling _unichr_is_printable. 
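-                    # (e.g. U+1F600 would be stored as the surrogate pair
-                    #  0xD83D, 0xDE00 when Py_UNICODE is 16 bits wide)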
-                    if (i < len(proxy)
-                    and 0xD800 <= ord(ch) < 0xDC00 \
-                    and 0xDC00 <= ord(proxy[i]) <= 0xDFFF):
-                        ch2 = proxy[i]
-                        ucs = ch + ch2
-                        i += 1
-
-                # Unfortunately, Python 2's unicode type doesn't seem
-                # to expose the "isprintable" method
-                printable = _unichr_is_printable(ucs)
-                if printable:
-                    try:
-                        ucs.encode(ENCODING)
-                    except UnicodeEncodeError:
-                        printable = False
-
-                # Map Unicode whitespace and control characters
-                # (categories Z* and C* except ASCII space)
-                if not printable:
-                    if ch2 is not None:
-                        # Match Python 3's representation of non-printable
-                        # wide characters.
-                        code = (ord(ch) & 0x03FF) << 10
-                        code |= ord(ch2) & 0x03FF
-                        code += 0x00010000
-                    else:
-                        code = ord(ucs)
-
-                    # Map 8-bit characters to '\\xhh'
-                    if code <= 0xff:
-                        out.write('\\x')
-                        out.write(hexdigits[(code >> 4) & 0x000F])
-                        out.write(hexdigits[code & 0x000F])
-                    # Map 21-bit characters to '\U00xxxxxx'
-                    elif code >= 0x10000:
-                        out.write('\\U')
-                        out.write(hexdigits[(code >> 28) & 0x0000000F])
-                        out.write(hexdigits[(code >> 24) & 0x0000000F])
-                        out.write(hexdigits[(code >> 20) & 0x0000000F])
-                        out.write(hexdigits[(code >> 16) & 0x0000000F])
-                        out.write(hexdigits[(code >> 12) & 0x0000000F])
-                        out.write(hexdigits[(code >> 8) & 0x0000000F])
-                        out.write(hexdigits[(code >> 4) & 0x0000000F])
-                        out.write(hexdigits[code & 0x0000000F])
-                    # Map 16-bit characters to '\uxxxx'
-                    else:
-                        out.write('\\u')
-                        out.write(hexdigits[(code >> 12) & 0x000F])
-                        out.write(hexdigits[(code >> 8) & 0x000F])
-                        out.write(hexdigits[(code >> 4) & 0x000F])
-                        out.write(hexdigits[code & 0x000F])
-                else:
-                    # Copy characters as-is
-                    out.write(ch)
-                    if ch2 is not None:
-                        out.write(ch2)
-
-        out.write(quote)
-
-
-class wrapperobject(PyObjectPtr):
-    _typename = 'wrapperobject'
-
-    def safe_name(self):
-        try:
-            name = self.field('descr')['d_base']['name'].string()
-            return repr(name)
-        except (NullPyObjectPtr, RuntimeError):
-            return '<unknown name>'
-
-    def safe_tp_name(self):
-        try:
-            return self.field('self')['ob_type']['tp_name'].string()
-        except (NullPyObjectPtr, RuntimeError):
-            return '<unknown tp_name>'
-
-    def safe_self_addresss(self):
-        try:
-            address = long(self.field('self'))
-            return '%#x' % address
-        except (NullPyObjectPtr, RuntimeError):
-            return '<failed to get self address>'
-
-    def proxyval(self, visited):
-        name = self.safe_name()
-        tp_name = self.safe_tp_name()
-        self_address = self.safe_self_addresss()
-        return ("<method-wrapper %s of %s object at %s>"
-                % (name, tp_name, self_address))
-
-    def write_repr(self, out, visited):
-        proxy = self.proxyval(visited)
-        out.write(proxy)
-
-
-def int_from_int(gdbval):
-    return int(str(gdbval))
-
-
-def stringify(val):
-    # TODO: repr() puts everything on one line; pformat can be nicer, but
-    # can lead to v.long results; this function isolates the choice
-    if True:
-        return repr(val)
-    else:
-        from pprint import pformat
-        return pformat(val)
-
-
-class PyObjectPtrPrinter:
-    "Prints a (PyObject*)"
-
-    def __init__ (self, gdbval):
-        self.gdbval = gdbval
-
-    def to_string (self):
-        pyop = PyObjectPtr.from_pyobject_ptr(self.gdbval)
-        if True:
-            return pyop.get_truncated_repr(MAX_OUTPUT_LEN)
-        else:
-            # Generate full proxy value then stringify it.
- # Doing so could be expensive - proxyval = pyop.proxyval(set()) - return stringify(proxyval) - -def pretty_printer_lookup(gdbval): - type = gdbval.type.unqualified() - if type.code != gdb.TYPE_CODE_PTR: - return None - - type = type.target().unqualified() - t = str(type) - if t in ("PyObject", "PyFrameObject", "PyUnicodeObject", "wrapperobject"): - return PyObjectPtrPrinter(gdbval) - -""" -During development, I've been manually invoking the code in this way: -(gdb) python - -import sys -sys.path.append('/home/david/coding/python-gdb') -import libpython -end - -then reloading it after each edit like this: -(gdb) python reload(libpython) - -The following code should ensure that the prettyprinter is registered -if the code is autoloaded by gdb when visiting libpython.so, provided -that this python file is installed to the same path as the library (or its -.debug file) plus a "-gdb.py" suffix, e.g: - /usr/lib/libpython2.6.so.1.0-gdb.py - /usr/lib/debug/usr/lib/libpython2.6.so.1.0.debug-gdb.py -""" -def register (obj): - if obj is None: - obj = gdb - - # Wire up the pretty-printer - obj.pretty_printers.append(pretty_printer_lookup) - -register (gdb.current_objfile ()) - - - -# Unfortunately, the exact API exposed by the gdb module varies somewhat -# from build to build -# See http://bugs.python.org/issue8279?#msg102276 - -class Frame(object): - ''' - Wrapper for gdb.Frame, adding various methods - ''' - def __init__(self, gdbframe): - self._gdbframe = gdbframe - - def older(self): - older = self._gdbframe.older() - if older: - return Frame(older) - else: - return None - - def newer(self): - newer = self._gdbframe.newer() - if newer: - return Frame(newer) - else: - return None - - def select(self): - '''If supported, select this frame and return True; return False if unsupported - - Not all builds have a gdb.Frame.select method; seems to be present on Fedora 12 - onwards, but absent on Ubuntu buildbot''' - if not hasattr(self._gdbframe, 'select'): - print ('Unable to select frame: ' - 'this build of gdb does not expose a gdb.Frame.select method') - return False - self._gdbframe.select() - return True - - def get_index(self): - '''Calculate index of frame, starting at 0 for the newest frame within - this thread''' - index = 0 - # Go down until you reach the newest frame: - iter_frame = self - while iter_frame.newer(): - index += 1 - iter_frame = iter_frame.newer() - return index - - # We divide frames into: - # - "python frames": - # - "bytecode frames" i.e. PyEval_EvalFrameEx - # - "other python frames": things that are of interest from a python - # POV, but aren't bytecode (e.g. GC, GIL) - # - everything else - - def is_python_frame(self): - '''Is this a _PyEval_EvalFrameDefault frame, or some other important - frame? (see is_other_python_frame for what "important" means in this - context)''' - if self.is_evalframe(): - return True - if self.is_other_python_frame(): - return True - return False - - def is_evalframe(self): - '''Is this a _PyEval_EvalFrameDefault frame?''' - if self._gdbframe.name() == EVALFRAME: - ''' - I believe we also need to filter on the inline - struct frame_id.inline_depth, only regarding frames with - an inline depth of 0 as actually being this function - - So we reject those with type gdb.INLINE_FRAME - ''' - if self._gdbframe.type() == gdb.NORMAL_FRAME: - # We have a _PyEval_EvalFrameDefault frame: - return True - - return False - - def is_other_python_frame(self): - '''Is this frame worth displaying in python backtraces? 
-        Examples:
-          - waiting on the GIL
-          - garbage-collecting
-          - within a CFunction
-         If it is, return a descriptive string
-         For other frames, return False
-        '''
-        if self.is_waiting_for_gil():
-            return 'Waiting for the GIL'
-
-        if self.is_gc_collect():
-            return 'Garbage-collecting'
-
-        # Detect invocations of PyCFunction instances:
-        frame = self._gdbframe
-        caller = frame.name()
-        if not caller:
-            return False
-
-        if caller in ('_PyCFunction_FastCallDict',
-                      '_PyCFunction_FastCallKeywords'):
-            arg_name = 'func'
-            # Within that frame:
-            #   "func" is the local containing the PyObject* of the
-            #   PyCFunctionObject instance
-            #   "f" is the same value, but cast to (PyCFunctionObject*)
-            #   "self" is the (PyObject*) of the 'self'
-            try:
-                # Use the prettyprinter for the func:
-                func = frame.read_var(arg_name)
-                return str(func)
-            except RuntimeError:
-                return 'PyCFunction invocation (unable to read %s)' % arg_name
-
-        if caller == 'wrapper_call':
-            try:
-                func = frame.read_var('wp')
-                return str(func)
-            except RuntimeError:
-                return '<wrapper_call invocation>'
-
-        # This frame isn't worth reporting:
-        return False
-
-    def is_waiting_for_gil(self):
-        '''Is this frame waiting on the GIL?'''
-        # This assumes the _POSIX_THREADS version of Python/ceval_gil.h:
-        name = self._gdbframe.name()
-        if name:
-            return 'pthread_cond_timedwait' in name
-
-    def is_gc_collect(self):
-        '''Is this frame "collect" within the garbage-collector?'''
-        return self._gdbframe.name() == 'collect'
-
-    def get_pyop(self):
-        try:
-            f = self._gdbframe.read_var('f')
-            frame = PyFrameObjectPtr.from_pyobject_ptr(f)
-            if not frame.is_optimized_out():
-                return frame
-            # gdb is unable to get the "f" argument of PyEval_EvalFrameEx()
-            # because it was "optimized out". Try to get "f" from the frame
-            # of the caller, PyEval_EvalCodeEx().
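-            # (note: on Python 3.6+ the eval frame function is
-            # _PyEval_EvalFrameDefault; see EVALFRAME above)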
- orig_frame = frame - caller = self._gdbframe.older() - if caller: - f = caller.read_var('f') - frame = PyFrameObjectPtr.from_pyobject_ptr(f) - if not frame.is_optimized_out(): - return frame - return orig_frame - except ValueError: - return None - - @classmethod - def get_selected_frame(cls): - _gdbframe = gdb.selected_frame() - if _gdbframe: - return Frame(_gdbframe) - return None - - @classmethod - def get_selected_python_frame(cls): - '''Try to obtain the Frame for the python-related code in the selected - frame, or None''' - try: - frame = cls.get_selected_frame() - except gdb.error: - # No frame: Python didn't start yet - return None - - while frame: - if frame.is_python_frame(): - return frame - frame = frame.older() - - # Not found: - return None - - @classmethod - def get_selected_bytecode_frame(cls): - '''Try to obtain the Frame for the python bytecode interpreter in the - selected GDB frame, or None''' - frame = cls.get_selected_frame() - - while frame: - if frame.is_evalframe(): - return frame - frame = frame.older() - - # Not found: - return None - - def print_summary(self): - if self.is_evalframe(): - pyop = self.get_pyop() - if pyop: - line = pyop.get_truncated_repr(MAX_OUTPUT_LEN) - write_unicode(sys.stdout, '#%i %s\n' % (self.get_index(), line)) - if not pyop.is_optimized_out(): - line = pyop.current_line() - if line is not None: - sys.stdout.write(' %s\n' % line.strip()) - else: - sys.stdout.write('#%i (unable to read python frame information)\n' % self.get_index()) - else: - info = self.is_other_python_frame() - if info: - sys.stdout.write('#%i %s\n' % (self.get_index(), info)) - else: - sys.stdout.write('#%i\n' % self.get_index()) - - def print_traceback(self): - if self.is_evalframe(): - pyop = self.get_pyop() - if pyop: - pyop.print_traceback() - if not pyop.is_optimized_out(): - line = pyop.current_line() - if line is not None: - sys.stdout.write(' %s\n' % line.strip()) - else: - sys.stdout.write(' (unable to read python frame information)\n') - else: - info = self.is_other_python_frame() - if info: - sys.stdout.write(' %s\n' % info) - else: - sys.stdout.write(' (not a python frame)\n') - -class PyList(gdb.Command): - '''List the current Python source code, if any - - Use - py-list START - to list at a different line number within the python source. - - Use - py-list START, END - to list a specific range of lines within the python source. 
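-
-    For example, "py-list 301,310" would list lines 301 through 310 of the
-    current python source file (a hypothetical usage; both bounds are
-    included, since START and END form a closed interval).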
- ''' - - def __init__(self): - gdb.Command.__init__ (self, - "py-list", - gdb.COMMAND_FILES, - gdb.COMPLETE_NONE) - - - def invoke(self, args, from_tty): - import re - - start = None - end = None - - m = re.match(r'\s*(\d+)\s*', args) - if m: - start = int(m.group(0)) - end = start + 10 - - m = re.match(r'\s*(\d+)\s*,\s*(\d+)\s*', args) - if m: - start, end = map(int, m.groups()) - - # py-list requires an actual PyEval_EvalFrameEx frame: - frame = Frame.get_selected_bytecode_frame() - if not frame: - print('Unable to locate gdb frame for python bytecode interpreter') - return - - pyop = frame.get_pyop() - if not pyop or pyop.is_optimized_out(): - print('Unable to read information on python frame') - return - - filename = pyop.filename() - lineno = pyop.current_line_num() - - if start is None: - start = lineno - 5 - end = lineno + 5 - - if start<1: - start = 1 - - try: - f = open(os_fsencode(filename), 'r') - except IOError as err: - sys.stdout.write('Unable to open %s: %s\n' - % (filename, err)) - return - with f: - all_lines = f.readlines() - # start and end are 1-based, all_lines is 0-based; - # so [start-1:end] as a python slice gives us [start, end] as a - # closed interval - for i, line in enumerate(all_lines[start-1:end]): - linestr = str(i+start) - # Highlight current line: - if i + start == lineno: - linestr = '>' + linestr - sys.stdout.write('%4s %s' % (linestr, line)) - - -# ...and register the command: -PyList() - -def move_in_stack(move_up): - '''Move up or down the stack (for the py-up/py-down command)''' - frame = Frame.get_selected_python_frame() - if not frame: - print('Unable to locate python frame') - return - - while frame: - if move_up: - iter_frame = frame.older() - else: - iter_frame = frame.newer() - - if not iter_frame: - break - - if iter_frame.is_python_frame(): - # Result: - if iter_frame.select(): - iter_frame.print_summary() - return - - frame = iter_frame - - if move_up: - print('Unable to find an older python frame') - else: - print('Unable to find a newer python frame') - -class PyUp(gdb.Command): - 'Select and print the python stack frame that called this one (if any)' - def __init__(self): - gdb.Command.__init__ (self, - "py-up", - gdb.COMMAND_STACK, - gdb.COMPLETE_NONE) - - - def invoke(self, args, from_tty): - move_in_stack(move_up=True) - -class PyDown(gdb.Command): - 'Select and print the python stack frame called by this one (if any)' - def __init__(self): - gdb.Command.__init__ (self, - "py-down", - gdb.COMMAND_STACK, - gdb.COMPLETE_NONE) - - - def invoke(self, args, from_tty): - move_in_stack(move_up=False) - -# Not all builds of gdb have gdb.Frame.select -if hasattr(gdb.Frame, 'select'): - PyUp() - PyDown() - -class PyBacktraceFull(gdb.Command): - 'Display the current python frame and all the frames within its call stack (if any)' - def __init__(self): - gdb.Command.__init__ (self, - "py-bt-full", - gdb.COMMAND_STACK, - gdb.COMPLETE_NONE) - - - def invoke(self, args, from_tty): - frame = Frame.get_selected_python_frame() - if not frame: - print('Unable to locate python frame') - return - - while frame: - if frame.is_python_frame(): - frame.print_summary() - frame = frame.older() - -PyBacktraceFull() - -class PyBacktrace(gdb.Command): - 'Display the current python frame and all the frames within its call stack (if any)' - def __init__(self): - gdb.Command.__init__ (self, - "py-bt", - gdb.COMMAND_STACK, - gdb.COMPLETE_NONE) - - - def invoke(self, args, from_tty): - frame = Frame.get_selected_python_frame() - if not frame: - print('Unable to locate 
python frame')
-            return
-
-        sys.stdout.write('Traceback (most recent call first):\n')
-        while frame:
-            if frame.is_python_frame():
-                frame.print_traceback()
-            frame = frame.older()
-
-PyBacktrace()
-
-class PyPrint(gdb.Command):
-    'Look up the given python variable name, and print it'
-    def __init__(self):
-        gdb.Command.__init__ (self,
-                              "py-print",
-                              gdb.COMMAND_DATA,
-                              gdb.COMPLETE_NONE)
-
-
-    def invoke(self, args, from_tty):
-        name = str(args)
-
-        frame = Frame.get_selected_python_frame()
-        if not frame:
-            print('Unable to locate python frame')
-            return
-
-        pyop_frame = frame.get_pyop()
-        if not pyop_frame:
-            print('Unable to read information on python frame')
-            return
-
-        pyop_var, scope = pyop_frame.get_var_by_name(name)
-
-        if pyop_var:
-            print('%s %r = %s'
-                   % (scope,
-                      name,
-                      pyop_var.get_truncated_repr(MAX_OUTPUT_LEN)))
-        else:
-            print('%r not found' % name)
-
-PyPrint()
-
-class PyLocals(gdb.Command):
-    'Print the local variables of the currently selected Python frame'
-    def __init__(self, command="py-locals"):
-        gdb.Command.__init__ (self,
-                              command,
-                              gdb.COMMAND_DATA,
-                              gdb.COMPLETE_NONE)
-
-
-    def invoke(self, args, from_tty):
-        name = str(args)
-
-        frame = Frame.get_selected_python_frame()
-        if not frame:
-            print('Unable to locate python frame')
-            return
-
-        pyop_frame = frame.get_pyop()
-        if not pyop_frame:
-            print('Unable to read information on python frame')
-            return
-
-        namespace = self.get_namespace(pyop_frame)
-        namespace = [(name.proxyval(set()), val) for name, val in namespace]
-
-        if namespace:
-            name, val = max(namespace, key=lambda item: len(item[0]))
-            max_name_length = len(name)
-
-            for name, pyop_value in namespace:
-                value = pyop_value.get_truncated_repr(MAX_OUTPUT_LEN)
-                print('%-*s = %s' % (max_name_length, name, value))
-
-    def get_namespace(self, pyop_frame):
-        return pyop_frame.iter_locals()
-
-PyLocals()
-
-
-##################################################################
-## added, not in CPython
-##################################################################
-
-import re
-import warnings
-import tempfile
-import textwrap
-import itertools
-
-class PyGlobals(PyLocals):
-    'List all the globals in the currently selected Python frame'
-
-    def get_namespace(self, pyop_frame):
-        return pyop_frame.iter_globals()
-
-
-PyGlobals("py-globals")
-
-
-class PyNameEquals(gdb.Function):
-
-    def _get_pycurframe_attr(self, attr):
-        frame = Frame(gdb.selected_frame())
-        if frame.is_evalframe():
-            pyframe = frame.get_pyop()
-            if pyframe is None:
-                warnings.warn("Use a Python debug build, Python breakpoints "
-                              "won't work otherwise.")
-                return None
-
-            return getattr(pyframe, attr).proxyval(set())
-
-        return None
-
-    def invoke(self, funcname):
-        attr = self._get_pycurframe_attr('co_name')
-        return attr is not None and attr == funcname.string()
-
-PyNameEquals("pyname_equals")
-
-
-class PyModEquals(PyNameEquals):
-
-    def invoke(self, modname):
-        attr = self._get_pycurframe_attr('co_filename')
-        if attr is not None:
-            filename, ext = os.path.splitext(os.path.basename(attr))
-            return filename == modname.string()
-        return False
-
-PyModEquals("pymod_equals")
-
-
-class PyBreak(gdb.Command):
-    """
-    Set a Python breakpoint. Examples:
-
-    Break on any function or method named 'func' in module 'modname'
-
-        py-break modname.func
-
-    Break on any function or method named 'func'
-
-        py-break func
-    """
-
-    def invoke(self, funcname, from_tty):
-        if '.' 
in funcname: - modname, dot, funcname = funcname.rpartition('.') - cond = '$pyname_equals("%s") && $pymod_equals("%s")' % (funcname, - modname) - else: - cond = '$pyname_equals("%s")' % funcname - - gdb.execute('break PyEval_EvalFrameEx if ' + cond) - -PyBreak("py-break", gdb.COMMAND_RUNNING, gdb.COMPLETE_NONE) - - -class _LoggingState(object): - """ - State that helps to provide a reentrant gdb.execute() function. - """ - - def __init__(self): - f = tempfile.NamedTemporaryFile('r+') - self.file = f - self.filename = f.name - self.fd = f.fileno() - _execute("set logging file %s" % self.filename) - self.file_position_stack = [] - - def __enter__(self): - if not self.file_position_stack: - _execute("set logging redirect on") - _execute("set logging on") - _execute("set pagination off") - - self.file_position_stack.append(os.fstat(self.fd).st_size) - return self - - def getoutput(self): - gdb.flush() - self.file.seek(self.file_position_stack[-1]) - result = self.file.read() - return result - - def __exit__(self, exc_type, exc_val, tb): - startpos = self.file_position_stack.pop() - self.file.seek(startpos) - self.file.truncate() - if not self.file_position_stack: - _execute("set logging off") - _execute("set logging redirect off") - _execute("set pagination on") - - -def execute(command, from_tty=False, to_string=False): - """ - Replace gdb.execute() with this function and have it accept a 'to_string' - argument (new in 7.2). Have it properly capture stderr also. Ensure - reentrancy. - """ - if to_string: - with _logging_state as state: - _execute(command, from_tty) - return state.getoutput() - else: - _execute(command, from_tty) - - -_execute = gdb.execute -gdb.execute = execute -_logging_state = _LoggingState() - - -def get_selected_inferior(): - """ - Return the selected inferior in gdb. - """ - # Woooh, another bug in gdb! Is there an end in sight? - # http://sourceware.org/bugzilla/show_bug.cgi?id=12212 - return gdb.inferiors()[0] - - selected_thread = gdb.selected_thread() - - for inferior in gdb.inferiors(): - for thread in inferior.threads(): - if thread == selected_thread: - return inferior - - -def source_gdb_script(script_contents, to_string=False): - """ - Source a gdb script with script_contents passed as a string. This is useful - to provide defines for py-step and py-next to make them repeatable (this is - not possible with gdb.execute()). See - http://sourceware.org/bugzilla/show_bug.cgi?id=12216 - """ - fd, filename = tempfile.mkstemp() - f = os.fdopen(fd, 'w') - f.write(script_contents) - f.close() - gdb.execute("source %s" % filename, to_string=to_string) - os.remove(filename) - - -def register_defines(): - source_gdb_script(textwrap.dedent("""\ - define py-step - -py-step - end - - define py-next - -py-next - end - - document py-step - %s - end - - document py-next - %s - end - """) % (PyStep.__doc__, PyNext.__doc__)) - - -def stackdepth(frame): - "Tells the stackdepth of a gdb frame." - depth = 0 - while frame: - frame = frame.older() - depth += 1 - - return depth - - -class ExecutionControlCommandBase(gdb.Command): - """ - Superclass for language specific execution control. Language specific - features should be implemented by lang_info using the LanguageInfo - interface. 'name' is the name of the command. 
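-
-    Concrete commands are expected to bind one of the methods below as their
-    gdb "invoke" entry point, as PyFinish, PyRun and PyCont do further down.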
- """ - - def __init__(self, name, lang_info): - super(ExecutionControlCommandBase, self).__init__( - name, gdb.COMMAND_RUNNING, gdb.COMPLETE_NONE) - self.lang_info = lang_info - - def install_breakpoints(self): - all_locations = itertools.chain( - self.lang_info.static_break_functions(), - self.lang_info.runtime_break_functions()) - - for location in all_locations: - result = gdb.execute('break %s' % location, to_string=True) - yield re.search(r'Breakpoint (\d+)', result).group(1) - - def delete_breakpoints(self, breakpoint_list): - for bp in breakpoint_list: - gdb.execute("delete %s" % bp) - - def filter_output(self, result): - reflags = re.MULTILINE - - output_on_halt = [ - (r'^Program received signal .*', reflags|re.DOTALL), - (r'.*[Ww]arning.*', 0), - (r'^Program exited .*', reflags), - ] - - output_always = [ - # output when halting on a watchpoint - (r'^(Old|New) value = .*', reflags), - # output from the 'display' command - (r'^\d+: \w+ = .*', reflags), - ] - - def filter_output(regexes): - output = [] - for regex, flags in regexes: - for match in re.finditer(regex, result, flags): - output.append(match.group(0)) - - return '\n'.join(output) - - # Filter the return value output of the 'finish' command - match_finish = re.search(r'^Value returned is \$\d+ = (.*)', result, - re.MULTILINE) - if match_finish: - finish_output = 'Value returned: %s\n' % match_finish.group(1) - else: - finish_output = '' - - return (filter_output(output_on_halt), - finish_output + filter_output(output_always)) - - def stopped(self): - return get_selected_inferior().pid == 0 - - def finish_executing(self, result): - """ - After doing some kind of code running in the inferior, print the line - of source code or the result of the last executed gdb command (passed - in as the `result` argument). - """ - output_on_halt, output_always = self.filter_output(result) - - if self.stopped(): - print(output_always) - print(output_on_halt) - else: - frame = gdb.selected_frame() - source_line = self.lang_info.get_source_line(frame) - if self.lang_info.is_relevant_function(frame): - raised_exception = self.lang_info.exc_info(frame) - if raised_exception: - print(raised_exception) - - if source_line: - if output_always.rstrip(): - print(output_always.rstrip()) - print(source_line) - else: - print(result) - - def _finish(self): - """ - Execute until the function returns (or until something else makes it - stop) - """ - if gdb.selected_frame().older() is not None: - return gdb.execute('finish', to_string=True) - else: - # outermost frame, continue - return gdb.execute('cont', to_string=True) - - def _finish_frame(self): - """ - Execute until the function returns to a relevant caller. - """ - while True: - result = self._finish() - - try: - frame = gdb.selected_frame() - except RuntimeError: - break - - hitbp = re.search(r'Breakpoint (\d+)', result) - is_relevant = self.lang_info.is_relevant_function(frame) - if hitbp or is_relevant or self.stopped(): - break - - return result - - def finish(self, *args): - "Implements the finish command." - result = self._finish_frame() - self.finish_executing(result) - - def step(self, stepinto, stepover_command='next'): - """ - Do a single step or step-over. Returns the result of the last gdb - command that made execution stop. - - This implementation, for stepping, sets (conditional) breakpoints for - all functions that are deemed relevant. It then does a step over until - either something halts execution, or until the next line is reached. 
- - If, however, stepover_command is given, it should be a string gdb - command that continues execution in some way. The idea is that the - caller has set a (conditional) breakpoint or watchpoint that can work - more efficiently than the step-over loop. For Python this means setting - a watchpoint for f->f_lasti, which means we can then subsequently - "finish" frames. - We want f->f_lasti instead of f->f_lineno, because the latter only - works properly with local trace functions, see - PyFrameObjectPtr.current_line_num and PyFrameObjectPtr.addr2line. - """ - if stepinto: - breakpoint_list = list(self.install_breakpoints()) - - beginframe = gdb.selected_frame() - - if self.lang_info.is_relevant_function(beginframe): - # If we start in a relevant frame, initialize stuff properly. If - # we don't start in a relevant frame, the loop will halt - # immediately. So don't call self.lang_info.lineno() as it may - # raise for irrelevant frames. - beginline = self.lang_info.lineno(beginframe) - - if not stepinto: - depth = stackdepth(beginframe) - - newframe = beginframe - - while True: - if self.lang_info.is_relevant_function(newframe): - result = gdb.execute(stepover_command, to_string=True) - else: - result = self._finish_frame() - - if self.stopped(): - break - - newframe = gdb.selected_frame() - is_relevant_function = self.lang_info.is_relevant_function(newframe) - try: - framename = newframe.name() - except RuntimeError: - framename = None - - m = re.search(r'Breakpoint (\d+)', result) - if m: - if is_relevant_function and m.group(1) in breakpoint_list: - # although we hit a breakpoint, we still need to check - # that the function, in case hit by a runtime breakpoint, - # is in the right context - break - - if newframe != beginframe: - # new function - - if not stepinto: - # see if we returned to the caller - newdepth = stackdepth(newframe) - is_relevant_function = (newdepth < depth and - is_relevant_function) - - if is_relevant_function: - break - else: - # newframe equals beginframe, check for a difference in the - # line number - lineno = self.lang_info.lineno(newframe) - if lineno and lineno != beginline: - break - - if stepinto: - self.delete_breakpoints(breakpoint_list) - - self.finish_executing(result) - - def run(self, args, from_tty): - self.finish_executing(gdb.execute('run ' + args, to_string=True)) - - def cont(self, *args): - self.finish_executing(gdb.execute('cont', to_string=True)) - - -class LanguageInfo(object): - """ - This class defines the interface that ExecutionControlCommandBase needs to - provide language-specific execution control. - - Classes that implement this interface should implement: - - lineno(frame) - Tells the current line number (only called for a relevant frame). - If lineno is a false value it is not checked for a difference. - - is_relevant_function(frame) - tells whether we care about frame 'frame' - - get_source_line(frame) - get the line of source code for the current line (only called for a - relevant frame). If the source code cannot be retrieved this - function should return None - - exc_info(frame) -- optional - tells whether an exception was raised, if so, it should return a - string representation of the exception value, None otherwise. - - static_break_functions() - returns an iterable of function names that are considered relevant - and should halt step-into execution. 
This is needed to provide a - performing step-into - - runtime_break_functions() -- optional - list of functions that we should break into depending on the - context - """ - - def exc_info(self, frame): - "See this class' docstring." - - def runtime_break_functions(self): - """ - Implement this if the list of step-into functions depends on the - context. - """ - return () - - -class PythonInfo(LanguageInfo): - - def pyframe(self, frame): - pyframe = Frame(frame).get_pyop() - if pyframe: - return pyframe - else: - raise gdb.RuntimeError( - "Unable to find the Python frame, run your code with a debug " - "build (configure with --with-pydebug or compile with -g).") - - def lineno(self, frame): - return self.pyframe(frame).current_line_num() - - def is_relevant_function(self, frame): - return Frame(frame).is_evalframeex() - - def get_source_line(self, frame): - try: - pyframe = self.pyframe(frame) - return '%4d %s' % (pyframe.current_line_num(), - pyframe.current_line().rstrip()) - except IOError: - return None - - def exc_info(self, frame): - try: - tstate = frame.read_var('tstate').dereference() - if gdb.parse_and_eval('tstate->frame == f'): - # tstate local variable initialized, check for an exception - inf_type = tstate['curexc_type'] - inf_value = tstate['curexc_value'] - - if inf_type: - return 'An exception was raised: %s' % (inf_value,) - except (ValueError, RuntimeError): - # Could not read the variable tstate or it's memory, it's ok - pass - - def static_break_functions(self): - yield 'PyEval_EvalFrameEx' - - -class PythonStepperMixin(object): - """ - Make this a mixin so CyStep can also inherit from this and use a - CythonCodeStepper at the same time. - """ - - def python_step(self, stepinto): - """ - Set a watchpoint on the Python bytecode instruction pointer and try - to finish the frame - """ - output = gdb.execute('watch f->f_lasti', to_string=True) - watchpoint = int(re.search(r'[Ww]atchpoint (\d+):', output).group(1)) - self.step(stepinto=stepinto, stepover_command='finish') - gdb.execute('delete %s' % watchpoint) - - -class PyStep(ExecutionControlCommandBase, PythonStepperMixin): - "Step through Python code." - - stepinto = True - - def invoke(self, args, from_tty): - self.python_step(stepinto=self.stepinto) - - -class PyNext(PyStep): - "Step-over Python code." - - stepinto = False - - -class PyFinish(ExecutionControlCommandBase): - "Execute until function returns to a caller." - - invoke = ExecutionControlCommandBase.finish - - -class PyRun(ExecutionControlCommandBase): - "Run the program." - - invoke = ExecutionControlCommandBase.run - - -class PyCont(ExecutionControlCommandBase): - - invoke = ExecutionControlCommandBase.cont - - -def _pointervalue(gdbval): - """ - Return the value of the pointer as a Python int. 
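PythonInfo above is the real implementation of the LanguageInfo contract; for contrast, a minimal sketch of an implementation that treats a single C function as relevant might look like this ('main' is only an illustrative choice, and find_sal() is the stock gdb frame API for line lookup):

class CMainInfo(LanguageInfo):
    def lineno(self, frame):
        return frame.find_sal().line

    def is_relevant_function(self, frame):
        return frame.name() == 'main'

    def get_source_line(self, frame):
        return None  # fall back to gdb's own output

    def static_break_functions(self):
        yield 'main'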
- - gdbval.type must be a pointer type - """ - # don't convert with int() as it will raise a RuntimeError - if gdbval.address is not None: - return int(gdbval.address) - else: - # the address attribute is None sometimes, in which case we can - # still convert the pointer to an int - return int(gdbval) - - -def pointervalue(gdbval): - pointer = _pointervalue(gdbval) - try: - if pointer < 0: - raise gdb.GdbError("Negative pointer value, presumably a bug " - "in gdb, aborting.") - except RuntimeError: - # work around yet another bug in gdb where you get random behaviour - # and tracebacks - pass - - return pointer - - -def get_inferior_unicode_postfix(): - try: - gdb.parse_and_eval('PyUnicode_FromEncodedObject') - except RuntimeError: - try: - gdb.parse_and_eval('PyUnicodeUCS2_FromEncodedObject') - except RuntimeError: - return 'UCS4' - else: - return 'UCS2' - else: - return '' - - -class PythonCodeExecutor(object): - - Py_single_input = 256 - Py_file_input = 257 - Py_eval_input = 258 - - def malloc(self, size): - chunk = (gdb.parse_and_eval("(void *) malloc((size_t) %d)" % size)) - - pointer = pointervalue(chunk) - if pointer == 0: - raise gdb.GdbError("No memory could be allocated in the inferior.") - - return pointer - - def alloc_string(self, string): - pointer = self.malloc(len(string)) - get_selected_inferior().write_memory(pointer, string) - - return pointer - - def alloc_pystring(self, string): - stringp = self.alloc_string(string) - PyString_FromStringAndSize = 'PyString_FromStringAndSize' - - try: - gdb.parse_and_eval(PyString_FromStringAndSize) - except RuntimeError: - # Python 3 - PyString_FromStringAndSize = ('PyUnicode%s_FromStringAndSize' % - (get_inferior_unicode_postfix(),)) - - try: - result = gdb.parse_and_eval( - '(PyObject *) %s((char *) %d, (size_t) %d)' % ( - PyString_FromStringAndSize, stringp, len(string))) - finally: - self.free(stringp) - - pointer = pointervalue(result) - if pointer == 0: - raise gdb.GdbError("Unable to allocate Python string in " - "the inferior.") - - return pointer - - def free(self, pointer): - gdb.parse_and_eval("free((void *) %d)" % pointer) - - def incref(self, pointer): - "Increment the reference count of a Python object in the inferior." - gdb.parse_and_eval('Py_IncRef((PyObject *) %d)' % pointer) - - def xdecref(self, pointer): - "Decrement the reference count of a Python object in the inferior." - # Py_DecRef is like Py_XDECREF, but a function. So we don't have - # to check for NULL. This should also decref all our allocated - # Python strings. - gdb.parse_and_eval('Py_DecRef((PyObject *) %d)' % pointer) - - def evalcode(self, code, input_type, global_dict=None, local_dict=None): - """ - Evaluate python code `code` given as a string in the inferior and - return the result as a gdb.Value. Returns a new reference in the - inferior. - - Of course, executing any code in the inferior may be dangerous and may - leave the debuggee in an unsafe state or terminate it altogether. 
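Usage sketch for the allocation helpers above, assuming a gdb session attached to a live CPython inferior with libc symbols available: allocate a buffer in the debuggee, read it back through the inferior's memory, then free it.

executor = PythonCodeExecutor()
pointer = executor.alloc_string(b'hello')   # malloc() in the inferior plus write_memory()
data = get_selected_inferior().read_memory(pointer, 5)
print(bytes(data))                          # b'hello'
executor.free(pointer)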
- """ - if '\0' in code: - raise gdb.GdbError("String contains NUL byte.") - - code += '\0' - - pointer = self.alloc_string(code) - - globalsp = pointervalue(global_dict) - localsp = pointervalue(local_dict) - - if globalsp == 0 or localsp == 0: - raise gdb.GdbError("Unable to obtain or create locals or globals.") - - code = """ - PyRun_String( - (char *) %(code)d, - (int) %(start)d, - (PyObject *) %(globals)s, - (PyObject *) %(locals)d) - """ % dict(code=pointer, start=input_type, - globals=globalsp, locals=localsp) - - with FetchAndRestoreError(): - try: - pyobject_return_value = gdb.parse_and_eval(code) - finally: - self.free(pointer) - - return pyobject_return_value - - -class FetchAndRestoreError(PythonCodeExecutor): - """ - Context manager that fetches the error indicator in the inferior and - restores it on exit. - """ - - def __init__(self): - self.sizeof_PyObjectPtr = gdb.lookup_type('PyObject').pointer().sizeof - self.pointer = self.malloc(self.sizeof_PyObjectPtr * 3) - - type = self.pointer - value = self.pointer + self.sizeof_PyObjectPtr - traceback = self.pointer + self.sizeof_PyObjectPtr * 2 - - self.errstate = type, value, traceback - - def __enter__(self): - gdb.parse_and_eval("PyErr_Fetch(%d, %d, %d)" % self.errstate) - - def __exit__(self, *args): - if gdb.parse_and_eval("(int) PyErr_Occurred()"): - gdb.parse_and_eval("PyErr_Print()") - - pyerr_restore = ("PyErr_Restore(" - "(PyObject *) *%d," - "(PyObject *) *%d," - "(PyObject *) *%d)") - - try: - gdb.parse_and_eval(pyerr_restore % self.errstate) - finally: - self.free(self.pointer) - - -class FixGdbCommand(gdb.Command): - - def __init__(self, command, actual_command): - super(FixGdbCommand, self).__init__(command, gdb.COMMAND_DATA, - gdb.COMPLETE_NONE) - self.actual_command = actual_command - - def fix_gdb(self): - """ - It seems that invoking either 'cy exec' and 'py-exec' work perfectly - fine, but after this gdb's python API is entirely broken. - Maybe some uncleared exception value is still set? - sys.exc_clear() didn't help. A demonstration: - - (gdb) cy exec 'hello' - 'hello' - (gdb) python gdb.execute('cont') - RuntimeError: Cannot convert value to int. - Error while executing Python code. - (gdb) python gdb.execute('cont') - [15148 refs] - - Program exited normally. - """ - warnings.filterwarnings('ignore', r'.*', RuntimeWarning, - re.escape(__name__)) - try: - int(gdb.parse_and_eval("(void *) 0")) == 0 - except RuntimeError: - pass - # warnings.resetwarnings() - - def invoke(self, args, from_tty): - self.fix_gdb() - try: - gdb.execute('%s %s' % (self.actual_command, args)) - except RuntimeError as e: - raise gdb.GdbError(str(e)) - self.fix_gdb() - - -def _evalcode_python(executor, code, input_type): - """ - Execute Python code in the most recent stack frame. 
- """ - global_dict = gdb.parse_and_eval('PyEval_GetGlobals()') - local_dict = gdb.parse_and_eval('PyEval_GetLocals()') - - if (pointervalue(global_dict) == 0 or pointervalue(local_dict) == 0): - raise gdb.GdbError("Unable to find the locals or globals of the " - "most recent Python function (relative to the " - "selected frame).") - - return executor.evalcode(code, input_type, global_dict, local_dict) - - -class PyExec(gdb.Command): - - def readcode(self, expr): - if expr: - return expr, PythonCodeExecutor.Py_single_input - else: - lines = [] - while True: - try: - line = input('>') - except EOFError: - break - else: - if line.rstrip() == 'end': - break - - lines.append(line) - - return '\n'.join(lines), PythonCodeExecutor.Py_file_input - - def invoke(self, expr, from_tty): - expr, input_type = self.readcode(expr) - executor = PythonCodeExecutor() - executor.xdecref(_evalcode_python(executor, input_type, global_dict, local_dict)) - - -gdb.execute('set breakpoint pending on') - -if hasattr(gdb, 'GdbError'): - # Wrap py-step and py-next in gdb defines to make them repeatable. - py_step = PyStep('-py-step', PythonInfo()) - py_next = PyNext('-py-next', PythonInfo()) - register_defines() - py_finish = PyFinish('py-finish', PythonInfo()) - py_run = PyRun('py-run', PythonInfo()) - py_cont = PyCont('py-cont', PythonInfo()) - - py_exec = FixGdbCommand('py-exec', '-py-exec') - _py_exec = PyExec("-py-exec", gdb.COMMAND_DATA, gdb.COMPLETE_NONE) -else: - warnings.warn("Use gdb 7.2 or higher to use the py-exec command.") diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v4/tests/test_data.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v4/tests/test_data.py deleted file mode 100644 index 8eae11c868c6bc0ba14edb9cc7bae6d588f1d5aa..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v4/tests/test_data.py +++ /dev/null @@ -1,33 +0,0 @@ -import os - -import pandas as pd -import pytest - -from .. import data as alt - - -@pytest.fixture -def sample_data(): - return pd.DataFrame({"x": range(10), "y": range(10)}) - - -def test_disable_max_rows(sample_data): - with alt.data_transformers.enable("default", max_rows=5): - # Ensure max rows error is raised. - with pytest.raises(alt.MaxRowsError): - alt.data_transformers.get()(sample_data) - - # Ensure that max rows error is properly disabled. - with alt.data_transformers.disable_max_rows(): - alt.data_transformers.get()(sample_data) - - try: - with alt.data_transformers.enable("json"): - # Ensure that there is no TypeError for non-max_rows transformers. - with alt.data_transformers.disable_max_rows(): - jsonfile = alt.data_transformers.get()(sample_data) - except TypeError: - jsonfile = {} - finally: - if jsonfile: - os.remove(jsonfile["url"]) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/vengine_cpy.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/vengine_cpy.py deleted file mode 100644 index 6de0df0ea4e1a98e65964ab61588df9abf536bac..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/vengine_cpy.py +++ /dev/null @@ -1,1076 +0,0 @@ -# -# DEPRECATED: implementation for ffi.verify() -# -import sys, imp -from . 
import model -from .error import VerificationError - - -class VCPythonEngine(object): - _class_key = 'x' - _gen_python_module = True - - def __init__(self, verifier): - self.verifier = verifier - self.ffi = verifier.ffi - self._struct_pending_verification = {} - self._types_of_builtin_functions = {} - - def patch_extension_kwds(self, kwds): - pass - - def find_module(self, module_name, path, so_suffixes): - try: - f, filename, descr = imp.find_module(module_name, path) - except ImportError: - return None - if f is not None: - f.close() - # Note that after a setuptools installation, there are both .py - # and .so files with the same basename. The code here relies on - # imp.find_module() locating the .so in priority. - if descr[0] not in so_suffixes: - return None - return filename - - def collect_types(self): - self._typesdict = {} - self._generate("collecttype") - - def _prnt(self, what=''): - self._f.write(what + '\n') - - def _gettypenum(self, type): - # a KeyError here is a bug. please report it! :-) - return self._typesdict[type] - - def _do_collect_type(self, tp): - if ((not isinstance(tp, model.PrimitiveType) - or tp.name == 'long double') - and tp not in self._typesdict): - num = len(self._typesdict) - self._typesdict[tp] = num - - def write_source_to_f(self): - self.collect_types() - # - # The new module will have a _cffi_setup() function that receives - # objects from the ffi world, and that calls some setup code in - # the module. This setup code is split in several independent - # functions, e.g. one per constant. The functions are "chained" - # by ending in a tail call to each other. - # - # This is further split in two chained lists, depending on if we - # can do it at import-time or if we must wait for _cffi_setup() to - # provide us with the objects. This is needed because we - # need the values of the enum constants in order to build the - # that we may have to pass to _cffi_setup(). - # - # The following two 'chained_list_constants' items contains - # the head of these two chained lists, as a string that gives the - # call to do, if any. - self._chained_list_constants = ['((void)lib,0)', '((void)lib,0)'] - # - prnt = self._prnt - # first paste some standard set of lines that are mostly '#define' - prnt(cffimod_header) - prnt() - # then paste the C source given by the user, verbatim. - prnt(self.verifier.preamble) - prnt() - # - # call generate_cpy_xxx_decl(), for every xxx found from - # ffi._parser._declarations. This generates all the functions. - self._generate("decl") - # - # implement the function _cffi_setup_custom() as calling the - # head of the chained list. - self._generate_setup_custom() - prnt() - # - # produce the method table, including the entries for the - # generated Python->C function wrappers, which are done - # by generate_cpy_function_method(). - prnt('static PyMethodDef _cffi_methods[] = {') - self._generate("method") - prnt(' {"_cffi_setup", _cffi_setup, METH_VARARGS, NULL},') - prnt(' {NULL, NULL, 0, NULL} /* Sentinel */') - prnt('};') - prnt() - # - # standard init. 
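For orientation: this engine backs cffi's old, long-deprecated ffi.verify() API, which compiled a real CPython extension module out of the C source generated below. A sketch of what a caller looked like (the function is illustrative):

import cffi

ffi = cffi.FFI()
ffi.cdef("int add(int, int);")
lib = ffi.verify("int add(int a, int b) { return a + b; }")
assert lib.add(2, 3) == 5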
- modname = self.verifier.get_module_name() - constants = self._chained_list_constants[False] - prnt('#if PY_MAJOR_VERSION >= 3') - prnt() - prnt('static struct PyModuleDef _cffi_module_def = {') - prnt(' PyModuleDef_HEAD_INIT,') - prnt(' "%s",' % modname) - prnt(' NULL,') - prnt(' -1,') - prnt(' _cffi_methods,') - prnt(' NULL, NULL, NULL, NULL') - prnt('};') - prnt() - prnt('PyMODINIT_FUNC') - prnt('PyInit_%s(void)' % modname) - prnt('{') - prnt(' PyObject *lib;') - prnt(' lib = PyModule_Create(&_cffi_module_def);') - prnt(' if (lib == NULL)') - prnt(' return NULL;') - prnt(' if (%s < 0 || _cffi_init() < 0) {' % (constants,)) - prnt(' Py_DECREF(lib);') - prnt(' return NULL;') - prnt(' }') - prnt(' return lib;') - prnt('}') - prnt() - prnt('#else') - prnt() - prnt('PyMODINIT_FUNC') - prnt('init%s(void)' % modname) - prnt('{') - prnt(' PyObject *lib;') - prnt(' lib = Py_InitModule("%s", _cffi_methods);' % modname) - prnt(' if (lib == NULL)') - prnt(' return;') - prnt(' if (%s < 0 || _cffi_init() < 0)' % (constants,)) - prnt(' return;') - prnt(' return;') - prnt('}') - prnt() - prnt('#endif') - - def load_library(self, flags=None): - # XXX review all usages of 'self' here! - # import it as a new extension module - imp.acquire_lock() - try: - if hasattr(sys, "getdlopenflags"): - previous_flags = sys.getdlopenflags() - try: - if hasattr(sys, "setdlopenflags") and flags is not None: - sys.setdlopenflags(flags) - module = imp.load_dynamic(self.verifier.get_module_name(), - self.verifier.modulefilename) - except ImportError as e: - error = "importing %r: %s" % (self.verifier.modulefilename, e) - raise VerificationError(error) - finally: - if hasattr(sys, "setdlopenflags"): - sys.setdlopenflags(previous_flags) - finally: - imp.release_lock() - # - # call loading_cpy_struct() to get the struct layout inferred by - # the C compiler - self._load(module, 'loading') - # - # the C code will need the objects. Collect them in - # order in a list. - revmapping = dict([(value, key) - for (key, value) in self._typesdict.items()]) - lst = [revmapping[i] for i in range(len(revmapping))] - lst = list(map(self.ffi._get_cached_btype, lst)) - # - # build the FFILibrary class and instance and call _cffi_setup(). - # this will set up some fields like '_cffi_types', and only then - # it will invoke the chained list of functions that will really - # build (notably) the constant objects, as if they are - # pointers, and store them as attributes on the 'library' object. - class FFILibrary(object): - _cffi_python_module = module - _cffi_ffi = self.ffi - _cffi_dir = [] - def __dir__(self): - return FFILibrary._cffi_dir + list(self.__dict__) - library = FFILibrary() - if module._cffi_setup(lst, VerificationError, library): - import warnings - warnings.warn("reimporting %r might overwrite older definitions" - % (self.verifier.get_module_name())) - # - # finally, call the loaded_cpy_xxx() functions. This will perform - # the final adjustments, like copying the Python->C wrapper - # functions from the module to the 'library' object, and setting - # up the FFILibrary class with properties for the global C variables. 
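The chained-list trick described in the comments above turns every generated setup function into a link that tail-returns the previous head, so the single call embedded in the init code runs all of them. The generator side of the pattern in isolation (names are illustrative):

chained_head = '((void)lib,0)'
for name in ['FOO', 'BAR']:
    funcname = '_cffi_const_%s' % name
    print('static int %s(PyObject *lib)' % funcname)
    print('{ /* ...set lib.%s here... */ return %s; }' % (name, chained_head))
    chained_head = funcname + '(lib)'
print('/* the init code evaluates: %s */' % chained_head)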
- self._load(module, 'loaded', library=library) - module._cffi_original_ffi = self.ffi - module._cffi_types_of_builtin_funcs = self._types_of_builtin_functions - return library - - def _get_declarations(self): - lst = [(key, tp) for (key, (tp, qual)) in - self.ffi._parser._declarations.items()] - lst.sort() - return lst - - def _generate(self, step_name): - for name, tp in self._get_declarations(): - kind, realname = name.split(' ', 1) - try: - method = getattr(self, '_generate_cpy_%s_%s' % (kind, - step_name)) - except AttributeError: - raise VerificationError( - "not implemented in verify(): %r" % name) - try: - method(tp, realname) - except Exception as e: - model.attach_exception_info(e, name) - raise - - def _load(self, module, step_name, **kwds): - for name, tp in self._get_declarations(): - kind, realname = name.split(' ', 1) - method = getattr(self, '_%s_cpy_%s' % (step_name, kind)) - try: - method(tp, realname, module, **kwds) - except Exception as e: - model.attach_exception_info(e, name) - raise - - def _generate_nothing(self, tp, name): - pass - - def _loaded_noop(self, tp, name, module, **kwds): - pass - - # ---------- - - def _convert_funcarg_to_c(self, tp, fromvar, tovar, errcode): - extraarg = '' - if isinstance(tp, model.PrimitiveType): - if tp.is_integer_type() and tp.name != '_Bool': - converter = '_cffi_to_c_int' - extraarg = ', %s' % tp.name - else: - converter = '(%s)_cffi_to_c_%s' % (tp.get_c_name(''), - tp.name.replace(' ', '_')) - errvalue = '-1' - # - elif isinstance(tp, model.PointerType): - self._convert_funcarg_to_c_ptr_or_array(tp, fromvar, - tovar, errcode) - return - # - elif isinstance(tp, (model.StructOrUnion, model.EnumType)): - # a struct (not a struct pointer) as a function argument - self._prnt(' if (_cffi_to_c((char *)&%s, _cffi_type(%d), %s) < 0)' - % (tovar, self._gettypenum(tp), fromvar)) - self._prnt(' %s;' % errcode) - return - # - elif isinstance(tp, model.FunctionPtrType): - converter = '(%s)_cffi_to_c_pointer' % tp.get_c_name('') - extraarg = ', _cffi_type(%d)' % self._gettypenum(tp) - errvalue = 'NULL' - # - else: - raise NotImplementedError(tp) - # - self._prnt(' %s = %s(%s%s);' % (tovar, converter, fromvar, extraarg)) - self._prnt(' if (%s == (%s)%s && PyErr_Occurred())' % ( - tovar, tp.get_c_name(''), errvalue)) - self._prnt(' %s;' % errcode) - - def _extra_local_variables(self, tp, localvars, freelines): - if isinstance(tp, model.PointerType): - localvars.add('Py_ssize_t datasize') - localvars.add('struct _cffi_freeme_s *large_args_free = NULL') - freelines.add('if (large_args_free != NULL)' - ' _cffi_free_array_arguments(large_args_free);') - - def _convert_funcarg_to_c_ptr_or_array(self, tp, fromvar, tovar, errcode): - self._prnt(' datasize = _cffi_prepare_pointer_call_argument(') - self._prnt(' _cffi_type(%d), %s, (char **)&%s);' % ( - self._gettypenum(tp), fromvar, tovar)) - self._prnt(' if (datasize != 0) {') - self._prnt(' %s = ((size_t)datasize) <= 640 ? 
' - 'alloca((size_t)datasize) : NULL;' % (tovar,)) - self._prnt(' if (_cffi_convert_array_argument(_cffi_type(%d), %s, ' - '(char **)&%s,' % (self._gettypenum(tp), fromvar, tovar)) - self._prnt(' datasize, &large_args_free) < 0)') - self._prnt(' %s;' % errcode) - self._prnt(' }') - - def _convert_expr_from_c(self, tp, var, context): - if isinstance(tp, model.PrimitiveType): - if tp.is_integer_type() and tp.name != '_Bool': - return '_cffi_from_c_int(%s, %s)' % (var, tp.name) - elif tp.name != 'long double': - return '_cffi_from_c_%s(%s)' % (tp.name.replace(' ', '_'), var) - else: - return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( - var, self._gettypenum(tp)) - elif isinstance(tp, (model.PointerType, model.FunctionPtrType)): - return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( - var, self._gettypenum(tp)) - elif isinstance(tp, model.ArrayType): - return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( - var, self._gettypenum(model.PointerType(tp.item))) - elif isinstance(tp, model.StructOrUnion): - if tp.fldnames is None: - raise TypeError("'%s' is used as %s, but is opaque" % ( - tp._get_c_name(), context)) - return '_cffi_from_c_struct((char *)&%s, _cffi_type(%d))' % ( - var, self._gettypenum(tp)) - elif isinstance(tp, model.EnumType): - return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( - var, self._gettypenum(tp)) - else: - raise NotImplementedError(tp) - - # ---------- - # typedefs: generates no code so far - - _generate_cpy_typedef_collecttype = _generate_nothing - _generate_cpy_typedef_decl = _generate_nothing - _generate_cpy_typedef_method = _generate_nothing - _loading_cpy_typedef = _loaded_noop - _loaded_cpy_typedef = _loaded_noop - - # ---------- - # function declarations - - def _generate_cpy_function_collecttype(self, tp, name): - assert isinstance(tp, model.FunctionPtrType) - if tp.ellipsis: - self._do_collect_type(tp) - else: - # don't call _do_collect_type(tp) in this common case, - # otherwise test_autofilled_struct_as_argument fails - for type in tp.args: - self._do_collect_type(type) - self._do_collect_type(tp.result) - - def _generate_cpy_function_decl(self, tp, name): - assert isinstance(tp, model.FunctionPtrType) - if tp.ellipsis: - # cannot support vararg functions better than this: check for its - # exact type (including the fixed arguments), and build it as a - # constant function pointer (no CPython wrapper) - self._generate_cpy_const(False, name, tp) - return - prnt = self._prnt - numargs = len(tp.args) - if numargs == 0: - argname = 'noarg' - elif numargs == 1: - argname = 'arg0' - else: - argname = 'args' - prnt('static PyObject *') - prnt('_cffi_f_%s(PyObject *self, PyObject *%s)' % (name, argname)) - prnt('{') - # - context = 'argument of %s' % name - for i, type in enumerate(tp.args): - prnt(' %s;' % type.get_c_name(' x%d' % i, context)) - # - localvars = set() - freelines = set() - for type in tp.args: - self._extra_local_variables(type, localvars, freelines) - for decl in sorted(localvars): - prnt(' %s;' % (decl,)) - # - if not isinstance(tp.result, model.VoidType): - result_code = 'result = ' - context = 'result of %s' % name - prnt(' %s;' % tp.result.get_c_name(' result', context)) - prnt(' PyObject *pyresult;') - else: - result_code = '' - # - if len(tp.args) > 1: - rng = range(len(tp.args)) - for i in rng: - prnt(' PyObject *arg%d;' % i) - prnt() - prnt(' if (!PyArg_ParseTuple(args, "%s:%s", %s))' % ( - 'O' * numargs, name, ', '.join(['&arg%d' % i for i in rng]))) - prnt(' return NULL;') - prnt() - # - for i, type 
in enumerate(tp.args): - self._convert_funcarg_to_c(type, 'arg%d' % i, 'x%d' % i, - 'return NULL') - prnt() - # - prnt(' Py_BEGIN_ALLOW_THREADS') - prnt(' _cffi_restore_errno();') - prnt(' { %s%s(%s); }' % ( - result_code, name, - ', '.join(['x%d' % i for i in range(len(tp.args))]))) - prnt(' _cffi_save_errno();') - prnt(' Py_END_ALLOW_THREADS') - prnt() - # - prnt(' (void)self; /* unused */') - if numargs == 0: - prnt(' (void)noarg; /* unused */') - if result_code: - prnt(' pyresult = %s;' % - self._convert_expr_from_c(tp.result, 'result', 'result type')) - for freeline in freelines: - prnt(' ' + freeline) - prnt(' return pyresult;') - else: - for freeline in freelines: - prnt(' ' + freeline) - prnt(' Py_INCREF(Py_None);') - prnt(' return Py_None;') - prnt('}') - prnt() - - def _generate_cpy_function_method(self, tp, name): - if tp.ellipsis: - return - numargs = len(tp.args) - if numargs == 0: - meth = 'METH_NOARGS' - elif numargs == 1: - meth = 'METH_O' - else: - meth = 'METH_VARARGS' - self._prnt(' {"%s", _cffi_f_%s, %s, NULL},' % (name, name, meth)) - - _loading_cpy_function = _loaded_noop - - def _loaded_cpy_function(self, tp, name, module, library): - if tp.ellipsis: - return - func = getattr(module, name) - setattr(library, name, func) - self._types_of_builtin_functions[func] = tp - - # ---------- - # named structs - - _generate_cpy_struct_collecttype = _generate_nothing - def _generate_cpy_struct_decl(self, tp, name): - assert name == tp.name - self._generate_struct_or_union_decl(tp, 'struct', name) - def _generate_cpy_struct_method(self, tp, name): - self._generate_struct_or_union_method(tp, 'struct', name) - def _loading_cpy_struct(self, tp, name, module): - self._loading_struct_or_union(tp, 'struct', name, module) - def _loaded_cpy_struct(self, tp, name, module, **kwds): - self._loaded_struct_or_union(tp) - - _generate_cpy_union_collecttype = _generate_nothing - def _generate_cpy_union_decl(self, tp, name): - assert name == tp.name - self._generate_struct_or_union_decl(tp, 'union', name) - def _generate_cpy_union_method(self, tp, name): - self._generate_struct_or_union_method(tp, 'union', name) - def _loading_cpy_union(self, tp, name, module): - self._loading_struct_or_union(tp, 'union', name, module) - def _loaded_cpy_union(self, tp, name, module, **kwds): - self._loaded_struct_or_union(tp) - - def _generate_struct_or_union_decl(self, tp, prefix, name): - if tp.fldnames is None: - return # nothing to do with opaque structs - checkfuncname = '_cffi_check_%s_%s' % (prefix, name) - layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) - cname = ('%s %s' % (prefix, name)).strip() - # - prnt = self._prnt - prnt('static void %s(%s *p)' % (checkfuncname, cname)) - prnt('{') - prnt(' /* only to generate compile-time warnings or errors */') - prnt(' (void)p;') - for fname, ftype, fbitsize, fqual in tp.enumfields(): - if (isinstance(ftype, model.PrimitiveType) - and ftype.is_integer_type()) or fbitsize >= 0: - # accept all integers, but complain on float or double - prnt(' (void)((p->%s) << 1);' % fname) - else: - # only accept exactly the type declared. 
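The _generate()/_load() drivers above dispatch purely on method names assembled from the declaration kind and the step name; the same pattern in isolation, as a runnable sketch:

class Codegen:
    def _generate_cpy_function_decl(self, tp, name):
        print('would emit a wrapper for function', name)

    def _generate_cpy_constant_decl(self, tp, name):
        print('would emit setup code for constant', name)

    def generate(self, step_name, declarations):
        for kind, realname, tp in declarations:
            method = getattr(self, '_generate_cpy_%s_%s' % (kind, step_name))
            method(tp, realname)

Codegen().generate('decl', [('function', 'add', None), ('constant', 'FOO', None)])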
- try: - prnt(' { %s = &p->%s; (void)tmp; }' % ( - ftype.get_c_name('*tmp', 'field %r'%fname, quals=fqual), - fname)) - except VerificationError as e: - prnt(' /* %s */' % str(e)) # cannot verify it, ignore - prnt('}') - prnt('static PyObject *') - prnt('%s(PyObject *self, PyObject *noarg)' % (layoutfuncname,)) - prnt('{') - prnt(' struct _cffi_aligncheck { char x; %s y; };' % cname) - prnt(' static Py_ssize_t nums[] = {') - prnt(' sizeof(%s),' % cname) - prnt(' offsetof(struct _cffi_aligncheck, y),') - for fname, ftype, fbitsize, fqual in tp.enumfields(): - if fbitsize >= 0: - continue # xxx ignore fbitsize for now - prnt(' offsetof(%s, %s),' % (cname, fname)) - if isinstance(ftype, model.ArrayType) and ftype.length is None: - prnt(' 0, /* %s */' % ftype._get_c_name()) - else: - prnt(' sizeof(((%s *)0)->%s),' % (cname, fname)) - prnt(' -1') - prnt(' };') - prnt(' (void)self; /* unused */') - prnt(' (void)noarg; /* unused */') - prnt(' return _cffi_get_struct_layout(nums);') - prnt(' /* the next line is not executed, but compiled */') - prnt(' %s(0);' % (checkfuncname,)) - prnt('}') - prnt() - - def _generate_struct_or_union_method(self, tp, prefix, name): - if tp.fldnames is None: - return # nothing to do with opaque structs - layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) - self._prnt(' {"%s", %s, METH_NOARGS, NULL},' % (layoutfuncname, - layoutfuncname)) - - def _loading_struct_or_union(self, tp, prefix, name, module): - if tp.fldnames is None: - return # nothing to do with opaque structs - layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) - # - function = getattr(module, layoutfuncname) - layout = function() - if isinstance(tp, model.StructOrUnion) and tp.partial: - # use the function()'s sizes and offsets to guide the - # layout of the struct - totalsize = layout[0] - totalalignment = layout[1] - fieldofs = layout[2::2] - fieldsize = layout[3::2] - tp.force_flatten() - assert len(fieldofs) == len(fieldsize) == len(tp.fldnames) - tp.fixedlayout = fieldofs, fieldsize, totalsize, totalalignment - else: - cname = ('%s %s' % (prefix, name)).strip() - self._struct_pending_verification[tp] = layout, cname - - def _loaded_struct_or_union(self, tp): - if tp.fldnames is None: - return # nothing to do with opaque structs - self.ffi._get_cached_btype(tp) # force 'fixedlayout' to be considered - - if tp in self._struct_pending_verification: - # check that the layout sizes and offsets match the real ones - def check(realvalue, expectedvalue, msg): - if realvalue != expectedvalue: - raise VerificationError( - "%s (we have %d, but C compiler says %d)" - % (msg, expectedvalue, realvalue)) - ffi = self.ffi - BStruct = ffi._get_cached_btype(tp) - layout, cname = self._struct_pending_verification.pop(tp) - check(layout[0], ffi.sizeof(BStruct), "wrong total size") - check(layout[1], ffi.alignof(BStruct), "wrong total alignment") - i = 2 - for fname, ftype, fbitsize, fqual in tp.enumfields(): - if fbitsize >= 0: - continue # xxx ignore fbitsize for now - check(layout[i], ffi.offsetof(BStruct, fname), - "wrong offset for field %r" % (fname,)) - if layout[i+1] != 0: - BField = ffi._get_cached_btype(ftype) - check(layout[i+1], ffi.sizeof(BField), - "wrong size for field %r" % (fname,)) - i += 2 - assert i == len(layout) - - # ---------- - # 'anonymous' declarations. These are produced for anonymous structs - # or unions; the 'name' is obtained by a typedef. 
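The layout list returned by the generated _cffi_layout_* function is decoded in _loading_struct_or_union above with strided slices: two totals followed by one (offset, size) pair per field. With made-up numbers for a struct holding two ints:

layout = [8, 4, 0, 4, 4, 4]   # total size, alignment, then (offset, size) per field
totalsize, totalalignment = layout[0], layout[1]
fieldofs, fieldsize = layout[2::2], layout[3::2]
assert fieldofs == [0, 4] and fieldsize == [4, 4]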
- - _generate_cpy_anonymous_collecttype = _generate_nothing - - def _generate_cpy_anonymous_decl(self, tp, name): - if isinstance(tp, model.EnumType): - self._generate_cpy_enum_decl(tp, name, '') - else: - self._generate_struct_or_union_decl(tp, '', name) - - def _generate_cpy_anonymous_method(self, tp, name): - if not isinstance(tp, model.EnumType): - self._generate_struct_or_union_method(tp, '', name) - - def _loading_cpy_anonymous(self, tp, name, module): - if isinstance(tp, model.EnumType): - self._loading_cpy_enum(tp, name, module) - else: - self._loading_struct_or_union(tp, '', name, module) - - def _loaded_cpy_anonymous(self, tp, name, module, **kwds): - if isinstance(tp, model.EnumType): - self._loaded_cpy_enum(tp, name, module, **kwds) - else: - self._loaded_struct_or_union(tp) - - # ---------- - # constants, likely declared with '#define' - - def _generate_cpy_const(self, is_int, name, tp=None, category='const', - vartp=None, delayed=True, size_too=False, - check_value=None): - prnt = self._prnt - funcname = '_cffi_%s_%s' % (category, name) - prnt('static int %s(PyObject *lib)' % funcname) - prnt('{') - prnt(' PyObject *o;') - prnt(' int res;') - if not is_int: - prnt(' %s;' % (vartp or tp).get_c_name(' i', name)) - else: - assert category == 'const' - # - if check_value is not None: - self._check_int_constant_value(name, check_value) - # - if not is_int: - if category == 'var': - realexpr = '&' + name - else: - realexpr = name - prnt(' i = (%s);' % (realexpr,)) - prnt(' o = %s;' % (self._convert_expr_from_c(tp, 'i', - 'variable type'),)) - assert delayed - else: - prnt(' o = _cffi_from_c_int_const(%s);' % name) - prnt(' if (o == NULL)') - prnt(' return -1;') - if size_too: - prnt(' {') - prnt(' PyObject *o1 = o;') - prnt(' o = Py_BuildValue("On", o1, (Py_ssize_t)sizeof(%s));' - % (name,)) - prnt(' Py_DECREF(o1);') - prnt(' if (o == NULL)') - prnt(' return -1;') - prnt(' }') - prnt(' res = PyObject_SetAttrString(lib, "%s", o);' % name) - prnt(' Py_DECREF(o);') - prnt(' if (res < 0)') - prnt(' return -1;') - prnt(' return %s;' % self._chained_list_constants[delayed]) - self._chained_list_constants[delayed] = funcname + '(lib)' - prnt('}') - prnt() - - def _generate_cpy_constant_collecttype(self, tp, name): - is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() - if not is_int: - self._do_collect_type(tp) - - def _generate_cpy_constant_decl(self, tp, name): - is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() - self._generate_cpy_const(is_int, name, tp) - - _generate_cpy_constant_method = _generate_nothing - _loading_cpy_constant = _loaded_noop - _loaded_cpy_constant = _loaded_noop - - # ---------- - # enums - - def _check_int_constant_value(self, name, value, err_prefix=''): - prnt = self._prnt - if value <= 0: - prnt(' if ((%s) > 0 || (long)(%s) != %dL) {' % ( - name, name, value)) - else: - prnt(' if ((%s) <= 0 || (unsigned long)(%s) != %dUL) {' % ( - name, name, value)) - prnt(' char buf[64];') - prnt(' if ((%s) <= 0)' % name) - prnt(' snprintf(buf, 63, "%%ld", (long)(%s));' % name) - prnt(' else') - prnt(' snprintf(buf, 63, "%%lu", (unsigned long)(%s));' % - name) - prnt(' PyErr_Format(_cffi_VerificationError,') - prnt(' "%s%s has the real value %s, not %s",') - prnt(' "%s", "%s", buf, "%d");' % ( - err_prefix, name, value)) - prnt(' return -1;') - prnt(' }') - - def _enum_funcname(self, prefix, name): - # "$enum_$1" => "___D_enum____D_1" - name = name.replace('$', '___D_') - return '_cffi_e_%s_%s' % (prefix, name) - - def 
_generate_cpy_enum_decl(self, tp, name, prefix='enum'): - if tp.partial: - for enumerator in tp.enumerators: - self._generate_cpy_const(True, enumerator, delayed=False) - return - # - funcname = self._enum_funcname(prefix, name) - prnt = self._prnt - prnt('static int %s(PyObject *lib)' % funcname) - prnt('{') - for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): - self._check_int_constant_value(enumerator, enumvalue, - "enum %s: " % name) - prnt(' return %s;' % self._chained_list_constants[True]) - self._chained_list_constants[True] = funcname + '(lib)' - prnt('}') - prnt() - - _generate_cpy_enum_collecttype = _generate_nothing - _generate_cpy_enum_method = _generate_nothing - - def _loading_cpy_enum(self, tp, name, module): - if tp.partial: - enumvalues = [getattr(module, enumerator) - for enumerator in tp.enumerators] - tp.enumvalues = tuple(enumvalues) - tp.partial_resolved = True - - def _loaded_cpy_enum(self, tp, name, module, library): - for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): - setattr(library, enumerator, enumvalue) - - # ---------- - # macros: for now only for integers - - def _generate_cpy_macro_decl(self, tp, name): - if tp == '...': - check_value = None - else: - check_value = tp # an integer - self._generate_cpy_const(True, name, check_value=check_value) - - _generate_cpy_macro_collecttype = _generate_nothing - _generate_cpy_macro_method = _generate_nothing - _loading_cpy_macro = _loaded_noop - _loaded_cpy_macro = _loaded_noop - - # ---------- - # global variables - - def _generate_cpy_variable_collecttype(self, tp, name): - if isinstance(tp, model.ArrayType): - tp_ptr = model.PointerType(tp.item) - else: - tp_ptr = model.PointerType(tp) - self._do_collect_type(tp_ptr) - - def _generate_cpy_variable_decl(self, tp, name): - if isinstance(tp, model.ArrayType): - tp_ptr = model.PointerType(tp.item) - self._generate_cpy_const(False, name, tp, vartp=tp_ptr, - size_too = tp.length_is_unknown()) - else: - tp_ptr = model.PointerType(tp) - self._generate_cpy_const(False, name, tp_ptr, category='var') - - _generate_cpy_variable_method = _generate_nothing - _loading_cpy_variable = _loaded_noop - - def _loaded_cpy_variable(self, tp, name, module, library): - value = getattr(library, name) - if isinstance(tp, model.ArrayType): # int a[5] is "constant" in the - # sense that "a=..." is forbidden - if tp.length_is_unknown(): - assert isinstance(value, tuple) - (value, size) = value - BItemType = self.ffi._get_cached_btype(tp.item) - length, rest = divmod(size, self.ffi.sizeof(BItemType)) - if rest != 0: - raise VerificationError( - "bad size: %r does not seem to be an array of %s" % - (name, tp.item)) - tp = tp.resolve_length(length) - # 'value' is a which we have to replace with - # a if the N is actually known - if tp.length is not None: - BArray = self.ffi._get_cached_btype(tp) - value = self.ffi.cast(BArray, value) - setattr(library, name, value) - return - # remove ptr= from the library instance, and replace - # it by a property on the class, which reads/writes into ptr[0]. 
- ptr = value - delattr(library, name) - def getter(library): - return ptr[0] - def setter(library, value): - ptr[0] = value - setattr(type(library), name, property(getter, setter)) - type(library)._cffi_dir.append(name) - - # ---------- - - def _generate_setup_custom(self): - prnt = self._prnt - prnt('static int _cffi_setup_custom(PyObject *lib)') - prnt('{') - prnt(' return %s;' % self._chained_list_constants[True]) - prnt('}') - -cffimod_header = r''' -#include -#include - -/* this block of #ifs should be kept exactly identical between - c/_cffi_backend.c, cffi/vengine_cpy.py, cffi/vengine_gen.py - and cffi/_cffi_include.h */ -#if defined(_MSC_VER) -# include /* for alloca() */ -# if _MSC_VER < 1600 /* MSVC < 2010 */ - typedef __int8 int8_t; - typedef __int16 int16_t; - typedef __int32 int32_t; - typedef __int64 int64_t; - typedef unsigned __int8 uint8_t; - typedef unsigned __int16 uint16_t; - typedef unsigned __int32 uint32_t; - typedef unsigned __int64 uint64_t; - typedef __int8 int_least8_t; - typedef __int16 int_least16_t; - typedef __int32 int_least32_t; - typedef __int64 int_least64_t; - typedef unsigned __int8 uint_least8_t; - typedef unsigned __int16 uint_least16_t; - typedef unsigned __int32 uint_least32_t; - typedef unsigned __int64 uint_least64_t; - typedef __int8 int_fast8_t; - typedef __int16 int_fast16_t; - typedef __int32 int_fast32_t; - typedef __int64 int_fast64_t; - typedef unsigned __int8 uint_fast8_t; - typedef unsigned __int16 uint_fast16_t; - typedef unsigned __int32 uint_fast32_t; - typedef unsigned __int64 uint_fast64_t; - typedef __int64 intmax_t; - typedef unsigned __int64 uintmax_t; -# else -# include -# endif -# if _MSC_VER < 1800 /* MSVC < 2013 */ -# ifndef __cplusplus - typedef unsigned char _Bool; -# endif -# endif -#else -# include -# if (defined (__SVR4) && defined (__sun)) || defined(_AIX) || defined(__hpux) -# include -# endif -#endif - -#if PY_MAJOR_VERSION < 3 -# undef PyCapsule_CheckExact -# undef PyCapsule_GetPointer -# define PyCapsule_CheckExact(capsule) (PyCObject_Check(capsule)) -# define PyCapsule_GetPointer(capsule, name) \ - (PyCObject_AsVoidPtr(capsule)) -#endif - -#if PY_MAJOR_VERSION >= 3 -# define PyInt_FromLong PyLong_FromLong -#endif - -#define _cffi_from_c_double PyFloat_FromDouble -#define _cffi_from_c_float PyFloat_FromDouble -#define _cffi_from_c_long PyInt_FromLong -#define _cffi_from_c_ulong PyLong_FromUnsignedLong -#define _cffi_from_c_longlong PyLong_FromLongLong -#define _cffi_from_c_ulonglong PyLong_FromUnsignedLongLong -#define _cffi_from_c__Bool PyBool_FromLong - -#define _cffi_to_c_double PyFloat_AsDouble -#define _cffi_to_c_float PyFloat_AsDouble - -#define _cffi_from_c_int_const(x) \ - (((x) > 0) ? \ - ((unsigned long long)(x) <= (unsigned long long)LONG_MAX) ? \ - PyInt_FromLong((long)(x)) : \ - PyLong_FromUnsignedLongLong((unsigned long long)(x)) : \ - ((long long)(x) >= (long long)LONG_MIN) ? \ - PyInt_FromLong((long)(x)) : \ - PyLong_FromLongLong((long long)(x))) - -#define _cffi_from_c_int(x, type) \ - (((type)-1) > 0 ? /* unsigned */ \ - (sizeof(type) < sizeof(long) ? \ - PyInt_FromLong((long)x) : \ - sizeof(type) == sizeof(long) ? \ - PyLong_FromUnsignedLong((unsigned long)x) : \ - PyLong_FromUnsignedLongLong((unsigned long long)x)) : \ - (sizeof(type) <= sizeof(long) ? \ - PyInt_FromLong((long)x) : \ - PyLong_FromLongLong((long long)x))) - -#define _cffi_to_c_int(o, type) \ - ((type)( \ - sizeof(type) == 1 ? (((type)-1) > 0 ? (type)_cffi_to_c_u8(o) \ - : (type)_cffi_to_c_i8(o)) : \ - sizeof(type) == 2 ? 
(((type)-1) > 0 ? (type)_cffi_to_c_u16(o) \ - : (type)_cffi_to_c_i16(o)) : \ - sizeof(type) == 4 ? (((type)-1) > 0 ? (type)_cffi_to_c_u32(o) \ - : (type)_cffi_to_c_i32(o)) : \ - sizeof(type) == 8 ? (((type)-1) > 0 ? (type)_cffi_to_c_u64(o) \ - : (type)_cffi_to_c_i64(o)) : \ - (Py_FatalError("unsupported size for type " #type), (type)0))) - -#define _cffi_to_c_i8 \ - ((int(*)(PyObject *))_cffi_exports[1]) -#define _cffi_to_c_u8 \ - ((int(*)(PyObject *))_cffi_exports[2]) -#define _cffi_to_c_i16 \ - ((int(*)(PyObject *))_cffi_exports[3]) -#define _cffi_to_c_u16 \ - ((int(*)(PyObject *))_cffi_exports[4]) -#define _cffi_to_c_i32 \ - ((int(*)(PyObject *))_cffi_exports[5]) -#define _cffi_to_c_u32 \ - ((unsigned int(*)(PyObject *))_cffi_exports[6]) -#define _cffi_to_c_i64 \ - ((long long(*)(PyObject *))_cffi_exports[7]) -#define _cffi_to_c_u64 \ - ((unsigned long long(*)(PyObject *))_cffi_exports[8]) -#define _cffi_to_c_char \ - ((int(*)(PyObject *))_cffi_exports[9]) -#define _cffi_from_c_pointer \ - ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[10]) -#define _cffi_to_c_pointer \ - ((char *(*)(PyObject *, CTypeDescrObject *))_cffi_exports[11]) -#define _cffi_get_struct_layout \ - ((PyObject *(*)(Py_ssize_t[]))_cffi_exports[12]) -#define _cffi_restore_errno \ - ((void(*)(void))_cffi_exports[13]) -#define _cffi_save_errno \ - ((void(*)(void))_cffi_exports[14]) -#define _cffi_from_c_char \ - ((PyObject *(*)(char))_cffi_exports[15]) -#define _cffi_from_c_deref \ - ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[16]) -#define _cffi_to_c \ - ((int(*)(char *, CTypeDescrObject *, PyObject *))_cffi_exports[17]) -#define _cffi_from_c_struct \ - ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[18]) -#define _cffi_to_c_wchar_t \ - ((wchar_t(*)(PyObject *))_cffi_exports[19]) -#define _cffi_from_c_wchar_t \ - ((PyObject *(*)(wchar_t))_cffi_exports[20]) -#define _cffi_to_c_long_double \ - ((long double(*)(PyObject *))_cffi_exports[21]) -#define _cffi_to_c__Bool \ - ((_Bool(*)(PyObject *))_cffi_exports[22]) -#define _cffi_prepare_pointer_call_argument \ - ((Py_ssize_t(*)(CTypeDescrObject *, PyObject *, char **))_cffi_exports[23]) -#define _cffi_convert_array_from_object \ - ((int(*)(char *, CTypeDescrObject *, PyObject *))_cffi_exports[24]) -#define _CFFI_NUM_EXPORTS 25 - -typedef struct _ctypedescr CTypeDescrObject; - -static void *_cffi_exports[_CFFI_NUM_EXPORTS]; -static PyObject *_cffi_types, *_cffi_VerificationError; - -static int _cffi_setup_custom(PyObject *lib); /* forward */ - -static PyObject *_cffi_setup(PyObject *self, PyObject *args) -{ - PyObject *library; - int was_alive = (_cffi_types != NULL); - (void)self; /* unused */ - if (!PyArg_ParseTuple(args, "OOO", &_cffi_types, &_cffi_VerificationError, - &library)) - return NULL; - Py_INCREF(_cffi_types); - Py_INCREF(_cffi_VerificationError); - if (_cffi_setup_custom(library) < 0) - return NULL; - return PyBool_FromLong(was_alive); -} - -union _cffi_union_alignment_u { - unsigned char m_char; - unsigned short m_short; - unsigned int m_int; - unsigned long m_long; - unsigned long long m_longlong; - float m_float; - double m_double; - long double m_longdouble; -}; - -struct _cffi_freeme_s { - struct _cffi_freeme_s *next; - union _cffi_union_alignment_u alignment; -}; - -#ifdef __GNUC__ - __attribute__((unused)) -#endif -static int _cffi_convert_array_argument(CTypeDescrObject *ctptr, PyObject *arg, - char **output_data, Py_ssize_t datasize, - struct _cffi_freeme_s **freeme) -{ - char *p; - if (datasize < 0) - return -1; - - 
p = *output_data; - if (p == NULL) { - struct _cffi_freeme_s *fp = (struct _cffi_freeme_s *)PyObject_Malloc( - offsetof(struct _cffi_freeme_s, alignment) + (size_t)datasize); - if (fp == NULL) - return -1; - fp->next = *freeme; - *freeme = fp; - p = *output_data = (char *)&fp->alignment; - } - memset((void *)p, 0, (size_t)datasize); - return _cffi_convert_array_from_object(p, ctptr, arg); -} - -#ifdef __GNUC__ - __attribute__((unused)) -#endif -static void _cffi_free_array_arguments(struct _cffi_freeme_s *freeme) -{ - do { - void *p = (void *)freeme; - freeme = freeme->next; - PyObject_Free(p); - } while (freeme != NULL); -} - -static int _cffi_init(void) -{ - PyObject *module, *c_api_object = NULL; - - module = PyImport_ImportModule("_cffi_backend"); - if (module == NULL) - goto failure; - - c_api_object = PyObject_GetAttrString(module, "_C_API"); - if (c_api_object == NULL) - goto failure; - if (!PyCapsule_CheckExact(c_api_object)) { - PyErr_SetNone(PyExc_ImportError); - goto failure; - } - memcpy(_cffi_exports, PyCapsule_GetPointer(c_api_object, "cffi"), - _CFFI_NUM_EXPORTS * sizeof(void *)); - - Py_DECREF(module); - Py_DECREF(c_api_object); - return 0; - - failure: - Py_XDECREF(module); - Py_XDECREF(c_api_object); - return -1; -} - -#define _cffi_type(num) ((CTypeDescrObject *)PyList_GET_ITEM(_cffi_types, num)) - -/**********/ -''' diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/sentence_ranking.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/sentence_ranking.py deleted file mode 100644 index d4c76341d4d87e6d0da21ac89e833ce0bda13a0c..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/sentence_ranking.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -@register_criterion("sentence_ranking") -class SentenceRankingCriterion(FairseqCriterion): - def __init__(self, task, ranking_head_name, save_predictions, num_classes): - super().__init__(task) - self.ranking_head_name = ranking_head_name - if save_predictions is not None: - self.prediction_h = open(save_predictions, "w") - else: - self.prediction_h = None - self.num_classes = num_classes - - def __del__(self): - if self.prediction_h is not None: - self.prediction_h.close() - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--save-predictions', metavar='FILE', - help='file to save predictions to') - parser.add_argument('--ranking-head-name', - default='sentence_classification_head', - help='name of the ranking head to use') - # fmt: on - - def forward(self, model, sample, reduce=True): - """Compute ranking loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - assert ( - hasattr(model, "classification_heads") - and self.ranking_head_name in model.classification_heads - ), "model must provide sentence ranking head for --criterion=sentence_ranking" - - scores = [] - for idx in range(self.num_classes): - score, _ = model( - **sample["net_input{idx}".format(idx=idx + 1)], - classification_head_name=self.ranking_head_name, - ) - scores.append(score) - - logits = torch.cat(scores, dim=1) - sample_size = logits.size(0) - - if "target" in sample: - targets = model.get_targets(sample, [logits]).view(-1) - lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32) - loss = F.nll_loss(lprobs, targets, reduction="sum") - else: - targets = None - loss = torch.tensor(0.0, requires_grad=True) - - if self.prediction_h is not None: - preds = logits.argmax(dim=1) - for i, (id, pred) in enumerate(zip(sample["id"].tolist(), preds.tolist())): - if targets is not None: - label = targets[i].item() - print("{}\t{}\t{}".format(id, pred, label), file=self.prediction_h) - else: - print("{}\t{}".format(id, pred), file=self.prediction_h) - - logging_output = { - "loss": loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample_size, - "sample_size": sample_size, - } - if targets is not None: - logging_output["ncorrect"] = (logits.argmax(dim=1) == targets).sum() - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - - if len(logging_outputs) > 0 and "ncorrect" in logging_outputs[0]: - ncorrect = sum(log.get("ncorrect", 0) for log in logging_outputs) - metrics.log_scalar( - "accuracy", 100.0 * ncorrect / nsentences, nsentences, round=1 - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improve distributed training speed. 
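A self-contained sketch of the objective computed in forward() above: each of num_classes candidates is scored separately, the scores are concatenated into (batch, num_classes) logits, and the NLL of the correct candidate's index is minimized (shapes and values here are made up):

import torch
import torch.nn.functional as F

scores = [torch.randn(4, 1) for _ in range(3)]   # 3 candidates, batch of 4
logits = torch.cat(scores, dim=1)                # shape (4, 3)
targets = torch.tensor([0, 2, 1, 0])             # index of the correct candidate
lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)
loss = F.nll_loss(lprobs, targets, reduction="sum")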
- """ - return True diff --git a/spaces/asd123Xiao/kafuu_chino_sovits4.0/flask_api.py b/spaces/asd123Xiao/kafuu_chino_sovits4.0/flask_api.py deleted file mode 100644 index 8cc236a1c34c9ddeddea99bcea13024fb0ccc90b..0000000000000000000000000000000000000000 --- a/spaces/asd123Xiao/kafuu_chino_sovits4.0/flask_api.py +++ /dev/null @@ -1,56 +0,0 @@ -import io -import logging - -import soundfile -import torch -import torchaudio -from flask import Flask, request, send_file -from flask_cors import CORS - -from inference.infer_tool import Svc, RealTimeVC - -app = Flask(__name__) - -CORS(app) - -logging.getLogger('numba').setLevel(logging.WARNING) - - -@app.route("/voiceChangeModel", methods=["POST"]) -def voice_change_model(): - request_form = request.form - wave_file = request.files.get("sample", None) - # 变调信息 - f_pitch_change = float(request_form.get("fPitchChange", 0)) - # DAW所需的采样率 - daw_sample = int(float(request_form.get("sampleRate", 0))) - speaker_id = int(float(request_form.get("sSpeakId", 0))) - # http获得wav文件并转换 - input_wav_path = io.BytesIO(wave_file.read()) - - # 模型推理 - if raw_infer: - out_audio, out_sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path) - tar_audio = torchaudio.functional.resample(out_audio, svc_model.target_sample, daw_sample) - else: - out_audio = svc.process(svc_model, speaker_id, f_pitch_change, input_wav_path) - tar_audio = torchaudio.functional.resample(torch.from_numpy(out_audio), svc_model.target_sample, daw_sample) - # 返回音频 - out_wav_path = io.BytesIO() - soundfile.write(out_wav_path, tar_audio.cpu().numpy(), daw_sample, format="wav") - out_wav_path.seek(0) - return send_file(out_wav_path, download_name="temp.wav", as_attachment=True) - - -if __name__ == '__main__': - # 启用则为直接切片合成,False为交叉淡化方式 - # vst插件调整0.3-0.5s切片时间可以降低延迟,直接切片方法会有连接处爆音、交叉淡化会有轻微重叠声音 - # 自行选择能接受的方法,或将vst最大切片时间调整为1s,此处设为Ture,延迟大音质稳定一些 - raw_infer = True - # 每个模型和config是唯一对应的 - model_name = "logs/32k/G_174000-Copy1.pth" - config_name = "configs/config.json" - svc_model = Svc(model_name, config_name) - svc = RealTimeVC() - # 此处与vst插件对应,不建议更改 - app.run(port=6842, host="0.0.0.0", debug=False, threaded=False) diff --git a/spaces/asd998877/TsGpt/modules/llama_func.py b/spaces/asd998877/TsGpt/modules/llama_func.py deleted file mode 100644 index e1c513af1bf6d1569b071eb5fc0ce441d0692f83..0000000000000000000000000000000000000000 --- a/spaces/asd998877/TsGpt/modules/llama_func.py +++ /dev/null @@ -1,166 +0,0 @@ -import os -import logging - -from llama_index import download_loader -from llama_index import ( - Document, - LLMPredictor, - PromptHelper, - QuestionAnswerPrompt, - RefinePrompt, -) -import colorama -import PyPDF2 -from tqdm import tqdm - -from modules.presets import * -from modules.utils import * -from modules.config import local_embedding - - -def get_index_name(file_src): - file_paths = [x.name for x in file_src] - file_paths.sort(key=lambda x: os.path.basename(x)) - - md5_hash = hashlib.md5() - for file_path in file_paths: - with open(file_path, "rb") as f: - while chunk := f.read(8192): - md5_hash.update(chunk) - - return md5_hash.hexdigest() - - -def block_split(text): - blocks = [] - while len(text) > 0: - blocks.append(Document(text[:1000])) - text = text[1000:] - return blocks - - -def get_documents(file_src): - documents = [] - logging.debug("Loading documents...") - logging.debug(f"file_src: {file_src}") - for file in file_src: - filepath = file.name - filename = os.path.basename(filepath) - file_type = os.path.splitext(filepath)[1] - logging.info(f"loading file: {filename}") 
- try: - if file_type == ".pdf": - logging.debug("Loading PDF...") - try: - from modules.pdf_func import parse_pdf - from modules.config import advance_docs - - two_column = advance_docs["pdf"].get("two_column", False) - pdftext = parse_pdf(filepath, two_column).text - except: - pdftext = "" - with open(filepath, "rb") as pdfFileObj: - pdfReader = PyPDF2.PdfReader(pdfFileObj) - for page in tqdm(pdfReader.pages): - pdftext += page.extract_text() - text_raw = pdftext - elif file_type == ".docx": - logging.debug("Loading Word...") - DocxReader = download_loader("DocxReader") - loader = DocxReader() - text_raw = loader.load_data(file=filepath)[0].text - elif file_type == ".epub": - logging.debug("Loading EPUB...") - EpubReader = download_loader("EpubReader") - loader = EpubReader() - text_raw = loader.load_data(file=filepath)[0].text - elif file_type == ".xlsx": - logging.debug("Loading Excel...") - text_list = excel_to_string(filepath) - for elem in text_list: - documents.append(Document(elem)) - continue - else: - logging.debug("Loading text file...") - with open(filepath, "r", encoding="utf-8") as f: - text_raw = f.read() - except Exception as e: - logging.error(f"Error loading file: {filename}") - pass - text = add_space(text_raw) - # text = block_split(text) - # documents += text - documents += [Document(text)] - logging.debug("Documents loaded.") - return documents - - -def construct_index( - api_key, - file_src, - max_input_size=4096, - num_outputs=5, - max_chunk_overlap=20, - chunk_size_limit=600, - embedding_limit=None, - separator=" ", -): - from langchain.chat_models import ChatOpenAI - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from llama_index import GPTSimpleVectorIndex, ServiceContext, LangchainEmbedding, OpenAIEmbedding - - if api_key: - os.environ["OPENAI_API_KEY"] = api_key - else: - # a dependency's awkward design means an API key must be present here, even a fake one - os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx" - chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit - embedding_limit = None if embedding_limit == 0 else embedding_limit - separator = " " if separator == "" else separator - - prompt_helper = PromptHelper( - max_input_size=max_input_size, - num_output=num_outputs, - max_chunk_overlap=max_chunk_overlap, - embedding_limit=embedding_limit, - chunk_size_limit=600, - separator=separator, - ) - index_name = get_index_name(file_src) - if os.path.exists(f"./index/{index_name}.json"): - logging.info("Found a cached index file, loading it...") - return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json") - else: - try: - documents = get_documents(file_src) - if local_embedding: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2")) - else: - embed_model = OpenAIEmbedding() - logging.info("Building the index...") - with retrieve_proxy(): - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, - chunk_size_limit=chunk_size_limit, - embed_model=embed_model, - ) - index = GPTSimpleVectorIndex.from_documents( - documents, service_context=service_context - ) - logging.debug("Index built!") - os.makedirs("./index", exist_ok=True) - index.save_to_disk(f"./index/{index_name}.json") - logging.debug("Index saved locally!") - return index - - except Exception as e: - logging.error("Failed to build the index!", e) - print(e) - return None - - -def add_space(text): - punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! 
", ":": ": ", ";": "; "} - for cn_punc, en_punc in punctuations.items(): - text = text.replace(cn_punc, en_punc) - return text diff --git a/spaces/ashioyajotham/falcon_7b_coder/README.md b/spaces/ashioyajotham/falcon_7b_coder/README.md deleted file mode 100644 index ce1216467a6210ccfb31a96585f7a5803f6c2918..0000000000000000000000000000000000000000 --- a/spaces/ashioyajotham/falcon_7b_coder/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Falcon 7b Coder -emoji: 🌖 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.46.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/augmentedimaginationhackathon/paperstocode/frontend/src/app/app.component.spec.ts b/spaces/augmentedimaginationhackathon/paperstocode/frontend/src/app/app.component.spec.ts deleted file mode 100644 index 74b5b3eb5fefdc61e4019c01ad0ab7d561559c83..0000000000000000000000000000000000000000 --- a/spaces/augmentedimaginationhackathon/paperstocode/frontend/src/app/app.component.spec.ts +++ /dev/null @@ -1,35 +0,0 @@ -import { TestBed } from '@angular/core/testing'; -import { RouterTestingModule } from '@angular/router/testing'; -import { AppComponent } from './app.component'; - -describe('AppComponent', () => { - beforeEach(async () => { - await TestBed.configureTestingModule({ - imports: [ - RouterTestingModule - ], - declarations: [ - AppComponent - ], - }).compileComponents(); - }); - - it('should create the app', () => { - const fixture = TestBed.createComponent(AppComponent); - const app = fixture.componentInstance; - expect(app).toBeTruthy(); - }); - - it(`should have as title 'frontend'`, () => { - const fixture = TestBed.createComponent(AppComponent); - const app = fixture.componentInstance; - expect(app.title).toEqual('frontend'); - }); - - it('should render title', () => { - const fixture = TestBed.createComponent(AppComponent); - fixture.detectChanges(); - const compiled = fixture.nativeElement as HTMLElement; - expect(compiled.querySelector('.content span')?.textContent).toContain('frontend app is running!'); - }); -}); diff --git a/spaces/awacke1/FastestText2SpeechEver/app.py b/spaces/awacke1/FastestText2SpeechEver/app.py deleted file mode 100644 index 624711103fff0eb591bc05f07ae20c47fbe03cd2..0000000000000000000000000000000000000000 --- a/spaces/awacke1/FastestText2SpeechEver/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/facebook/fastspeech2-en-ljspeech").launch() \ No newline at end of file diff --git a/spaces/awacke1/NLPStoryWriterWithMemory/README.md b/spaces/awacke1/NLPStoryWriterWithMemory/README.md deleted file mode 100644 index 1fa3863763ab0b442379418b00446be3ae558e4f..0000000000000000000000000000000000000000 --- a/spaces/awacke1/NLPStoryWriterWithMemory/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🧠📖StoryWriter💾TextGenSave -emoji: 🧠📖💾 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awsaf49/gcvit-tf/gcvit/layers/level.py b/spaces/awsaf49/gcvit-tf/gcvit/layers/level.py deleted file mode 100644 index 50abe56081345d4b76ef66fbfa25a6b36de337cf..0000000000000000000000000000000000000000 --- a/spaces/awsaf49/gcvit-tf/gcvit/layers/level.py +++ /dev/null @@ -1,85 +0,0 @@ -import tensorflow as tf - -from .feature import GlobalQueryGen, 
ReduceSize, Resizing, FitWindow -from .block import GCViTBlock - -@tf.keras.utils.register_keras_serializable(package="gcvit") -class GCViTLevel(tf.keras.layers.Layer): - def __init__(self, depth, num_heads, window_size, keep_dims, downsample=True, mlp_ratio=4., qkv_bias=True, - qk_scale=None, drop=0., attn_drop=0., path_drop=0., layer_scale=None, resize_query=False, **kwargs): - super().__init__(**kwargs) - self.depth = depth - self.num_heads = num_heads - self.window_size = window_size - self.keep_dims = keep_dims - self.downsample = downsample - self.mlp_ratio = mlp_ratio - self.qkv_bias = qkv_bias - self.qk_scale = qk_scale - self.drop = drop - self.attn_drop = attn_drop - self.path_drop = path_drop - self.layer_scale = layer_scale - self.resize_query = resize_query - - def build(self, input_shape): - path_drop = [self.path_drop] * self.depth if not isinstance(self.path_drop, list) else self.path_drop - self.blocks = [ - GCViTBlock(window_size=self.window_size, - num_heads=self.num_heads, - global_query=bool(i % 2), - mlp_ratio=self.mlp_ratio, - qkv_bias=self.qkv_bias, - qk_scale=self.qk_scale, - drop=self.drop, - attn_drop=self.attn_drop, - path_drop=path_drop[i], - layer_scale=self.layer_scale, - name=f'blocks/{i}') - for i in range(self.depth)] - self.down = ReduceSize(keep_dim=False, name='downsample') - self.q_global_gen = GlobalQueryGen(self.keep_dims, name='q_global_gen') - self.resize = Resizing(self.window_size, self.window_size, interpolation='bicubic') - self.fit_window = FitWindow(self.window_size) - super().build(input_shape) - - def call(self, inputs, **kwargs): - H, W = tf.unstack(tf.shape(inputs)[1:3], num=2) - # pad to fit window_size - x = self.fit_window(inputs) - # generate global query - q_global = self.q_global_gen(x) # (B, H, W, C) # official impl issue: https://github.com/NVlabs/GCVit/issues/13 - # resize query to fit key-value, but result in poor score with official weights? 
- if self.resize_query: - q_global = self.resize(q_global) # to avoid mismatch between feat_map and q_global: https://github.com/NVlabs/GCVit/issues/9 - # feature_map -> windows -> window_attention -> feature_map - for i, blk in enumerate(self.blocks): - if i % 2: - x = blk([x, q_global]) - else: - x = blk([x]) - x = x[:, :H, :W, :] # https://github.com/NVlabs/GCVit/issues/9 - # set shape for [B, ?, ?, C] - x.set_shape(inputs.shape) # `tf.reshape` creates new tensor with new_shape - # downsample - if self.downsample: - x = self.down(x) - return x - - def get_config(self): - config = super().get_config() - config.update({ - 'depth': self.depth, - 'num_heads': self.num_heads, - 'window_size': self.window_size, - 'keep_dims': self.keep_dims, - 'downsample': self.downsample, - 'mlp_ratio': self.mlp_ratio, - 'qkv_bias': self.qkv_bias, - 'qk_scale': self.qk_scale, - 'drop': self.drop, - 'attn_drop': self.attn_drop, - 'path_drop': self.path_drop, - 'layer_scale': self.layer_scale - }) - return config \ No newline at end of file diff --git a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/loss.py b/spaces/badayvedat/AudioSep/models/CLAP/open_clip/loss.py deleted file mode 100644 index cc66298a14997da4aa2efc71e37c0a6bcda53fd1..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/loss.py +++ /dev/null @@ -1,398 +0,0 @@ -from multiprocessing.sharedctypes import Value -import torch -import torch.distributed.nn -from torch import distributed as dist, nn as nn -from torch.nn import functional as F -import numpy as np -from sklearn.metrics import average_precision_score, roc_auc_score, accuracy_score - -try: - import horovod.torch as hvd -except ImportError: - hvd = None - - -def gather_features( - audio_features, - text_features, - audio_features_mlp=None, - text_features_mlp=None, - local_loss=False, - gather_with_grad=False, - rank=0, - world_size=1, - use_horovod=False, - mlp_loss=False, -): - if use_horovod: - assert hvd is not None, "Please install horovod" - if gather_with_grad: - all_audio_features = hvd.allgather(audio_features) - all_text_features = hvd.allgather(text_features) - if mlp_loss: - all_audio_features_mlp = hvd.allgather(audio_features_mlp) - all_text_features_mlp = hvd.allgather(text_features_mlp) - else: - with torch.no_grad(): - all_audio_features = hvd.allgather(audio_features) - all_text_features = hvd.allgather(text_features) - if mlp_loss: - all_audio_features_mlp = hvd.allgather(audio_features_mlp) - all_text_features_mlp = hvd.allgather(text_features_mlp) - if not local_loss: - # ensure grads for local rank when all_* features don't have a gradient - gathered_audio_features = list( - all_audio_features.chunk(world_size, dim=0) - ) - gathered_text_features = list( - all_text_features.chunk(world_size, dim=0) - ) - gathered_audio_features[rank] = audio_features - gathered_text_features[rank] = text_features - all_audio_features = torch.cat(gathered_audio_features, dim=0) - all_text_features = torch.cat(gathered_text_features, dim=0) - if mlp_loss: - gathered_audio_features_mlp = list( - all_audio_features_mlp.chunk(world_size, dim=0) - ) - gathered_text_features_mlp = list( - all_text_features_mlp.chunk(world_size, dim=0) - ) - gathered_audio_features_mlp[rank] = audio_features_mlp - gathered_text_features_mlp[rank] = text_features_mlp - all_audio_features_mlp = torch.cat( - gathered_audio_features_mlp, dim=0 - ) - all_text_features_mlp = torch.cat(gathered_text_features_mlp, dim=0) - else: - # We gather tensors from all gpus - if 
gather_with_grad: - all_audio_features = torch.cat( - torch.distributed.nn.all_gather(audio_features), dim=0 - ) - all_text_features = torch.cat( - torch.distributed.nn.all_gather(text_features), dim=0 - ) - if mlp_loss: - all_audio_features_mlp = torch.cat( - torch.distributed.nn.all_gather(audio_features_mlp), dim=0 - ) - all_text_features_mlp = torch.cat( - torch.distributed.nn.all_gather(text_features_mlp), dim=0 - ) - else: - gathered_audio_features = [ - torch.zeros_like(audio_features) for _ in range(world_size) - ] - gathered_text_features = [ - torch.zeros_like(text_features) for _ in range(world_size) - ] - dist.all_gather(gathered_audio_features, audio_features) - dist.all_gather(gathered_text_features, text_features) - if mlp_loss: - gathered_audio_features_mlp = [ - torch.zeros_like(audio_features_mlp) for _ in range(world_size) - ] - gathered_text_features_mlp = [ - torch.zeros_like(text_features_mlp) for _ in range(world_size) - ] - dist.all_gather(gathered_audio_features_mlp, audio_features_mlp) - dist.all_gather(gathered_text_features_mlp, text_features_mlp) - if not local_loss: - # ensure grads for local rank when all_* features don't have a gradient - gathered_audio_features[rank] = audio_features - gathered_text_features[rank] = text_features - if mlp_loss: - gathered_audio_features_mlp[rank] = audio_features_mlp - gathered_text_features_mlp[rank] = text_features_mlp - - all_audio_features = torch.cat(gathered_audio_features, dim=0) - all_text_features = torch.cat(gathered_text_features, dim=0) - if mlp_loss: - all_audio_features_mlp = torch.cat(gathered_audio_features_mlp, dim=0) - all_text_features_mlp = torch.cat(gathered_text_features_mlp, dim=0) - if mlp_loss: - return ( - all_audio_features, - all_text_features, - all_audio_features_mlp, - all_text_features_mlp, - ) - else: - return all_audio_features, all_text_features - - -class ClipLoss(nn.Module): - def __init__( - self, - local_loss=False, - gather_with_grad=False, - cache_labels=False, - rank=0, - world_size=1, - use_horovod=False, - mlp_loss=False, - weight_loss_kappa=0, - ): - super().__init__() - self.local_loss = local_loss - self.gather_with_grad = gather_with_grad - self.cache_labels = cache_labels - self.rank = rank - self.world_size = world_size - self.use_horovod = use_horovod - self.mlp_loss = mlp_loss - self.weighted_loss = bool(weight_loss_kappa != 0) - self.weight_loss_kappa = weight_loss_kappa - # cache state - self.prev_num_logits = 0 - self.labels = {} - - def forward( - self, - audio_features, - text_features, - logit_scale_a, - logit_scale_t=None, - audio_features_mlp=None, - text_features_mlp=None, - ): - device = audio_features.device - if self.mlp_loss: - if self.world_size > 1: - ( - all_audio_features, - all_text_features, - all_audio_features_mlp, - all_text_features_mlp, - ) = gather_features( - audio_features=audio_features, - text_features=text_features, - audio_features_mlp=audio_features_mlp, - text_features_mlp=text_features_mlp, - local_loss=self.local_loss, - gather_with_grad=self.gather_with_grad, - rank=self.rank, - world_size=self.world_size, - use_horovod=self.use_horovod, - mlp_loss=self.mlp_loss, - ) - if self.local_loss: - a_logits_per_audio = ( - logit_scale_a * audio_features @ all_text_features_mlp.T - ) - a_logits_per_text = ( - logit_scale_a * text_features_mlp @ all_audio_features.T - ) - t_logits_per_audio = ( - logit_scale_t * audio_features_mlp @ all_text_features.T - ) - t_logits_per_text = ( - logit_scale_t * text_features @ all_audio_features_mlp.T - ) - 
else: - a_logits_per_audio = ( - logit_scale_a * all_audio_features @ all_text_features_mlp.T - ) - a_logits_per_text = a_logits_per_audio.T - t_logits_per_audio = ( - logit_scale_t * all_audio_features_mlp @ all_text_features.T - ) - t_logits_per_text = t_logits_per_audio.T - else: - a_logits_per_audio = ( - logit_scale_a * audio_features @ text_features_mlp.T - ) - a_logits_per_text = logit_scale_a * text_features_mlp @ audio_features.T - t_logits_per_audio = ( - logit_scale_t * audio_features_mlp @ text_features.T - ) - t_logits_per_text = logit_scale_t * text_features @ audio_features_mlp.T - - # calculated ground-truth and cache if enabled - num_logits = a_logits_per_audio.shape[0] - if self.prev_num_logits != num_logits or device not in self.labels: - labels = torch.arange(num_logits, device=device, dtype=torch.long) - if self.world_size > 1 and self.local_loss: - labels = labels + num_logits * self.rank - if self.cache_labels: - self.labels[device] = labels - self.prev_num_logits = num_logits - else: - labels = self.labels[device] - - if not self.weighted_loss: - total_loss = ( - F.cross_entropy(a_logits_per_audio, labels) - + F.cross_entropy(a_logits_per_text, labels) - + F.cross_entropy(t_logits_per_audio, labels) - + F.cross_entropy(t_logits_per_text, labels) - ) / 4 - else: - audio_weight = (audio_features @ audio_features.T).detach() - audio_weight = ( - torch.exp( - torch.sum(audio_weight, axis=1) - / (self.weight_loss_kappa * len(audio_weight)) - ) - ).detach() - text_weight = (text_features @ text_features.T).detach() - text_weight = ( - torch.exp( - torch.sum(text_weight, axis=1) - / (self.weight_loss_kappa * len(text_features)) - ) - ).detach() - total_loss = ( - F.cross_entropy(a_logits_per_audio, labels, weight=audio_weight) - + F.cross_entropy(a_logits_per_text, labels, weight=audio_weight) - + F.cross_entropy(t_logits_per_audio, labels, weight=text_weight) - + F.cross_entropy(t_logits_per_text, labels, weight=text_weight) - ) / 4 - else: - if self.world_size > 1: - all_audio_features, all_text_features = gather_features( - audio_features=audio_features, - text_features=text_features, - local_loss=self.local_loss, - gather_with_grad=self.gather_with_grad, - rank=self.rank, - world_size=self.world_size, - use_horovod=self.use_horovod, - mlp_loss=self.mlp_loss, - ) - - if self.local_loss: - logits_per_audio = ( - logit_scale_a * audio_features @ all_text_features.T - ) - logits_per_text = ( - logit_scale_a * text_features @ all_audio_features.T - ) - else: - logits_per_audio = ( - logit_scale_a * all_audio_features @ all_text_features.T - ) - logits_per_text = logits_per_audio.T - else: - logits_per_audio = logit_scale_a * audio_features @ text_features.T - logits_per_text = logit_scale_a * text_features @ audio_features.T - - # calculated ground-truth and cache if enabled - num_logits = logits_per_audio.shape[0] - if self.prev_num_logits != num_logits or device not in self.labels: - labels = torch.arange(num_logits, device=device, dtype=torch.long) - if self.world_size > 1 and self.local_loss: - labels = labels + num_logits * self.rank - if self.cache_labels: - self.labels[device] = labels - self.prev_num_logits = num_logits - else: - labels = self.labels[device] - if not self.weighted_loss: - total_loss = ( - F.cross_entropy(logits_per_audio, labels) - + F.cross_entropy(logits_per_text, labels) - ) / 2 - else: - audio_weight = (all_audio_features @ all_audio_features.T).detach() - audio_weight = ( - torch.exp( - torch.sum(audio_weight, axis=1) - / 
(self.weight_loss_kappa * len(all_audio_features))
-                    )
-                ).detach()
-                text_weight = (all_text_features @ all_text_features.T).detach()
-                text_weight = (
-                    torch.exp(
-                        torch.sum(text_weight, axis=1)
-                        / (self.weight_loss_kappa * len(all_text_features))
-                    )
-                ).detach()
-                total_loss = (
-                    F.cross_entropy(logits_per_audio, labels, weight=text_weight)
-                    + F.cross_entropy(logits_per_text, labels, weight=audio_weight)
-                ) / 2
-        return total_loss
-
-
-def lp_gather_features(pred, target, world_size=1, use_horovod=False):
-    if use_horovod:
-        assert hvd is not None, "Please install horovod"
-        with torch.no_grad():
-            all_preds = hvd.allgather(pred)
-            all_targets = hvd.allgather(target)
-    else:
-        gathered_preds = [torch.zeros_like(pred) for _ in range(world_size)]
-        gathered_targets = [torch.zeros_like(target) for _ in range(world_size)]
-
-        dist.all_gather(gathered_preds, pred)
-        dist.all_gather(gathered_targets, target)
-        all_preds = torch.cat(gathered_preds, dim=0)
-        all_targets = torch.cat(gathered_targets, dim=0)
-
-    return all_preds, all_targets
-
-
-def get_map(pred, target):
-    pred = torch.sigmoid(pred).numpy()
-    target = target.numpy()
-    return np.mean(average_precision_score(target, pred, average=None))
-
-
-def get_acc(pred, target):
-    pred = torch.argmax(pred, 1).numpy()
-    target = torch.argmax(target, 1).numpy()
-    return accuracy_score(target, pred)
-
-
-def get_mauc(pred, target):
-    pred = torch.sigmoid(pred).numpy()
-    target = target.numpy()
-    return np.mean(roc_auc_score(target, pred, average=None))
-
-
-class LPMetrics(object):
-    def __init__(self, metric_names=["map", "acc", "mauc"]):
-        self.metrics = []
-        for name in metric_names:
-            self.metrics.append(self.get_metric(name))
-        self.metric_names = metric_names
-
-    def get_metric(self, name):
-        if name == "map":
-            return get_map
-        elif name == "acc":
-            return get_acc
-        elif name == "mauc":
-            return get_mauc
-        else:
-            raise ValueError(f"the metric should be at least one of [map, acc, mauc]")
-
-    def evaluate_mertics(self, pred, target):
-        metric_dict = {}
-        for i in range(len(self.metric_names)):
-            metric_dict[self.metric_names[i]] = self.metrics[i](pred, target)
-        return metric_dict
-
-
-def calc_celoss(pred, target):
-    target = torch.argmax(target, 1).long()
-    return nn.CrossEntropyLoss()(pred, target)
-
-
-class LPLoss(nn.Module):
-    def __init__(self, loss_name):
-        super().__init__()
-        if loss_name == "bce":
-            self.loss_func = nn.BCEWithLogitsLoss()
-        elif loss_name == "ce":
-            self.loss_func = calc_celoss
-        elif loss_name == "mse":
-            self.loss_func = nn.MSELoss()
-        else:
-            raise ValueError(f"the loss func should be at least one of [bce, ce, mse]")
-
-    def forward(self, pred, target):
-        loss = self.loss_func(pred, target)
-        return loss
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLCapabilities.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLCapabilities.d.ts
deleted file mode 100644
index 1833f3d6e212f19ce7812c4a2a82ba119a3d0856..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLCapabilities.d.ts
+++ /dev/null
@@ -1,29 +0,0 @@
-export interface WebGLCapabilitiesParameters {
-  precision?: any;
-  logarithmicDepthBuffer?: any;
-}
-
-export class WebGLCapabilities {
-  constructor(
-    gl: WebGLRenderingContext,
-    extensions: any,
-    parameters: WebGLCapabilitiesParameters
-  );
-
-  precision: any;
-  logarithmicDepthBuffer: any;
-  maxTextures: any;
-  maxVertexTextures: any;
-  maxTextureSize: any;
-  maxCubemapSize: any;
-  maxAttributes: any;
-  maxVertexUniforms: any;
-  maxVaryings: any;
-  maxFragmentUniforms: any;
-  vertexTextures: any;
-  floatFragmentTextures: any;
-  floatVertexTextures: any;
-
-  getMaxAnisotropy(): number;
-  getMaxPrecision(precision: string): string;
-}
diff --git a/spaces/bankholdup/stylegan_petbreeder/e4e/models/stylegan2/op/__init__.py b/spaces/bankholdup/stylegan_petbreeder/e4e/models/stylegan2/op/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/beephids/paper-llm/ai.py b/spaces/beephids/paper-llm/ai.py
deleted file mode 100644
index 2f27cb9cf2ca682025190ae0d2c871ffa0e1f248..0000000000000000000000000000000000000000
--- a/spaces/beephids/paper-llm/ai.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import warnings
-import os  # built-in python library with operating system functions
-from dotenv import load_dotenv, find_dotenv
-import json
-
-import openai  # Set of functions provided by openai to interact with their models
-import panel as pn  # panel powers the chat dashboard below; this import was missing
-from langchain.chat_models import ChatOpenAI
-from langchain.prompts.chat import (
-    ChatPromptTemplate,
-    SystemMessagePromptTemplate,
-    AIMessagePromptTemplate,
-    HumanMessagePromptTemplate,
-)
-from langchain.schema import AIMessage, HumanMessage, SystemMessage
-
-import sys_prompt  # assumed local module supplying the persona system prompts (referenced but never imported in the original)
-
-warnings.filterwarnings('ignore')
-_ = load_dotenv(find_dotenv())
-openai.api_key = os.getenv('OPENAI_API_KEY')
-
-def get_completion(prompt, model="gpt-3.5-turbo"):
-    messages = [{"role": "user", "content": prompt}]
-    response = openai.ChatCompletion.create(
-        model=model,
-        messages=messages,
-        temperature=0,  # this is the degree of randomness of the model's output
-    )
-    return response.choices[0].message["content"]
-
-def get_completion_from_messages(messages, model="gpt-3.5-turbo", temperature=0):
-    response = openai.ChatCompletion.create(
-        model=model,
-        messages=messages,
-        temperature=temperature,  # this is the degree of randomness of the model's output
-    )
-#     print(str(response.choices[0].message))
-    return response.choices[0].message["content"]
-
-def collect_messages(_):
-    prompt = inp.value_input
-    inp.value = ''
-    context.append({'role': 'user', 'content': f"{prompt}"})
-    response = get_completion_from_messages(context)
-    context.append({'role': 'assistant', 'content': f"{response}"})
-    panels.append(
-        pn.Row('User:', pn.pane.Markdown(prompt, width=600)))
-    panels.append(
-        pn.Row('Assistant:', pn.pane.Markdown(response, width=600, style={'background-color': '#F6F6F6'})))
-
-    return pn.Column(*panels)
-
-
-persona = "LLM_1"
-
-pn.extension('floatpanel')
-panels = []  # collect display
-
-context = [{'role': 'system', 'content': sys_prompt.get_prompt(persona)}]  # accumulate messages
-
-inp = pn.widgets.TextInput(value="Hi", placeholder='Enter text here…')
-button_conversation = pn.widgets.Button(name="Chat!")
-
-interactive_conversation = pn.bind(collect_messages, button_conversation)
-
-dashboard = pn.Column(
-    inp,
-    pn.Row(button_conversation),
-    pn.panel(interactive_conversation, loading_indicator=True, height=300),
-)
-
-dashboard
\ No newline at end of file
diff --git a/spaces/belinghy/character-animation-motion-vaes/static/onnx.min.js b/spaces/belinghy/character-animation-motion-vaes/static/onnx.min.js
deleted file mode 100644
index 12205649b97347b5e2434d58dc4f1ed3e593144c..0000000000000000000000000000000000000000
--- a/spaces/belinghy/character-animation-motion-vaes/static/onnx.min.js
+++ /dev/null
@@ -1,14 +0,0 @@
-!function(t,e){if("object"==typeof 
module)module.exports=e();else if("function"==typeof define&&define.amd)define([],e);else{var n=e();for(var r in n)("object"==typeof exports?exports:t)[r]=n[r]}}(window,(function(){return function(t){var e={};function n(r){if(e[r])return e[r].exports;var o=e[r]={i:r,l:!1,exports:{}};return t[r].call(o.exports,o,o.exports,n),o.l=!0,o.exports}return n.m=t,n.c=e,n.d=function(t,e,r){n.o(t,e)||Object.defineProperty(t,e,{enumerable:!0,get:r})},n.r=function(t){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(t,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(t,"__esModule",{value:!0})},n.t=function(t,e){if(1&e&&(t=n(t)),8&e)return t;if(4&e&&"object"==typeof t&&t&&t.__esModule)return t;var r=Object.create(null);if(n.r(r),Object.defineProperty(r,"default",{enumerable:!0,value:t}),2&e&&"string"!=typeof t)for(var o in t)n.d(r,o,function(e){return t[e]}.bind(null,o));return r},n.n=function(t){var e=t&&t.__esModule?function(){return t.default}:function(){return t};return n.d(e,"a",e),e},n.o=function(t,e){return Object.prototype.hasOwnProperty.call(t,e)},n.p="",n(n.s=26)}([function(t,e,n){"use strict";var r=this&&this.__read||function(t,e){var n="function"==typeof Symbol&&t[Symbol.iterator];if(!n)return t;var r,o,i=n.call(t),a=[];try{for(;(void 0===e||e-- >0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a},o=this&&this.__values||function(t){var e="function"==typeof Symbol&&Symbol.iterator,n=e&&t[e],r=0;if(n)return n.call(t);if(t&&"number"==typeof t.length)return{next:function(){return t&&r>=t.length&&(t=void 0),{value:t&&t[r++],done:!t}}};throw new TypeError(e?"Object is not iterable.":"Symbol.iterator is not defined.")},i=this&&this.__importDefault||function(t){return t&&t.__esModule?t:{default:t}};Object.defineProperty(e,"__esModule",{value:!0}),e.PoolConvUtil=e.ReduceUtil=e.SplitUtil=e.MathUtil=e.ShapeUtil=e.LongUtil=e.ProtoUtil=e.GemmUtil=e.arrayCopyHelper=e.BroadcastUtil=e.MatMulUtil=e.checkInputsShape=void 0;var a=i(n(13)),u=n(9),s=n(1);e.checkInputsShape=function(t){for(var e=[],n=1;n1&&h>1)return;s[u-f]=Math.max(p,h)}return s},t.index=function(e,n){var r=new Array(n.length);return t.fillIndex(e,n,r),r},t.fillIndex=function(t,e,n){for(var r=t.length-e.length,o=0;o=0;_--)c[_]=v%a[_],v=Math.floor(v/a[_]);g||(t.fillIndex(c,e.dims,f),h=e.get(f)),m||(t.fillIndex(c,n.dims,p),y=n.get(p)),l.set(c,r(h,y))}}return l}},t.isValidBroadcast=function(t,e){var n=t.length,r=e.length;if(n>r)return!1;for(var o=1;o<=n;o++)if(1!==t[n-o]&&t[n-o]!==e[r-o])return!1;return!0},t}();e.BroadcastUtil=c,e.arrayCopyHelper=function(t,e,n,r,o){if(r<0||r>=e.length)throw new Error("sourceIndex out of bounds");if(n<0||n>=t.length)throw new Error("targetIndex out of bounds");if(r+o>e.length)throw new Error("source indices to be copied are outside bounds");if(n+o>t.length)throw new Error("target array is too small to hold result");for(var i=0;ie.length)throw new Error("invalid dimension of "+n+" for sizeFromDimension as Tensor has "+e.length+" dimensions.");return t.getSizeFromDimensionRange(e,n,e.length)},t.sizeToDimension=function(e,n){if(n<0||n>e.length)throw new Error("invalid dimension of "+n+" for sizeToDimension as Tensor has "+e.length+" dimensions.");return t.getSizeFromDimensionRange(e,0,n)},t.getSizeFromDimensionRange=function(t,e,n){for(var r=1,o=e;o=0;--r)n[r]=n[r+1]*t[r+1];return n},t.transpose=function(t){return t.slice().reverse()},t.indicesToOffset=function(t,e,n){void 
0===n&&(n=t.length);for(var r=0,o=0;o=e)throw new Error("unsupported axis for this operation.");return t<0?t+e:t},t.normalizeAxes=function(t,e){var n=this;return t.map((function(t){return n.normalizeAxis(t,e)}))},t.incrementIndex=function(t,e,n){if(0===e.length||0===t.length)throw new Error("Index incrementing unsupported for scalar Tensor");if(void 0===n)n=e.length;else if(n<=0||n>e.length)throw new Error("Incorrect axis to increment on");for(var r=n-1;r>=0&&(t[r]++,!(t[r]=e.length)throw new Error("the dimension with value zero exceeds the dimension size of the input tensor");o[u]=e[u]}else o[u]=n[u];a*=o[u]}}var s=t.size(e);if(-1!==i){if(s%a!=0)throw new Error("the input tensor cannot be reshaped to the requested shape. Input shape: ["+e+"] Output shape: ["+n+"]");o[i]=s/a}else if(a!==s)throw new Error("reshapedDims and originalDims don't have matching sizes");return o},t.sortBasedOnPerm=function(t,e){return e?e.map((function(e){return t[e]})):t.slice().reverse()},t.padShape=function(t,e){var n=t.length;return t.map((function(t,r){return t+e[r]+e[r+n]}))},t.areEqual=function(t,e){return t.length===e.length&&t.every((function(t,n){return t===e[n]}))},t.validateDimsAndCalcSize=function(t){var e,n;if(t.length>6)throw new TypeError("Only rank 0 to 6 is supported for tensor shape.");var r=1;try{for(var i=o(t),a=i.next();!a.done;a=i.next()){var u=a.value;if(!Number.isInteger(u))throw new TypeError("Invalid shape: "+u+" is not an integer");if(u<0||u>2147483647)throw new TypeError("Invalid shape: length "+u+" is not allowed");r*=u}}catch(t){e={error:t}}finally{try{a&&!a.done&&(n=i.return)&&n.call(i)}finally{if(e)throw e.error}}return r},t.flattenShape=function(t,e){e<0&&(e+=t.length);var n=t.reduce((function(t,e){return t*e}),1),r=t.slice(e).reduce((function(t,e){return t*e}),1);return[n/r,r]},t.squeezeShape=function(e,n){var r=new Array;n=t.normalizeAxes(n,e.length);for(var o=0;o=0;if(i&&1!==e[o])throw new Error("squeeze an axis of size different than 1");(0===n.length&&e[o]>1||n.length>0&&!i)&&r.push(e[o])}return r},t.unsqueezeShape=function(e,n){var r=new Array(e.length+n.length);r.fill(0);for(var o=0;o=r.length)throw new Error("'axes' has an out of range axis");if(0!==r[i])throw new Error("'axes' has a duplicate axis");r[i]=1}var a=0;for(o=0;o=e.length)throw new Error("sourceIndex out of bounds");if(n<0||n>=t.length)throw new Error("targetIndex out of bounds");if(r+o>e.length)throw new Error("source indices to be copied are outside bounds");if(n+o>t.length)throw new Error("target array is too small to hold result");for(var i=0;i=e.length)throw new Error("sourceIndex out of bounds");if(n<0||n>=t.length)throw new Error("targetIndex out of bounds");if(r+o>e.length)throw new Error("source indices to be copied are outside bounds");if(n+o>t.length)throw new Error("target array is too small to hold result");for(var a=0;a=e.length)throw new Error("sourceIndex out of bounds");if(n<0||n>=t.length)throw new Error("targetIndex out of bounds");if(r+o>e.length)throw new Error("source indices to be copied are outside bounds");if(n+o>t.length)throw new Error("target array is too small to hold result");for(var a=0;a=e.length)throw new Error("sourceIndex out of bounds");if(n<0||n>=t.length)throw new Error("targetIndex out of bounds");if(r+o>e.length)throw new Error("source indices to be copied are outside bounds");if(n+o>t.length)throw new Error("target array is too small to hold result");for(var i=0;i=n.length)return a(e[i]);for(var 
l=n[o],c=l>=r.length?1:d.size(r.slice(l+1)),f=0;f=n.length?n.push(e[i+2]):n[i]=e[i+2];for(i=0;i=n[i]||o[i+n.length]>=n[i])throw new Error("pads should be smaller than kernel")}},t.adjustPadsBasedOnAutoPad=function(e,n,r,o,i,a){if(a){if(i.length!==2*(e.length-2))throw new Error("length of pads should be twice the length of data dimensions");if(n.length!==e.length-2)throw new Error("length of strides should be the length of data dimensions");if(o.length!==e.length-2)throw new Error("length of kernel shapes should be the length of data dimensions");for(var u=0;u0&&o[o.length-1])||6!==i[0]&&2!==i[0])){a=0;continue}if(3===i[0]&&(!o||i[1]>o[0]&&i[1]0){var i=o.data,l=new DataView(n.rawData.buffer,n.rawData.byteOffset,n.rawData.byteLength),c=function(t){switch(t){case u.onnx.TensorProto.DataType.UINT8:case u.onnx.TensorProto.DataType.INT8:case u.onnx.TensorProto.DataType.BOOL:return 1;case u.onnx.TensorProto.DataType.UINT16:case u.onnx.TensorProto.DataType.INT16:return 2;case u.onnx.TensorProto.DataType.FLOAT:case u.onnx.TensorProto.DataType.INT32:case u.onnx.TensorProto.DataType.UINT32:return 4;case u.onnx.TensorProto.DataType.INT64:case u.onnx.TensorProto.DataType.DOUBLE:case u.onnx.TensorProto.DataType.UINT64:return 8;default:throw new Error("cannot calculate sizeof() on type "+u.onnx.TensorProto.DataType[t])}}(n.dataType),h=n.rawData.byteLength/c;if(n.rawData.byteLength%c!=0)throw new Error("invalid buffer length");if(i.length!==h)throw new Error("buffer length mismatch");for(var d=0;d=this._flushBatchSize||t-this._flushTime>=this._flushIntervalInMilliseconds){for(var n=this._flushPointer;this._flushPointer0?(f.Logger.verbose("WebAssembly-Workers","User has requested "+t+" Workers."),!function(){if("undefined"!=typeof window&&void 0!==window.Worker)return!0;return!1}()?(f.Logger.error("WebAssembly-Workers","Environment does not support usage of Workers. Will not spawn workers."),l=0):(f.Logger.verbose("WebAssembly-Workers","Environment supports usage of Workers. Will spawn "+t+" Workers"),l=t)):(f.Logger.verbose("WebAssembly-Workers","User has disabled usage of Workers. Will not spawn workers."),l=0);var m=new Array(l);s=new Array(l),c=new Array(l);for(var v=function(t){var e=new Promise((function(e,r){var o=n(105).default();s[t]=o,c[t]=[],o.onerror=function(e){f.Logger.error("WebAssembly-Workers","worker-"+t+" ERR: "+e),h||r()},o.onmessage=function(n){if(!(n&&n.data&&n.data.type))throw new Error("missing message type from worker");if("init-success"===n.data.type)e();else{if("ccall"!==n.data.type)throw new Error("unknown message type from worker: "+n.data.type);var r=n.data.perfData;c[t].shift()(n.data.buffer,r)}}}));m[t]=e},b=0;b=l)throw new Error("invalid worker ID "+t+". 
should be in range [0, "+l+")");var i=[],a=e.calculateOffsets(i,r),u=new ArrayBuffer(a);e.ccallSerialize(new Uint8Array(u),i,r);var f=p.now();return s[t].postMessage({type:"ccall",func:n,buffer:u},[u]),new Promise((function(n,o){c[t].push((function(t,o){o.startTimeWorker=o.startTime,o.endTimeWorker=o.endTime,o.startTime=f,o.endTime=p.now(),e.ccallDeserialize(new Uint8Array(t),i,r),n(o)}))}))},e}(p.WasmBinding);e.WasmBinding=y},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.TopologicalSortGlslRoutines=e.GlslLibRoutineNode=e.GlslLibRoutine=e.GlslLib=e.GlslContext=e.FunctionType=void 0,function(t){t[t.ValueBased=0]="ValueBased",t[t.Positional=1]="Positional"}(e.FunctionType||(e.FunctionType={}));var r=function(t,e){this.glContext=t,this.programInfo=e};e.GlslContext=r;var o=function(t){this.context=t};e.GlslLib=o;var i=function(t,e){this.routineBody=t,this.dependencies=e};e.GlslLibRoutine=i;var a=function(){function t(t,e,n){this.name=t,this.dependencies=n||[],e&&(this.routineBody=e)}return t.prototype.addDependency=function(t){t&&this.dependencies.push(t)},t}();e.GlslLibRoutineNode=a;var u=function(){function t(){}return t.returnOrderedNodes=function(t){if(!t||0===t.length)return[];if(1===t.length)return t;var e=new Set,n=new Set,r=new Array;return this.createOrderedNodes(t,e,n,r),r},t.createOrderedNodes=function(t,e,n,r){for(var o=0;o0)for(var i=0;i0)},r.Buffer=function(){try{var t=r.inquire("buffer").Buffer;return t.prototype.utf8Write?t:null}catch(t){return null}}(),r._Buffer_from=null,r._Buffer_allocUnsafe=null,r.newBuffer=function(t){return"number"==typeof t?r.Buffer?r._Buffer_allocUnsafe(t):new r.Array(t):r.Buffer?r._Buffer_from(t):"undefined"==typeof Uint8Array?t:new Uint8Array(t)},r.Array="undefined"!=typeof Uint8Array?Uint8Array:Array,r.Long=r.global.dcodeIO&&r.global.dcodeIO.Long||r.global.Long||r.inquire("long"),r.key2Re=/^true|false|0|1$/,r.key32Re=/^-?(?:0|[1-9][0-9]*)$/,r.key64Re=/^(?:[\\x00-\\xff]{8}|-?(?:0|[1-9][0-9]*))$/,r.longToHash=function(t){return t?r.LongBits.from(t).toHash():r.LongBits.zeroHash},r.longFromHash=function(t,e){var n=r.LongBits.fromHash(t);return r.Long?r.Long.fromBits(n.lo,n.hi,e):n.toNumber(Boolean(e))},r.merge=o,r.lcFirst=function(t){return t.charAt(0).toLowerCase()+t.substring(1)},r.newError=i,r.ProtocolError=i("ProtocolError"),r.oneOfGetter=function(t){for(var e={},n=0;n-1;--n)if(1===e[t[n]]&&void 0!==this[t[n]]&&null!==this[t[n]])return t[n]}},r.oneOfSetter=function(t){return function(e){for(var n=0;n>>3){case 1:r.name=t.string();break;case 21:r.refAttrName=t.string();break;case 13:r.docString=t.string();break;case 20:r.type=t.int32();break;case 2:r.f=t.float();break;case 3:r.i=t.int64();break;case 4:r.s=t.bytes();break;case 5:r.t=u.onnx.TensorProto.decode(t,t.uint32());break;case 6:r.g=u.onnx.GraphProto.decode(t,t.uint32());break;case 7:if(r.floats&&r.floats.length||(r.floats=[]),2==(7&i))for(var a=t.uint32()+t.pos;t.pos>>0,t.i.high>>>0).toNumber())),null!=t.s&&("string"==typeof t.s?a.base64.decode(t.s,e.s=a.newBuffer(a.base64.length(t.s)),0):t.s.length&&(e.s=t.s)),null!=t.t){if("object"!=typeof t.t)throw TypeError(".onnx.AttributeProto.t: object expected");e.t=u.onnx.TensorProto.fromObject(t.t)}if(null!=t.g){if("object"!=typeof t.g)throw TypeError(".onnx.AttributeProto.g: object expected");e.g=u.onnx.GraphProto.fromObject(t.g)}if(t.floats){if(!Array.isArray(t.floats))throw TypeError(".onnx.AttributeProto.floats: array expected");e.floats=[];for(var 
n=0;n>>0,t.ints[n].high>>>0).toNumber())}if(t.strings){if(!Array.isArray(t.strings))throw TypeError(".onnx.AttributeProto.strings: array expected");e.strings=[];for(n=0;n>>0,t.i.high>>>0).toNumber():t.i),null!=t.s&&t.hasOwnProperty("s")&&(n.s=e.bytes===String?a.base64.encode(t.s,0,t.s.length):e.bytes===Array?Array.prototype.slice.call(t.s):t.s),null!=t.t&&t.hasOwnProperty("t")&&(n.t=u.onnx.TensorProto.toObject(t.t,e)),null!=t.g&&t.hasOwnProperty("g")&&(n.g=u.onnx.GraphProto.toObject(t.g,e)),t.floats&&t.floats.length){n.floats=[];for(var o=0;o>>0,t.ints[o].high>>>0).toNumber():t.ints[o]}if(t.strings&&t.strings.length){n.strings=[];for(o=0;o>>3){case 1:r.name=t.string();break;case 2:r.type=u.onnx.TypeProto.decode(t,t.uint32());break;case 3:r.docString=t.string();break;default:t.skipType(7&i)}}return r},t.decodeDelimited=function(t){return t instanceof o||(t=new o(t)),this.decode(t,t.uint32())},t.verify=function(t){if("object"!=typeof t||null===t)return"object expected";if(null!=t.name&&t.hasOwnProperty("name")&&!a.isString(t.name))return"name: string expected";if(null!=t.type&&t.hasOwnProperty("type")){var e=u.onnx.TypeProto.verify(t.type);if(e)return"type."+e}return null!=t.docString&&t.hasOwnProperty("docString")&&!a.isString(t.docString)?"docString: string expected":null},t.fromObject=function(t){if(t instanceof u.onnx.ValueInfoProto)return t;var e=new u.onnx.ValueInfoProto;if(null!=t.name&&(e.name=String(t.name)),null!=t.type){if("object"!=typeof t.type)throw TypeError(".onnx.ValueInfoProto.type: object expected");e.type=u.onnx.TypeProto.fromObject(t.type)}return null!=t.docString&&(e.docString=String(t.docString)),e},t.toObject=function(t,e){e||(e={});var n={};return e.defaults&&(n.name="",n.type=null,n.docString=""),null!=t.name&&t.hasOwnProperty("name")&&(n.name=t.name),null!=t.type&&t.hasOwnProperty("type")&&(n.type=u.onnx.TypeProto.toObject(t.type,e)),null!=t.docString&&t.hasOwnProperty("docString")&&(n.docString=t.docString),n},t.prototype.toJSON=function(){return this.constructor.toObject(this,r.util.toJSONOptions)},t}(),n.NodeProto=function(){function t(t){if(this.input=[],this.output=[],this.attribute=[],t)for(var e=Object.keys(t),n=0;n>>3){case 1:r.input&&r.input.length||(r.input=[]),r.input.push(t.string());break;case 2:r.output&&r.output.length||(r.output=[]),r.output.push(t.string());break;case 3:r.name=t.string();break;case 4:r.opType=t.string();break;case 7:r.domain=t.string();break;case 5:r.attribute&&r.attribute.length||(r.attribute=[]),r.attribute.push(u.onnx.AttributeProto.decode(t,t.uint32()));break;case 6:r.docString=t.string();break;default:t.skipType(7&i)}}return r},t.decodeDelimited=function(t){return t instanceof o||(t=new o(t)),this.decode(t,t.uint32())},t.verify=function(t){if("object"!=typeof t||null===t)return"object expected";if(null!=t.input&&t.hasOwnProperty("input")){if(!Array.isArray(t.input))return"input: array expected";for(var e=0;e>>3){case 1:r.irVersion=t.int64();break;case 8:r.opsetImport&&r.opsetImport.length||(r.opsetImport=[]),r.opsetImport.push(u.onnx.OperatorSetIdProto.decode(t,t.uint32()));break;case 2:r.producerName=t.string();break;case 3:r.producerVersion=t.string();break;case 4:r.domain=t.string();break;case 5:r.modelVersion=t.int64();break;case 6:r.docString=t.string();break;case 7:r.graph=u.onnx.GraphProto.decode(t,t.uint32());break;case 14:r.metadataProps&&r.metadataProps.length||(r.metadataProps=[]),r.metadataProps.push(u.onnx.StringStringEntryProto.decode(t,t.uint32()));break;default:t.skipType(7&i)}}return 
r},t.decodeDelimited=function(t){return t instanceof o||(t=new o(t)),this.decode(t,t.uint32())},t.verify=function(t){if("object"!=typeof t||null===t)return"object expected";if(null!=t.irVersion&&t.hasOwnProperty("irVersion")&&!(a.isInteger(t.irVersion)||t.irVersion&&a.isInteger(t.irVersion.low)&&a.isInteger(t.irVersion.high)))return"irVersion: integer|Long expected";if(null!=t.opsetImport&&t.hasOwnProperty("opsetImport")){if(!Array.isArray(t.opsetImport))return"opsetImport: array expected";for(var e=0;e>>0,t.irVersion.high>>>0).toNumber())),t.opsetImport){if(!Array.isArray(t.opsetImport))throw TypeError(".onnx.ModelProto.opsetImport: array expected");e.opsetImport=[];for(var n=0;n>>0,t.modelVersion.high>>>0).toNumber())),null!=t.docString&&(e.docString=String(t.docString)),null!=t.graph){if("object"!=typeof t.graph)throw TypeError(".onnx.ModelProto.graph: object expected");e.graph=u.onnx.GraphProto.fromObject(t.graph)}if(t.metadataProps){if(!Array.isArray(t.metadataProps))throw TypeError(".onnx.ModelProto.metadataProps: array expected");e.metadataProps=[];for(n=0;n>>0,t.irVersion.high>>>0).toNumber():t.irVersion),null!=t.producerName&&t.hasOwnProperty("producerName")&&(n.producerName=t.producerName),null!=t.producerVersion&&t.hasOwnProperty("producerVersion")&&(n.producerVersion=t.producerVersion),null!=t.domain&&t.hasOwnProperty("domain")&&(n.domain=t.domain),null!=t.modelVersion&&t.hasOwnProperty("modelVersion")&&("number"==typeof t.modelVersion?n.modelVersion=e.longs===String?String(t.modelVersion):t.modelVersion:n.modelVersion=e.longs===String?a.Long.prototype.toString.call(t.modelVersion):e.longs===Number?new a.LongBits(t.modelVersion.low>>>0,t.modelVersion.high>>>0).toNumber():t.modelVersion),null!=t.docString&&t.hasOwnProperty("docString")&&(n.docString=t.docString),null!=t.graph&&t.hasOwnProperty("graph")&&(n.graph=u.onnx.GraphProto.toObject(t.graph,e)),t.opsetImport&&t.opsetImport.length){n.opsetImport=[];for(var o=0;o>>3){case 1:r.key=t.string();break;case 2:r.value=t.string();break;default:t.skipType(7&i)}}return r},t.decodeDelimited=function(t){return t instanceof o||(t=new o(t)),this.decode(t,t.uint32())},t.verify=function(t){return"object"!=typeof t||null===t?"object expected":null!=t.key&&t.hasOwnProperty("key")&&!a.isString(t.key)?"key: string expected":null!=t.value&&t.hasOwnProperty("value")&&!a.isString(t.value)?"value: string expected":null},t.fromObject=function(t){if(t instanceof u.onnx.StringStringEntryProto)return t;var e=new u.onnx.StringStringEntryProto;return null!=t.key&&(e.key=String(t.key)),null!=t.value&&(e.value=String(t.value)),e},t.toObject=function(t,e){e||(e={});var n={};return e.defaults&&(n.key="",n.value=""),null!=t.key&&t.hasOwnProperty("key")&&(n.key=t.key),null!=t.value&&t.hasOwnProperty("value")&&(n.value=t.value),n},t.prototype.toJSON=function(){return this.constructor.toObject(this,r.util.toJSONOptions)},t}(),n.TensorAnnotation=function(){function t(t){if(this.quantParameterTensorNames=[],t)for(var e=Object.keys(t),n=0;n>>3){case 1:r.tensorName=t.string();break;case 2:r.quantParameterTensorNames&&r.quantParameterTensorNames.length||(r.quantParameterTensorNames=[]),r.quantParameterTensorNames.push(u.onnx.StringStringEntryProto.decode(t,t.uint32()));break;default:t.skipType(7&i)}}return r},t.decodeDelimited=function(t){return t instanceof o||(t=new o(t)),this.decode(t,t.uint32())},t.verify=function(t){if("object"!=typeof t||null===t)return"object 
expected";if(null!=t.tensorName&&t.hasOwnProperty("tensorName")&&!a.isString(t.tensorName))return"tensorName: string expected";if(null!=t.quantParameterTensorNames&&t.hasOwnProperty("quantParameterTensorNames")){if(!Array.isArray(t.quantParameterTensorNames))return"quantParameterTensorNames: array expected";for(var e=0;e>>3){case 1:r.node&&r.node.length||(r.node=[]),r.node.push(u.onnx.NodeProto.decode(t,t.uint32()));break;case 2:r.name=t.string();break;case 5:r.initializer&&r.initializer.length||(r.initializer=[]),r.initializer.push(u.onnx.TensorProto.decode(t,t.uint32()));break;case 10:r.docString=t.string();break;case 11:r.input&&r.input.length||(r.input=[]),r.input.push(u.onnx.ValueInfoProto.decode(t,t.uint32()));break;case 12:r.output&&r.output.length||(r.output=[]),r.output.push(u.onnx.ValueInfoProto.decode(t,t.uint32()));break;case 13:r.valueInfo&&r.valueInfo.length||(r.valueInfo=[]),r.valueInfo.push(u.onnx.ValueInfoProto.decode(t,t.uint32()));break;case 14:r.quantizationAnnotation&&r.quantizationAnnotation.length||(r.quantizationAnnotation=[]),r.quantizationAnnotation.push(u.onnx.TensorAnnotation.decode(t,t.uint32()));break;default:t.skipType(7&i)}}return r},t.decodeDelimited=function(t){return t instanceof o||(t=new o(t)),this.decode(t,t.uint32())},t.verify=function(t){if("object"!=typeof t||null===t)return"object expected";if(null!=t.node&&t.hasOwnProperty("node")){if(!Array.isArray(t.node))return"node: array expected";for(var e=0;e>>3){case 1:if(r.dims&&r.dims.length||(r.dims=[]),2==(7&i))for(var a=t.uint32()+t.pos;t.pos>>0,t.dims[n].high>>>0).toNumber())}if(null!=t.dataType&&(e.dataType=0|t.dataType),null!=t.segment){if("object"!=typeof t.segment)throw TypeError(".onnx.TensorProto.segment: object expected");e.segment=u.onnx.TensorProto.Segment.fromObject(t.segment)}if(t.floatData){if(!Array.isArray(t.floatData))throw TypeError(".onnx.TensorProto.floatData: array expected");e.floatData=[];for(n=0;n>>0,t.int64Data[n].high>>>0).toNumber())}if(null!=t.name&&(e.name=String(t.name)),null!=t.docString&&(e.docString=String(t.docString)),null!=t.rawData&&("string"==typeof t.rawData?a.base64.decode(t.rawData,e.rawData=a.newBuffer(a.base64.length(t.rawData)),0):t.rawData.length&&(e.rawData=t.rawData)),t.externalData){if(!Array.isArray(t.externalData))throw TypeError(".onnx.TensorProto.externalData: array expected");e.externalData=[];for(n=0;n>>0,t.uint64Data[n].high>>>0).toNumber(!0))}return e},t.toObject=function(t,e){e||(e={});var n={};if((e.arrays||e.defaults)&&(n.dims=[],n.floatData=[],n.int32Data=[],n.stringData=[],n.int64Data=[],n.doubleData=[],n.uint64Data=[],n.externalData=[]),e.defaults&&(n.dataType=0,n.segment=null,n.name="",e.bytes===String?n.rawData="":(n.rawData=[],e.bytes!==Array&&(n.rawData=a.newBuffer(n.rawData))),n.docString="",n.dataLocation=e.enums===String?"DEFAULT":0),t.dims&&t.dims.length){n.dims=[];for(var 
r=0;r>>0,t.dims[r].high>>>0).toNumber():t.dims[r]}if(null!=t.dataType&&t.hasOwnProperty("dataType")&&(n.dataType=t.dataType),null!=t.segment&&t.hasOwnProperty("segment")&&(n.segment=u.onnx.TensorProto.Segment.toObject(t.segment,e)),t.floatData&&t.floatData.length){n.floatData=[];for(r=0;r>>0,t.int64Data[r].high>>>0).toNumber():t.int64Data[r]}if(null!=t.name&&t.hasOwnProperty("name")&&(n.name=t.name),null!=t.rawData&&t.hasOwnProperty("rawData")&&(n.rawData=e.bytes===String?a.base64.encode(t.rawData,0,t.rawData.length):e.bytes===Array?Array.prototype.slice.call(t.rawData):t.rawData),t.doubleData&&t.doubleData.length){n.doubleData=[];for(r=0;r>>0,t.uint64Data[r].high>>>0).toNumber(!0):t.uint64Data[r]}if(null!=t.docString&&t.hasOwnProperty("docString")&&(n.docString=t.docString),t.externalData&&t.externalData.length){n.externalData=[];for(r=0;r>>3){case 1:r.begin=t.int64();break;case 2:r.end=t.int64();break;default:t.skipType(7&i)}}return r},t.decodeDelimited=function(t){return t instanceof o||(t=new o(t)),this.decode(t,t.uint32())},t.verify=function(t){return"object"!=typeof t||null===t?"object expected":null!=t.begin&&t.hasOwnProperty("begin")&&!(a.isInteger(t.begin)||t.begin&&a.isInteger(t.begin.low)&&a.isInteger(t.begin.high))?"begin: integer|Long expected":null!=t.end&&t.hasOwnProperty("end")&&!(a.isInteger(t.end)||t.end&&a.isInteger(t.end.low)&&a.isInteger(t.end.high))?"end: integer|Long expected":null},t.fromObject=function(t){if(t instanceof u.onnx.TensorProto.Segment)return t;var e=new u.onnx.TensorProto.Segment;return null!=t.begin&&(a.Long?(e.begin=a.Long.fromValue(t.begin)).unsigned=!1:"string"==typeof t.begin?e.begin=parseInt(t.begin,10):"number"==typeof t.begin?e.begin=t.begin:"object"==typeof t.begin&&(e.begin=new a.LongBits(t.begin.low>>>0,t.begin.high>>>0).toNumber())),null!=t.end&&(a.Long?(e.end=a.Long.fromValue(t.end)).unsigned=!1:"string"==typeof t.end?e.end=parseInt(t.end,10):"number"==typeof t.end?e.end=t.end:"object"==typeof t.end&&(e.end=new a.LongBits(t.end.low>>>0,t.end.high>>>0).toNumber())),e},t.toObject=function(t,e){e||(e={});var n={};if(e.defaults){if(a.Long){var r=new a.Long(0,0,!1);n.begin=e.longs===String?r.toString():e.longs===Number?r.toNumber():r}else n.begin=e.longs===String?"0":0;if(a.Long){r=new a.Long(0,0,!1);n.end=e.longs===String?r.toString():e.longs===Number?r.toNumber():r}else n.end=e.longs===String?"0":0}return null!=t.begin&&t.hasOwnProperty("begin")&&("number"==typeof t.begin?n.begin=e.longs===String?String(t.begin):t.begin:n.begin=e.longs===String?a.Long.prototype.toString.call(t.begin):e.longs===Number?new a.LongBits(t.begin.low>>>0,t.begin.high>>>0).toNumber():t.begin),null!=t.end&&t.hasOwnProperty("end")&&("number"==typeof t.end?n.end=e.longs===String?String(t.end):t.end:n.end=e.longs===String?a.Long.prototype.toString.call(t.end):e.longs===Number?new a.LongBits(t.end.low>>>0,t.end.high>>>0).toNumber():t.end),n},t.prototype.toJSON=function(){return this.constructor.toObject(this,r.util.toJSONOptions)},t}(),t.DataLocation=function(){var t={},e=Object.create(t);return e[t[0]="DEFAULT"]=0,e[t[1]="EXTERNAL"]=1,e}(),t}(),n.TensorShapeProto=function(){function t(t){if(this.dim=[],t)for(var e=Object.keys(t),n=0;n>>3){case 1:r.dim&&r.dim.length||(r.dim=[]),r.dim.push(u.onnx.TensorShapeProto.Dimension.decode(t,t.uint32()));break;default:t.skipType(7&i)}}return r},t.decodeDelimited=function(t){return t instanceof o||(t=new o(t)),this.decode(t,t.uint32())},t.verify=function(t){if("object"!=typeof t||null===t)return"object 
expected";if(null!=t.dim&&t.hasOwnProperty("dim")){if(!Array.isArray(t.dim))return"dim: array expected";for(var e=0;e>>3){case 1:r.dimValue=t.int64();break;case 2:r.dimParam=t.string();break;case 3:r.denotation=t.string();break;default:t.skipType(7&i)}}return r},t.decodeDelimited=function(t){return t instanceof o||(t=new o(t)),this.decode(t,t.uint32())},t.verify=function(t){if("object"!=typeof t||null===t)return"object expected";var e={};if(null!=t.dimValue&&t.hasOwnProperty("dimValue")&&(e.value=1,!(a.isInteger(t.dimValue)||t.dimValue&&a.isInteger(t.dimValue.low)&&a.isInteger(t.dimValue.high))))return"dimValue: integer|Long expected";if(null!=t.dimParam&&t.hasOwnProperty("dimParam")){if(1===e.value)return"value: multiple values";if(e.value=1,!a.isString(t.dimParam))return"dimParam: string expected"}return null!=t.denotation&&t.hasOwnProperty("denotation")&&!a.isString(t.denotation)?"denotation: string expected":null},t.fromObject=function(t){if(t instanceof u.onnx.TensorShapeProto.Dimension)return t;var e=new u.onnx.TensorShapeProto.Dimension;return null!=t.dimValue&&(a.Long?(e.dimValue=a.Long.fromValue(t.dimValue)).unsigned=!1:"string"==typeof t.dimValue?e.dimValue=parseInt(t.dimValue,10):"number"==typeof t.dimValue?e.dimValue=t.dimValue:"object"==typeof t.dimValue&&(e.dimValue=new a.LongBits(t.dimValue.low>>>0,t.dimValue.high>>>0).toNumber())),null!=t.dimParam&&(e.dimParam=String(t.dimParam)),null!=t.denotation&&(e.denotation=String(t.denotation)),e},t.toObject=function(t,e){e||(e={});var n={};return e.defaults&&(n.denotation=""),null!=t.dimValue&&t.hasOwnProperty("dimValue")&&("number"==typeof t.dimValue?n.dimValue=e.longs===String?String(t.dimValue):t.dimValue:n.dimValue=e.longs===String?a.Long.prototype.toString.call(t.dimValue):e.longs===Number?new a.LongBits(t.dimValue.low>>>0,t.dimValue.high>>>0).toNumber():t.dimValue,e.oneofs&&(n.value="dimValue")),null!=t.dimParam&&t.hasOwnProperty("dimParam")&&(n.dimParam=t.dimParam,e.oneofs&&(n.value="dimParam")),null!=t.denotation&&t.hasOwnProperty("denotation")&&(n.denotation=t.denotation),n},t.prototype.toJSON=function(){return this.constructor.toObject(this,r.util.toJSONOptions)},t}(),t}(),n.TypeProto=function(){function t(t){if(t)for(var e=Object.keys(t),n=0;n>>3){case 1:r.tensorType=u.onnx.TypeProto.Tensor.decode(t,t.uint32());break;case 6:r.denotation=t.string();break;default:t.skipType(7&i)}}return r},t.decodeDelimited=function(t){return t instanceof o||(t=new o(t)),this.decode(t,t.uint32())},t.verify=function(t){if("object"!=typeof t||null===t)return"object expected";if(null!=t.tensorType&&t.hasOwnProperty("tensorType")){var e=u.onnx.TypeProto.Tensor.verify(t.tensorType);if(e)return"tensorType."+e}return null!=t.denotation&&t.hasOwnProperty("denotation")&&!a.isString(t.denotation)?"denotation: string expected":null},t.fromObject=function(t){if(t instanceof u.onnx.TypeProto)return t;var e=new u.onnx.TypeProto;if(null!=t.tensorType){if("object"!=typeof t.tensorType)throw TypeError(".onnx.TypeProto.tensorType: object expected");e.tensorType=u.onnx.TypeProto.Tensor.fromObject(t.tensorType)}return null!=t.denotation&&(e.denotation=String(t.denotation)),e},t.toObject=function(t,e){e||(e={});var n={};return e.defaults&&(n.denotation=""),null!=t.tensorType&&t.hasOwnProperty("tensorType")&&(n.tensorType=u.onnx.TypeProto.Tensor.toObject(t.tensorType,e),e.oneofs&&(n.value="tensorType")),null!=t.denotation&&t.hasOwnProperty("denotation")&&(n.denotation=t.denotation),n},t.prototype.toJSON=function(){return 
this.constructor.toObject(this,r.util.toJSONOptions)},t.Tensor=function(){function t(t){if(t)for(var e=Object.keys(t),n=0;n>>3){case 1:r.elemType=t.int32();break;case 2:r.shape=u.onnx.TensorShapeProto.decode(t,t.uint32());break;default:t.skipType(7&i)}}return r},t.decodeDelimited=function(t){return t instanceof o||(t=new o(t)),this.decode(t,t.uint32())},t.verify=function(t){if("object"!=typeof t||null===t)return"object expected";if(null!=t.elemType&&t.hasOwnProperty("elemType")&&!a.isInteger(t.elemType))return"elemType: integer expected";if(null!=t.shape&&t.hasOwnProperty("shape")){var e=u.onnx.TensorShapeProto.verify(t.shape);if(e)return"shape."+e}return null},t.fromObject=function(t){if(t instanceof u.onnx.TypeProto.Tensor)return t;var e=new u.onnx.TypeProto.Tensor;if(null!=t.elemType&&(e.elemType=0|t.elemType),null!=t.shape){if("object"!=typeof t.shape)throw TypeError(".onnx.TypeProto.Tensor.shape: object expected");e.shape=u.onnx.TensorShapeProto.fromObject(t.shape)}return e},t.toObject=function(t,e){e||(e={});var n={};return e.defaults&&(n.elemType=0,n.shape=null),null!=t.elemType&&t.hasOwnProperty("elemType")&&(n.elemType=t.elemType),null!=t.shape&&t.hasOwnProperty("shape")&&(n.shape=u.onnx.TensorShapeProto.toObject(t.shape,e)),n},t.prototype.toJSON=function(){return this.constructor.toObject(this,r.util.toJSONOptions)},t}(),t}(),n.OperatorSetIdProto=function(){function t(t){if(t)for(var e=Object.keys(t),n=0;n>>3){case 1:r.domain=t.string();break;case 2:r.version=t.int64();break;default:t.skipType(7&i)}}return r},t.decodeDelimited=function(t){return t instanceof o||(t=new o(t)),this.decode(t,t.uint32())},t.verify=function(t){return"object"!=typeof t||null===t?"object expected":null!=t.domain&&t.hasOwnProperty("domain")&&!a.isString(t.domain)?"domain: string expected":null!=t.version&&t.hasOwnProperty("version")&&!(a.isInteger(t.version)||t.version&&a.isInteger(t.version.low)&&a.isInteger(t.version.high))?"version: integer|Long expected":null},t.fromObject=function(t){if(t instanceof u.onnx.OperatorSetIdProto)return t;var e=new u.onnx.OperatorSetIdProto;return null!=t.domain&&(e.domain=String(t.domain)),null!=t.version&&(a.Long?(e.version=a.Long.fromValue(t.version)).unsigned=!1:"string"==typeof t.version?e.version=parseInt(t.version,10):"number"==typeof t.version?e.version=t.version:"object"==typeof t.version&&(e.version=new a.LongBits(t.version.low>>>0,t.version.high>>>0).toNumber())),e},t.toObject=function(t,e){e||(e={});var n={};if(e.defaults)if(n.domain="",a.Long){var r=new a.Long(0,0,!1);n.version=e.longs===String?r.toString():e.longs===Number?r.toNumber():r}else n.version=e.longs===String?"0":0;return null!=t.domain&&t.hasOwnProperty("domain")&&(n.domain=t.domain),null!=t.version&&t.hasOwnProperty("version")&&("number"==typeof t.version?n.version=e.longs===String?String(t.version):t.version:n.version=e.longs===String?a.Long.prototype.toString.call(t.version):e.longs===Number?new a.LongBits(t.version.low>>>0,t.version.high>>>0).toNumber():t.version),n},t.prototype.toJSON=function(){return this.constructor.toObject(this,r.util.toJSONOptions)},t}(),n}(),t.exports=u},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new 
n)});Object.defineProperty(e,"__esModule",{value:!0}),e.reshape=e.WebGLReshape=void 0;var i=n(37),a=n(0),u=n(50),s=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){var n=a.ShapeUtil.calculateReshapedDims(e[0].dims,e[1].integerData);return[l(t,e[0],n)]},e}(i.Reshape);function l(t,e,n){var r=t.getOrCreateTextureData(e),o=n;4===r.channels&&(o=u.getPackedShape(n));var i={channels:r.channels,height:r.height,width:r.width,shape:0!==o.length?o:[1],strides:a.ShapeUtil.computeStrides(o),unpackedShape:n};return t.createSharedTextureData(i,e.type,r.texture,e.dataId).tensor}e.WebGLReshape=s,e.reshape=l},function(t,e,n){"use strict";var r=this&&this.__values||function(t){var e="function"==typeof Symbol&&Symbol.iterator,n=e&&t[e],r=0;if(n)return n.call(t);if(t&&"number"==typeof t.length)return{next:function(){return t&&r>=t.length&&(t=void 0),{value:t&&t[r++],done:!t}}};throw new TypeError(e?"Object is not iterable.":"Symbol.iterator is not defined.")};function o(t,e){if(e.endsWith("+")){var n=Number.parseInt(e.substring(0,e.length-1),10);return!isNaN(n)&&n<=t}if(2===e.split("-").length){var r=e.split("-"),o=(n=Number.parseInt(r[0],10),Number.parseInt(r[1],10));return!isNaN(n)&&!isNaN(o)&&n<=t&&t<=o}return Number.parseInt(e,10)===t}Object.defineProperty(e,"__esModule",{value:!0}),e.resolveOperator=void 0,e.resolveOperator=function(t,e,n){var i,a,u,s;try{for(var l=r(n),c=l.next();!c.done;c=l.next()){var f=c.value,p=f[0],h=f[1],d=f[2],y=f[3];if(t.opType===p)try{for(var g=(u=void 0,r(e)),m=g.next();!m.done;m=g.next()){var v=m.value;if((v.domain===h||"ai.onnx"===v.domain&&""===h)&&o(v.version,d))return y(t)}}catch(t){u={error:t}}finally{try{m&&!m.done&&(s=g.return)&&s.call(g)}finally{if(u)throw u.error}}}}catch(t){i={error:t}}finally{try{c&&!c.done&&(a=l.return)&&a.call(l)}finally{if(i)throw i.error}}throw new TypeError("cannot resolve operator '"+t.opType+"' with opsets: "+e.map((function(t){return(t.domain||"ai.onnx")+" v"+t.version})).join(", "))}},function(t,e,n){"use strict";(function(t){ -/*! - * The buffer module from node.js, for the browser. 
- * - * @author Feross Aboukhadijeh - * @license MIT - */ -var r=n(59),o=n(60),i=n(61);function a(){return s.TYPED_ARRAY_SUPPORT?2147483647:1073741823}function u(t,e){if(a()=a())throw new RangeError("Attempt to allocate Buffer larger than maximum size: 0x"+a().toString(16)+" bytes");return 0|t}function d(t,e){if(s.isBuffer(t))return t.length;if("undefined"!=typeof ArrayBuffer&&"function"==typeof ArrayBuffer.isView&&(ArrayBuffer.isView(t)||t instanceof ArrayBuffer))return t.byteLength;"string"!=typeof t&&(t=""+t);var n=t.length;if(0===n)return 0;for(var r=!1;;)switch(e){case"ascii":case"latin1":case"binary":return n;case"utf8":case"utf-8":case void 0:return U(t).length;case"ucs2":case"ucs-2":case"utf16le":case"utf-16le":return 2*n;case"hex":return n>>>1;case"base64":return G(t).length;default:if(r)return U(t).length;e=(""+e).toLowerCase(),r=!0}}function y(t,e,n){var r=!1;if((void 0===e||e<0)&&(e=0),e>this.length)return"";if((void 0===n||n>this.length)&&(n=this.length),n<=0)return"";if((n>>>=0)<=(e>>>=0))return"";for(t||(t="utf8");;)switch(t){case"hex":return E(this,e,n);case"utf8":case"utf-8":return P(this,e,n);case"ascii":return A(this,e,n);case"latin1":case"binary":return D(this,e,n);case"base64":return S(this,e,n);case"ucs2":case"ucs-2":case"utf16le":case"utf-16le":return I(this,e,n);default:if(r)throw new TypeError("Unknown encoding: "+t);t=(t+"").toLowerCase(),r=!0}}function g(t,e,n){var r=t[e];t[e]=t[n],t[n]=r}function m(t,e,n,r,o){if(0===t.length)return-1;if("string"==typeof n?(r=n,n=0):n>2147483647?n=2147483647:n<-2147483648&&(n=-2147483648),n=+n,isNaN(n)&&(n=o?0:t.length-1),n<0&&(n=t.length+n),n>=t.length){if(o)return-1;n=t.length-1}else if(n<0){if(!o)return-1;n=0}if("string"==typeof e&&(e=s.from(e,r)),s.isBuffer(e))return 0===e.length?-1:v(t,e,n,r,o);if("number"==typeof e)return e&=255,s.TYPED_ARRAY_SUPPORT&&"function"==typeof Uint8Array.prototype.indexOf?o?Uint8Array.prototype.indexOf.call(t,e,n):Uint8Array.prototype.lastIndexOf.call(t,e,n):v(t,[e],n,r,o);throw new TypeError("val must be string, number or Buffer")}function v(t,e,n,r,o){var i,a=1,u=t.length,s=e.length;if(void 0!==r&&("ucs2"===(r=String(r).toLowerCase())||"ucs-2"===r||"utf16le"===r||"utf-16le"===r)){if(t.length<2||e.length<2)return-1;a=2,u/=2,s/=2,n/=2}function l(t,e){return 1===a?t[e]:t.readUInt16BE(e*a)}if(o){var c=-1;for(i=n;iu&&(n=u-s),i=n;i>=0;i--){for(var f=!0,p=0;po&&(r=o):r=o;var i=e.length;if(i%2!=0)throw new TypeError("Invalid hex string");r>i/2&&(r=i/2);for(var a=0;a>8,o=n%256,i.push(o),i.push(r);return i}(e,t.length-n),t,n,r)}function S(t,e,n){return 0===e&&n===t.length?r.fromByteArray(t):r.fromByteArray(t.slice(e,n))}function P(t,e,n){n=Math.min(t.length,n);for(var r=[],o=e;o239?4:l>223?3:l>191?2:1;if(o+f<=n)switch(f){case 1:l<128&&(c=l);break;case 2:128==(192&(i=t[o+1]))&&(s=(31&l)<<6|63&i)>127&&(c=s);break;case 3:i=t[o+1],a=t[o+2],128==(192&i)&&128==(192&a)&&(s=(15&l)<<12|(63&i)<<6|63&a)>2047&&(s<55296||s>57343)&&(c=s);break;case 4:i=t[o+1],a=t[o+2],u=t[o+3],128==(192&i)&&128==(192&a)&&128==(192&u)&&(s=(15&l)<<18|(63&i)<<12|(63&a)<<6|63&u)>65535&&s<1114112&&(c=s)}null===c?(c=65533,f=1):c>65535&&(c-=65536,r.push(c>>>10&1023|55296),c=56320|1023&c),r.push(c),o+=f}return function(t){var e=t.length;if(e<=4096)return String.fromCharCode.apply(String,t);var n="",r=0;for(;r0&&(t=this.toString("hex",0,n).match(/.{2}/g).join(" "),this.length>n&&(t+=" ... 
")),""},s.prototype.compare=function(t,e,n,r,o){if(!s.isBuffer(t))throw new TypeError("Argument must be a Buffer");if(void 0===e&&(e=0),void 0===n&&(n=t?t.length:0),void 0===r&&(r=0),void 0===o&&(o=this.length),e<0||n>t.length||r<0||o>this.length)throw new RangeError("out of range index");if(r>=o&&e>=n)return 0;if(r>=o)return-1;if(e>=n)return 1;if(this===t)return 0;for(var i=(o>>>=0)-(r>>>=0),a=(n>>>=0)-(e>>>=0),u=Math.min(i,a),l=this.slice(r,o),c=t.slice(e,n),f=0;fo)&&(n=o),t.length>0&&(n<0||e<0)||e>this.length)throw new RangeError("Attempt to write outside buffer bounds");r||(r="utf8");for(var i=!1;;)switch(r){case"hex":return b(this,t,e,n);case"utf8":case"utf-8":return _(this,t,e,n);case"ascii":return w(this,t,e,n);case"latin1":case"binary":return x(this,t,e,n);case"base64":return T(this,t,e,n);case"ucs2":case"ucs-2":case"utf16le":case"utf-16le":return O(this,t,e,n);default:if(i)throw new TypeError("Unknown encoding: "+r);r=(""+r).toLowerCase(),i=!0}},s.prototype.toJSON=function(){return{type:"Buffer",data:Array.prototype.slice.call(this._arr||this,0)}};function A(t,e,n){var r="";n=Math.min(t.length,n);for(var o=e;or)&&(n=r);for(var o="",i=e;in)throw new RangeError("Trying to access beyond buffer length")}function M(t,e,n,r,o,i){if(!s.isBuffer(t))throw new TypeError('"buffer" argument must be a Buffer instance');if(e>o||et.length)throw new RangeError("Index out of range")}function j(t,e,n,r){e<0&&(e=65535+e+1);for(var o=0,i=Math.min(t.length-n,2);o>>8*(r?o:1-o)}function k(t,e,n,r){e<0&&(e=4294967295+e+1);for(var o=0,i=Math.min(t.length-n,4);o>>8*(r?o:3-o)&255}function C(t,e,n,r,o,i){if(n+r>t.length)throw new RangeError("Index out of range");if(n<0)throw new RangeError("Index out of range")}function R(t,e,n,r,i){return i||C(t,0,n,4),o.write(t,e,n,r,23,4),n+4}function N(t,e,n,r,i){return i||C(t,0,n,8),o.write(t,e,n,r,52,8),n+8}s.prototype.slice=function(t,e){var n,r=this.length;if((t=~~t)<0?(t+=r)<0&&(t=0):t>r&&(t=r),(e=void 0===e?r:~~e)<0?(e+=r)<0&&(e=0):e>r&&(e=r),e0&&(o*=256);)r+=this[t+--e]*o;return r},s.prototype.readUInt8=function(t,e){return e||L(t,1,this.length),this[t]},s.prototype.readUInt16LE=function(t,e){return e||L(t,2,this.length),this[t]|this[t+1]<<8},s.prototype.readUInt16BE=function(t,e){return e||L(t,2,this.length),this[t]<<8|this[t+1]},s.prototype.readUInt32LE=function(t,e){return e||L(t,4,this.length),(this[t]|this[t+1]<<8|this[t+2]<<16)+16777216*this[t+3]},s.prototype.readUInt32BE=function(t,e){return e||L(t,4,this.length),16777216*this[t]+(this[t+1]<<16|this[t+2]<<8|this[t+3])},s.prototype.readIntLE=function(t,e,n){t|=0,e|=0,n||L(t,e,this.length);for(var r=this[t],o=1,i=0;++i=(o*=128)&&(r-=Math.pow(2,8*e)),r},s.prototype.readIntBE=function(t,e,n){t|=0,e|=0,n||L(t,e,this.length);for(var r=e,o=1,i=this[t+--r];r>0&&(o*=256);)i+=this[t+--r]*o;return i>=(o*=128)&&(i-=Math.pow(2,8*e)),i},s.prototype.readInt8=function(t,e){return e||L(t,1,this.length),128&this[t]?-1*(255-this[t]+1):this[t]},s.prototype.readInt16LE=function(t,e){e||L(t,2,this.length);var n=this[t]|this[t+1]<<8;return 32768&n?4294901760|n:n},s.prototype.readInt16BE=function(t,e){e||L(t,2,this.length);var n=this[t+1]|this[t]<<8;return 32768&n?4294901760|n:n},s.prototype.readInt32LE=function(t,e){return e||L(t,4,this.length),this[t]|this[t+1]<<8|this[t+2]<<16|this[t+3]<<24},s.prototype.readInt32BE=function(t,e){return e||L(t,4,this.length),this[t]<<24|this[t+1]<<16|this[t+2]<<8|this[t+3]},s.prototype.readFloatLE=function(t,e){return 
e||L(t,4,this.length),o.read(this,t,!0,23,4)},s.prototype.readFloatBE=function(t,e){return e||L(t,4,this.length),o.read(this,t,!1,23,4)},s.prototype.readDoubleLE=function(t,e){return e||L(t,8,this.length),o.read(this,t,!0,52,8)},s.prototype.readDoubleBE=function(t,e){return e||L(t,8,this.length),o.read(this,t,!1,52,8)},s.prototype.writeUIntLE=function(t,e,n,r){(t=+t,e|=0,n|=0,r)||M(this,t,e,n,Math.pow(2,8*n)-1,0);var o=1,i=0;for(this[e]=255&t;++i=0&&(i*=256);)this[e+o]=t/i&255;return e+n},s.prototype.writeUInt8=function(t,e,n){return t=+t,e|=0,n||M(this,t,e,1,255,0),s.TYPED_ARRAY_SUPPORT||(t=Math.floor(t)),this[e]=255&t,e+1},s.prototype.writeUInt16LE=function(t,e,n){return t=+t,e|=0,n||M(this,t,e,2,65535,0),s.TYPED_ARRAY_SUPPORT?(this[e]=255&t,this[e+1]=t>>>8):j(this,t,e,!0),e+2},s.prototype.writeUInt16BE=function(t,e,n){return t=+t,e|=0,n||M(this,t,e,2,65535,0),s.TYPED_ARRAY_SUPPORT?(this[e]=t>>>8,this[e+1]=255&t):j(this,t,e,!1),e+2},s.prototype.writeUInt32LE=function(t,e,n){return t=+t,e|=0,n||M(this,t,e,4,4294967295,0),s.TYPED_ARRAY_SUPPORT?(this[e+3]=t>>>24,this[e+2]=t>>>16,this[e+1]=t>>>8,this[e]=255&t):k(this,t,e,!0),e+4},s.prototype.writeUInt32BE=function(t,e,n){return t=+t,e|=0,n||M(this,t,e,4,4294967295,0),s.TYPED_ARRAY_SUPPORT?(this[e]=t>>>24,this[e+1]=t>>>16,this[e+2]=t>>>8,this[e+3]=255&t):k(this,t,e,!1),e+4},s.prototype.writeIntLE=function(t,e,n,r){if(t=+t,e|=0,!r){var o=Math.pow(2,8*n-1);M(this,t,e,n,o-1,-o)}var i=0,a=1,u=0;for(this[e]=255&t;++i>0)-u&255;return e+n},s.prototype.writeIntBE=function(t,e,n,r){if(t=+t,e|=0,!r){var o=Math.pow(2,8*n-1);M(this,t,e,n,o-1,-o)}var i=n-1,a=1,u=0;for(this[e+i]=255&t;--i>=0&&(a*=256);)t<0&&0===u&&0!==this[e+i+1]&&(u=1),this[e+i]=(t/a>>0)-u&255;return e+n},s.prototype.writeInt8=function(t,e,n){return t=+t,e|=0,n||M(this,t,e,1,127,-128),s.TYPED_ARRAY_SUPPORT||(t=Math.floor(t)),t<0&&(t=255+t+1),this[e]=255&t,e+1},s.prototype.writeInt16LE=function(t,e,n){return t=+t,e|=0,n||M(this,t,e,2,32767,-32768),s.TYPED_ARRAY_SUPPORT?(this[e]=255&t,this[e+1]=t>>>8):j(this,t,e,!0),e+2},s.prototype.writeInt16BE=function(t,e,n){return t=+t,e|=0,n||M(this,t,e,2,32767,-32768),s.TYPED_ARRAY_SUPPORT?(this[e]=t>>>8,this[e+1]=255&t):j(this,t,e,!1),e+2},s.prototype.writeInt32LE=function(t,e,n){return t=+t,e|=0,n||M(this,t,e,4,2147483647,-2147483648),s.TYPED_ARRAY_SUPPORT?(this[e]=255&t,this[e+1]=t>>>8,this[e+2]=t>>>16,this[e+3]=t>>>24):k(this,t,e,!0),e+4},s.prototype.writeInt32BE=function(t,e,n){return t=+t,e|=0,n||M(this,t,e,4,2147483647,-2147483648),t<0&&(t=4294967295+t+1),s.TYPED_ARRAY_SUPPORT?(this[e]=t>>>24,this[e+1]=t>>>16,this[e+2]=t>>>8,this[e+3]=255&t):k(this,t,e,!1),e+4},s.prototype.writeFloatLE=function(t,e,n){return R(this,t,e,!0,n)},s.prototype.writeFloatBE=function(t,e,n){return R(this,t,e,!1,n)},s.prototype.writeDoubleLE=function(t,e,n){return N(this,t,e,!0,n)},s.prototype.writeDoubleBE=function(t,e,n){return N(this,t,e,!1,n)},s.prototype.copy=function(t,e,n,r){if(n||(n=0),r||0===r||(r=this.length),e>=t.length&&(e=t.length),e||(e=0),r>0&&r=this.length)throw new RangeError("sourceStart out of bounds");if(r<0)throw new RangeError("sourceEnd out of bounds");r>this.length&&(r=this.length),t.length-e=0;--o)t[o+e]=this[o+n];else if(i<1e3||!s.TYPED_ARRAY_SUPPORT)for(o=0;o>>=0,n=void 0===n?this.length:n>>>0,t||(t=0),"number"==typeof 
t)for(i=e;i55295&&n<57344){if(!o){if(n>56319){(e-=3)>-1&&i.push(239,191,189);continue}if(a+1===r){(e-=3)>-1&&i.push(239,191,189);continue}o=n;continue}if(n<56320){(e-=3)>-1&&i.push(239,191,189),o=n;continue}n=65536+(o-55296<<10|n-56320)}else o&&(e-=3)>-1&&i.push(239,191,189);if(o=null,n<128){if((e-=1)<0)break;i.push(n)}else if(n<2048){if((e-=2)<0)break;i.push(n>>6|192,63&n|128)}else if(n<65536){if((e-=3)<0)break;i.push(n>>12|224,n>>6&63|128,63&n|128)}else{if(!(n<1114112))throw new Error("Invalid code point");if((e-=4)<0)break;i.push(n>>18|240,n>>12&63|128,n>>6&63|128,63&n|128)}}return i}function G(t){return r.toByteArray(function(t){if((t=function(t){return t.trim?t.trim():t.replace(/^\s+|\s+$/g,"")}(t).replace(B,"")).length<2)return"";for(;t.length%4!=0;)t+="=";return t}(t))}function z(t,e,n,r){for(var o=0;o=e.length||o>=t.length);++o)e[o+n]=t[o];return o}}).call(this,n(8))},function(t,e){t.exports=r;var n=null;try{n=new WebAssembly.Instance(new WebAssembly.Module(new Uint8Array([0,97,115,109,1,0,0,0,1,13,2,96,0,1,127,96,4,127,127,127,127,1,127,3,7,6,0,1,1,1,1,1,6,6,1,127,1,65,0,11,7,50,6,3,109,117,108,0,1,5,100,105,118,95,115,0,2,5,100,105,118,95,117,0,3,5,114,101,109,95,115,0,4,5,114,101,109,95,117,0,5,8,103,101,116,95,104,105,103,104,0,0,10,191,1,6,4,0,35,0,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,126,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,127,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,128,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,129,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,130,34,4,66,32,135,167,36,0,32,4,167,11])),{}).exports}catch(t){}function r(t,e,n){this.low=0|t,this.high=0|e,this.unsigned=!!n}function o(t){return!0===(t&&t.__isLong__)}r.prototype.__isLong__,Object.defineProperty(r.prototype,"__isLong__",{value:!0}),r.isLong=o;var i={},a={};function u(t,e){var n,r,o;return e?(o=0<=(t>>>=0)&&t<256)&&(r=a[t])?r:(n=l(t,(0|t)<0?-1:0,!0),o&&(a[t]=n),n):(o=-128<=(t|=0)&&t<128)&&(r=i[t])?r:(n=l(t,t<0?-1:0,!1),o&&(i[t]=n),n)}function s(t,e){if(isNaN(t))return e?v:m;if(e){if(t<0)return v;if(t>=d)return T}else{if(t<=-y)return O;if(t+1>=y)return x}return t<0?s(-t,e).neg():l(t%h|0,t/h|0,e)}function l(t,e,n){return new r(t,e,n)}r.fromInt=u,r.fromNumber=s,r.fromBits=l;var c=Math.pow;function f(t,e,n){if(0===t.length)throw Error("empty string");if("NaN"===t||"Infinity"===t||"+Infinity"===t||"-Infinity"===t)return m;if("number"==typeof e?(n=e,e=!1):e=!!e,(n=n||10)<2||360)throw Error("interior hyphen");if(0===r)return f(t.substring(1),e,n).neg();for(var o=s(c(n,8)),i=m,a=0;a>>0:this.low},S.toNumber=function(){return this.unsigned?(this.high>>>0)*h+(this.low>>>0):this.high*h+(this.low>>>0)},S.toString=function(t){if((t=t||10)<2||36>>0).toString(t);if((i=u).isZero())return l+a;for(;l.length<6;)l="0"+l;a=""+l+a}},S.getHighBits=function(){return this.high},S.getHighBitsUnsigned=function(){return this.high>>>0},S.getLowBits=function(){return this.low},S.getLowBitsUnsigned=function(){return this.low>>>0},S.getNumBitsAbs=function(){if(this.isNegative())return this.eq(O)?64:this.neg().getNumBitsAbs();for(var t=0!=this.high?this.high:this.low,e=31;e>0&&0==(t&1<=0},S.isOdd=function(){return 1==(1&this.low)},S.isEven=function(){return 0==(1&this.low)},S.equals=function(t){return 
o(t)||(t=p(t)),(this.unsigned===t.unsigned||this.high>>>31!=1||t.high>>>31!=1)&&(this.high===t.high&&this.low===t.low)},S.eq=S.equals,S.notEquals=function(t){return!this.eq(t)},S.neq=S.notEquals,S.ne=S.notEquals,S.lessThan=function(t){return this.comp(t)<0},S.lt=S.lessThan,S.lessThanOrEqual=function(t){return this.comp(t)<=0},S.lte=S.lessThanOrEqual,S.le=S.lessThanOrEqual,S.greaterThan=function(t){return this.comp(t)>0},S.gt=S.greaterThan,S.greaterThanOrEqual=function(t){return this.comp(t)>=0},S.gte=S.greaterThanOrEqual,S.ge=S.greaterThanOrEqual,S.compare=function(t){if(o(t)||(t=p(t)),this.eq(t))return 0;var e=this.isNegative(),n=t.isNegative();return e&&!n?-1:!e&&n?1:this.unsigned?t.high>>>0>this.high>>>0||t.high===this.high&&t.low>>>0>this.low>>>0?-1:1:this.sub(t).isNegative()?-1:1},S.comp=S.compare,S.negate=function(){return!this.unsigned&&this.eq(O)?O:this.not().add(b)},S.neg=S.negate,S.add=function(t){o(t)||(t=p(t));var e=this.high>>>16,n=65535&this.high,r=this.low>>>16,i=65535&this.low,a=t.high>>>16,u=65535&t.high,s=t.low>>>16,c=0,f=0,h=0,d=0;return h+=(d+=i+(65535&t.low))>>>16,f+=(h+=r+s)>>>16,c+=(f+=n+u)>>>16,c+=e+a,l((h&=65535)<<16|(d&=65535),(c&=65535)<<16|(f&=65535),this.unsigned)},S.subtract=function(t){return o(t)||(t=p(t)),this.add(t.neg())},S.sub=S.subtract,S.multiply=function(t){if(this.isZero())return m;if(o(t)||(t=p(t)),n)return l(n.mul(this.low,this.high,t.low,t.high),n.get_high(),this.unsigned);if(t.isZero())return m;if(this.eq(O))return t.isOdd()?O:m;if(t.eq(O))return this.isOdd()?O:m;if(this.isNegative())return t.isNegative()?this.neg().mul(t.neg()):this.neg().mul(t).neg();if(t.isNegative())return this.mul(t.neg()).neg();if(this.lt(g)&&t.lt(g))return s(this.toNumber()*t.toNumber(),this.unsigned);var e=this.high>>>16,r=65535&this.high,i=this.low>>>16,a=65535&this.low,u=t.high>>>16,c=65535&t.high,f=t.low>>>16,h=65535&t.low,d=0,y=0,v=0,b=0;return v+=(b+=a*h)>>>16,y+=(v+=i*h)>>>16,v&=65535,y+=(v+=a*f)>>>16,d+=(y+=r*h)>>>16,y&=65535,d+=(y+=i*f)>>>16,y&=65535,d+=(y+=a*c)>>>16,d+=e*h+r*f+i*c+a*u,l((v&=65535)<<16|(b&=65535),(d&=65535)<<16|(y&=65535),this.unsigned)},S.mul=S.multiply,S.divide=function(t){if(o(t)||(t=p(t)),t.isZero())throw Error("division by zero");var e,r,i;if(n)return this.unsigned||-2147483648!==this.high||-1!==t.low||-1!==t.high?l((this.unsigned?n.div_u:n.div_s)(this.low,this.high,t.low,t.high),n.get_high(),this.unsigned):this;if(this.isZero())return this.unsigned?v:m;if(this.unsigned){if(t.unsigned||(t=t.toUnsigned()),t.gt(this))return v;if(t.gt(this.shru(1)))return _;i=v}else{if(this.eq(O))return t.eq(b)||t.eq(w)?O:t.eq(O)?b:(e=this.shr(1).div(t).shl(1)).eq(m)?t.isNegative()?b:w:(r=this.sub(t.mul(e)),i=e.add(r.div(t)));if(t.eq(O))return this.unsigned?v:m;if(this.isNegative())return t.isNegative()?this.neg().div(t.neg()):this.neg().div(t).neg();if(t.isNegative())return this.div(t.neg()).neg();i=m}for(r=this;r.gte(t);){e=Math.max(1,Math.floor(r.toNumber()/t.toNumber()));for(var a=Math.ceil(Math.log(e)/Math.LN2),u=a<=48?1:c(2,a-48),f=s(e),h=f.mul(t);h.isNegative()||h.gt(r);)h=(f=s(e-=u,this.unsigned)).mul(t);f.isZero()&&(f=b),i=i.add(f),r=r.sub(h)}return i},S.div=S.divide,S.modulo=function(t){return o(t)||(t=p(t)),n?l((this.unsigned?n.rem_u:n.rem_s)(this.low,this.high,t.low,t.high),n.get_high(),this.unsigned):this.sub(this.div(t).mul(t))},S.mod=S.modulo,S.rem=S.modulo,S.not=function(){return l(~this.low,~this.high,this.unsigned)},S.and=function(t){return o(t)||(t=p(t)),l(this.low&t.low,this.high&t.high,this.unsigned)},S.or=function(t){return 
o(t)||(t=p(t)),l(this.low|t.low,this.high|t.high,this.unsigned)},S.xor=function(t){return o(t)||(t=p(t)),l(this.low^t.low,this.high^t.high,this.unsigned)},S.shiftLeft=function(t){return o(t)&&(t=t.toInt()),0==(t&=63)?this:t<32?l(this.low<>>32-t,this.unsigned):l(0,this.low<>>t|this.high<<32-t,this.high>>t,this.unsigned):l(this.high>>t-32,this.high>=0?0:-1,this.unsigned)},S.shr=S.shiftRight,S.shiftRightUnsigned=function(t){if(o(t)&&(t=t.toInt()),0===(t&=63))return this;var e=this.high;return t<32?l(this.low>>>t|e<<32-t,e>>>t,this.unsigned):l(32===t?e:e>>>t-32,0,this.unsigned)},S.shru=S.shiftRightUnsigned,S.shr_u=S.shiftRightUnsigned,S.toSigned=function(){return this.unsigned?l(this.low,this.high,!1):this},S.toUnsigned=function(){return this.unsigned?this:l(this.low,this.high,!0)},S.toBytes=function(t){return t?this.toBytesLE():this.toBytesBE()},S.toBytesLE=function(){var t=this.high,e=this.low;return[255&e,e>>>8&255,e>>>16&255,e>>>24,255&t,t>>>8&255,t>>>16&255,t>>>24]},S.toBytesBE=function(){var t=this.high,e=this.low;return[t>>>24,t>>>16&255,t>>>8&255,255&t,e>>>24,e>>>16&255,e>>>8&255,255&e]},r.fromBytes=function(t,e,n){return n?r.fromBytesLE(t,e):r.fromBytesBE(t,e)},r.fromBytesLE=function(t,e){return new r(t[0]|t[1]<<8|t[2]<<16|t[3]<<24,t[4]|t[5]<<8|t[6]<<16|t[7]<<24,e)},r.fromBytesBE=function(t,e){return new r(t[4]<<24|t[5]<<16|t[6]<<8|t[7],t[0]<<24|t[1]<<16|t[2]<<8|t[3],e)}},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.BatchNormalization=void 0;var r=function(){function t(){}return t.prototype.initialize=function(t){this.epsilon=t.getFloat("epsilon",1e-5),this.momentum=t.getFloat("momentum",.9),this.spatial=t.getInt("spatial",1)},t.prototype.checkInputs=function(t){return!(!t||5!==t.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){var e=t[0],n=t[1],r=t[2],o=t[3],i=t[4];return!(e.dims.length<3||1!==n.dims.length||1!==r.dims.length||1!==o.dims.length||1!==i.dims.length)&&(n.dims[0]===e.dims[1]&&r.dims[0]===e.dims[1]&&o.dims[0]===e.dims[1]&&i.dims[0]===e.dims[1]&&!("float32"!==e.type&&"float64"!==e.type||"float32"!==n.type&&"float64"!==n.type||"float32"!==r.type&&"float64"!==r.type||"float32"!==o.type&&"float64"!==o.type||"float32"!==i.type&&"float64"!==i.type))},t}();e.BatchNormalization=r},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.BinaryOp=void 0;var r=function(){function t(t,e,n){this.typeConstraint=t,this.opType=e,this.resultType=n}return t.prototype.initialize=function(t){},t.prototype.checkInputs=function(t){return!(!t||2!==t.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){return-1!==this.typeConstraint.indexOf(t[0].type)&&t[0].type===t[1].type},t}();e.BinaryOp=r},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.Conv=void 0;var r=function(){function t(){}return t.prototype.initialize=function(t){this.autoPad=t.getString("auto_pad","NOTSET"),this.dilations=t.getInts("dilations",[1,1]),this.group=t.getInt("group",1),this.kernelShape=t.getInts("kernel_shape",[]),this.pads=t.getInts("pads",[0,0,0,0]),this.strides=t.getInts("strides",[1,1])},t.prototype.checkInputs=function(t){if(!t||2!==t.length&&3!==t.length)return!1;if(4!==t[0].dims.length||4!==t[1].dims.length)return!1;if(t[0].dims[1]!==t[1].dims[1]*this.group)return!1;if(3===t.length&&(1!==t[2].dims.length||t[1].dims[0]!==t[2].dims[0]))return!1;var e=t[0].dims.length-2;return 
this.dilations.length===e&&(this.strides.length===e&&(this.pads.length===2*e&&((0===this.kernelShape.length||this.kernelShape.length===t[1].dims.length-2)&&this.checkInputTypes(t))))},t.prototype.checkInputTypes=function(t){return"float32"===t[0].type&&"float32"===t[1].type&&(3!==t.length||"float32"===t[2].type)},t}();e.Conv=r},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)}),i=this&&this.__read||function(t,e){var n="function"==typeof Symbol&&t[Symbol.iterator];if(!n)return t;var r,o,i=n.call(t),a=[];try{for(;(void 0===e||e-- >0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a};Object.defineProperty(e,"__esModule",{value:!0}),e.matMul2d=e.matMul=e.CpuMatMul=void 0;var a=n(18),u=n(1),s=n(0),l=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return[c(e[0],e[1])]},e}(a.MatMul);function c(t,e){var n=i(s.MatMulUtil.preprocessInputShapes(t.dims,e.dims),2),r=n[0],o=n[1],a=[r[r.length-2],o[o.length-1]],l=s.BroadcastUtil.calcShape(r,o,!0);if(!l)throw new Error("input dimensions do not match the requirement");for(var c=s.ShapeUtil.size(l)/(a[0]*a[1]),p=new u.Tensor(l,"float64"===t.type||"float64"===e.type?"float64":"float32"),h=0,d=new Array(l.length),y=new Array(t.dims.length),g=new Array(e.dims.length),m=0;m=0;b--)d[b]=v%l[b],v=Math.floor(v/l[b]);s.BroadcastUtil.fillIndex(d,t.dims,y),s.BroadcastUtil.fillIndex(d,e.dims,g);var _=y.length<=2?0:s.ShapeUtil.indicesToOffset(y,t.strides,l.length-2),w=g.length<=2?0:s.ShapeUtil.indicesToOffset(g,e.strides,l.length-2);f(t.floatData.subarray(_),e.floatData.subarray(w),p.floatData.subarray(h),!1,!1,1,0,a[0],a[1],r[r.length-1]),h+=a[0]*a[1]}return p}function f(t,e,n,r,o,i,a,u,s,l){return r&&o?function(t,e,n,r,o,i,a,u){for(var s=0,l=0,c=0,f=0;f3))&&(!(!this.isOptionalC&&3!==t.length)&&((3!==t.length||1===t[2].dims.length||2===t[2].dims.length)&&this.checkInputTypes(t))))},t.prototype.checkInputTypes=function(t){return!("float32"!==t[0].type&&"float64"!==t[0].type||"float32"!==t[1].type&&"float64"!==t[1].type||3===t.length&&"float32"!==t[2].type&&"float64"!==t[2].type)&&(t[0].type===t[1].type&&(3!==t.length||t[0].type===t[2].type))},t}();e.Gemm=r},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.InstanceNormalization=void 0;var r=function(){function t(){}return t.prototype.initialize=function(t){this.epsilon=t.getFloat("epsilon",1e-5)},t.prototype.checkInputs=function(t){return!(!t||3!==t.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){var e=t[0],n=t[1],r=t[2];return!(e.dims.length<3||1!==n.dims.length||1!==r.dims.length)&&(n.dims[0]===e.dims[1]&&r.dims[0]===e.dims[1]&&!("float32"!==e.type&&"float64"!==e.type||"float32"!==n.type&&"float64"!==n.type||"float32"!==r.type&&"float64"!==r.type))},t}();e.InstanceNormalization=r},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function 
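/* The next module appears to define the pooling operator schemas (AveragePool,
   GlobalAveragePool, MaxPool, GlobalMaxPool); nonzero ceil_mode and column-major
   storage_order are rejected as not yet supported. */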
n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.GlobalMaxPool=e.MaxPool=e.GlobalAveragePool=e.AveragePool=void 0;var i=function(){function t(){}return t.prototype.checkInputs=function(t){return!(!t||1!==t.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){return"float32"===t[0].type||"float64"===t[0].type},t}(),a=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.initialize=function(t){if(this.autoPad=t.getString("auto_pad","NOTSET"),this.kernelShape=t.getInts("kernel_shape"),this.strides=t.getInts("strides",[]),this.pads=t.getInts("pads",[]),this.countIncludePad=0!==t.getInt("count_include_pad",0),this.ceilMode=t.getInt("ceil_mode",0),0!==this.ceilMode)throw new Error("using ceil() in shape computation is not yet supported for AveragePool")},e}(i);e.AveragePool=a;var u=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.initialize=function(t){this.countIncludePad=0!==t.getInt("count_include_pad",0)},e}(i);e.GlobalAveragePool=u;var s=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.initialize=function(t){if(this.autoPad=t.getString("auto_pad","NOTSET"),this.kernelShape=t.getInts("kernel_shape"),this.strides=t.getInts("strides",[]),this.pads=t.getInts("pads",[]),this.ceilMode=t.getInt("ceil_mode",0),this.storageOrder=t.getInt("storage_order",0),0!==this.storageOrder)throw new Error("column major storage order is not yet supported for MaxPool");if(0!==this.ceilMode)throw new Error("using ceil() in shape computation is not yet supported for MaxPool")},e}(i);e.MaxPool=s;var l=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.initialize=function(t){},e}(i);e.GlobalMaxPool=l},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.Softmax=void 0;var r=function(){function t(){}return t.prototype.initialize=function(t){this.axis=t.getInt("axis",1)},t.prototype.checkInputs=function(t){return!(!t||1!==t.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){return"float32"===t[0].type||"float64"===t[0].type},t}();e.Softmax=r},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.Sum=void 0;var r=function(){function t(){}return t.prototype.initialize=function(t){},t.prototype.checkInputs=function(t){if(!t||0===t.length)return!1;for(var e=t[0].dims.length,n=1;n1)for(var n=1;n=0?t:t*e}))}],["Pad","","2-10",function(){return new _.CpuPad}],["Reciprocal","","6+",function(){return new L.CpuUnaryOp(a.FLOAT_TYPES,I.reciprocal)}],["ReduceLogSum","","1+",function(){return new x.CpuReduceLogSum}],["ReduceMax","","1+",function(){return new x.CpuReduceMax}],["ReduceMean","","1+",function(){return new x.CpuReduceMean}],["ReduceMin","","1+",function(){return new x.CpuReduceMin}],["ReduceProd","","1+",function(){return new x.CpuReduceProd}],["ReduceSum","","1+",function(){return new x.CpuReduceSum}],["ReduceSumSquare","","1+",function(){return new x.CpuReduceSumSquare}],["Relu","","6+",function(){return new L.CpuUnaryOp(a.FLOAT_TYPES,I.relu)}],["Reshape","","5+",function(){return new T.CpuReshape}],["Sigmoid","","6+",function(){return new L.CpuUnaryOp(a.FLOAT_TYPES,I.sigmoid)}],["Sign","","9+",function(){return new L.CpuUnaryOp(a.NUMBER_TYPES,I.sign)}],["Sin","","7+",function(){return new 
L.CpuUnaryOp(a.FLOAT_TYPES,I.sin)}],["Sinh","","9+",function(){return new L.CpuUnaryOp(a.FLOAT_TYPES,I.sinh)}],["Slice","","10+",function(){return new O.CpuSliceV10}],["Slice","","1-9",function(){return new O.CpuSlice}],["Softmax","","1+",function(){return new S.CpuSoftmax}],["Sqrt","","6+",function(){return new L.CpuUnaryOp(a.FLOAT_TYPES,I.sqrt)}],["Squeeze","","1+",function(){return new P.CpuSqueeze}],["Sub","","7+",function(){return new l.CpuBinaryOp(a.NUMBER_TYPES,(function(t,e){return t-e}))}],["Sum","","6+",function(){return new A.CpuSum}],["Tan","","7+",function(){return new L.CpuUnaryOp(a.FLOAT_TYPES,I.tan)}],["Tanh","","6+",function(){return new L.CpuUnaryOp(a.FLOAT_TYPES,I.tanh)}],["Tile","","6+",function(){return new D.CpuTile}],["Transpose","","1+",function(){return new E.CpuTranspose}],["Unsqueeze","","1+",function(){return new M.CpuUnsqueeze}],["Upsample","","7-8",function(){return new j.CpuUpsample}],["Xor","","7+",function(){return new l.CpuBinaryOp(["bool"],(function(t,e){return t^e}))}]]},function(t,e,n){"use strict";t.exports=f;var r,o=n(6),i=o.LongBits,a=o.base64,u=o.utf8;function s(t,e,n){this.fn=t,this.len=e,this.next=void 0,this.val=n}function l(){}function c(t){this.head=t.head,this.tail=t.tail,this.len=t.len,this.next=t.states}function f(){this.len=0,this.head=new s(l,0,0),this.tail=this.head,this.states=null}var p=function(){return o.Buffer?function(){return(f.create=function(){return new r})()}:function(){return new f}};function h(t,e,n){e[n]=255&t}function d(t,e){this.len=t,this.next=void 0,this.val=e}function y(t,e,n){for(;t.hi;)e[n++]=127&t.lo|128,t.lo=(t.lo>>>7|t.hi<<25)>>>0,t.hi>>>=7;for(;t.lo>127;)e[n++]=127&t.lo|128,t.lo=t.lo>>>7;e[n++]=t.lo}function g(t,e,n){e[n]=255&t,e[n+1]=t>>>8&255,e[n+2]=t>>>16&255,e[n+3]=t>>>24}f.create=p(),f.alloc=function(t){return new o.Array(t)},o.Array!==Array&&(f.alloc=o.pool(f.alloc,o.Array.prototype.subarray)),f.prototype._push=function(t,e,n){return this.tail=this.tail.next=new s(t,e,n),this.len+=e,this},d.prototype=Object.create(s.prototype),d.prototype.fn=function(t,e,n){for(;t>127;)e[n++]=127&t|128,t>>>=7;e[n]=t},f.prototype.uint32=function(t){return this.len+=(this.tail=this.tail.next=new d((t>>>=0)<128?1:t<16384?2:t<2097152?3:t<268435456?4:5,t)).len,this},f.prototype.int32=function(t){return t<0?this._push(y,10,i.fromNumber(t)):this.uint32(t)},f.prototype.sint32=function(t){return this.uint32((t<<1^t>>31)>>>0)},f.prototype.uint64=function(t){var e=i.from(t);return this._push(y,e.length(),e)},f.prototype.int64=f.prototype.uint64,f.prototype.sint64=function(t){var e=i.from(t).zzEncode();return this._push(y,e.length(),e)},f.prototype.bool=function(t){return this._push(h,1,t?1:0)},f.prototype.fixed32=function(t){return this._push(g,4,t>>>0)},f.prototype.sfixed32=f.prototype.fixed32,f.prototype.fixed64=function(t){var e=i.from(t);return this._push(g,4,e.lo)._push(g,4,e.hi)},f.prototype.sfixed64=f.prototype.fixed64,f.prototype.float=function(t){return this._push(o.float.writeFloatLE,4,t)},f.prototype.double=function(t){return this._push(o.float.writeDoubleLE,8,t)};var m=o.Array.prototype.set?function(t,e,n){e.set(t,n)}:function(t,e,n){for(var r=0;r>>0;if(!e)return this._push(h,1,0);if(o.isString(t)){var n=f.alloc(e=a.length(t));a.decode(t,n,0),t=n}return this.uint32(e)._push(m,e,t)},f.prototype.string=function(t){var e=u.length(t);return e?this.uint32(e)._push(u.write,e,t):this._push(h,1,0)},f.prototype.fork=function(){return this.states=new c(this),this.head=this.tail=new 
s(l,0,0),this.len=0,this},f.prototype.reset=function(){return this.states?(this.head=this.states.head,this.tail=this.states.tail,this.len=this.states.len,this.states=this.states.next):(this.head=this.tail=new s(l,0,0),this.len=0),this},f.prototype.ldelim=function(){var t=this.head,e=this.tail,n=this.len;return this.reset().uint32(n),n&&(this.tail.next=t.next,this.tail=e,this.len+=n),this},f.prototype.finish=function(){for(var t=this.head.next,e=this.constructor.alloc(this.len),n=0;t;)t.fn(t.val,e,n),n+=t.len,t=t.next;return e},f._configure=function(t){r=t,f.create=p(),r._configure()}},function(t,e,n){"use strict";t.exports=s;var r,o=n(6),i=o.LongBits,a=o.utf8;function u(t,e){return RangeError("index out of range: "+t.pos+" + "+(e||1)+" > "+t.len)}function s(t){this.buf=t,this.pos=0,this.len=t.length}var l,c="undefined"!=typeof Uint8Array?function(t){if(t instanceof Uint8Array||Array.isArray(t))return new s(t);throw Error("illegal buffer")}:function(t){if(Array.isArray(t))return new s(t);throw Error("illegal buffer")},f=function(){return o.Buffer?function(t){return(s.create=function(t){return o.Buffer.isBuffer(t)?new r(t):c(t)})(t)}:c};function p(){var t=new i(0,0),e=0;if(!(this.len-this.pos>4)){for(;e<3;++e){if(this.pos>=this.len)throw u(this);if(t.lo=(t.lo|(127&this.buf[this.pos])<<7*e)>>>0,this.buf[this.pos++]<128)return t}return t.lo=(t.lo|(127&this.buf[this.pos++])<<7*e)>>>0,t}for(;e<4;++e)if(t.lo=(t.lo|(127&this.buf[this.pos])<<7*e)>>>0,this.buf[this.pos++]<128)return t;if(t.lo=(t.lo|(127&this.buf[this.pos])<<28)>>>0,t.hi=(t.hi|(127&this.buf[this.pos])>>4)>>>0,this.buf[this.pos++]<128)return t;if(e=0,this.len-this.pos>4){for(;e<5;++e)if(t.hi=(t.hi|(127&this.buf[this.pos])<<7*e+3)>>>0,this.buf[this.pos++]<128)return t}else for(;e<5;++e){if(this.pos>=this.len)throw u(this);if(t.hi=(t.hi|(127&this.buf[this.pos])<<7*e+3)>>>0,this.buf[this.pos++]<128)return t}throw Error("invalid varint encoding")}function h(t,e){return(t[e-4]|t[e-3]<<8|t[e-2]<<16|t[e-1]<<24)>>>0}function d(){if(this.pos+8>this.len)throw u(this,8);return new i(h(this.buf,this.pos+=4),h(this.buf,this.pos+=4))}s.create=f(),s.prototype._slice=o.Array.prototype.subarray||o.Array.prototype.slice,s.prototype.uint32=(l=4294967295,function(){if(l=(127&this.buf[this.pos])>>>0,this.buf[this.pos++]<128)return l;if(l=(l|(127&this.buf[this.pos])<<7)>>>0,this.buf[this.pos++]<128)return l;if(l=(l|(127&this.buf[this.pos])<<14)>>>0,this.buf[this.pos++]<128)return l;if(l=(l|(127&this.buf[this.pos])<<21)>>>0,this.buf[this.pos++]<128)return l;if(l=(l|(15&this.buf[this.pos])<<28)>>>0,this.buf[this.pos++]<128)return l;if((this.pos+=5)>this.len)throw this.pos=this.len,u(this,10);return l}),s.prototype.int32=function(){return 0|this.uint32()},s.prototype.sint32=function(){var t=this.uint32();return t>>>1^-(1&t)|0},s.prototype.bool=function(){return 0!==this.uint32()},s.prototype.fixed32=function(){if(this.pos+4>this.len)throw u(this,4);return h(this.buf,this.pos+=4)},s.prototype.sfixed32=function(){if(this.pos+4>this.len)throw u(this,4);return 0|h(this.buf,this.pos+=4)},s.prototype.float=function(){if(this.pos+4>this.len)throw u(this,4);var t=o.float.readFloatLE(this.buf,this.pos);return this.pos+=4,t},s.prototype.double=function(){if(this.pos+8>this.len)throw u(this,4);var t=o.float.readDoubleLE(this.buf,this.pos);return this.pos+=8,t},s.prototype.bytes=function(){var t=this.uint32(),e=this.pos,n=this.pos+t;if(n>this.len)throw u(this,t);return this.pos+=t,Array.isArray(this.buf)?this.buf.slice(e,n):e===n?new 
this.buf.constructor(0):this._slice.call(this.buf,e,n)},s.prototype.string=function(){var t=this.bytes();return a.read(t,0,t.length)},s.prototype.skip=function(t){if("number"==typeof t){if(this.pos+t>this.len)throw u(this,t);this.pos+=t}else do{if(this.pos>=this.len)throw u(this)}while(128&this.buf[this.pos++]);return this},s.prototype.skipType=function(t){switch(t){case 0:this.skip();break;case 1:this.skip(8);break;case 2:this.skip(this.uint32());break;case 3:for(;4!=(t=7&this.uint32());)this.skipType(t);break;case 5:this.skip(4);break;default:throw Error("invalid wire type "+t+" at offset "+this.pos)}return this},s._configure=function(t){r=t,s.create=f(),r._configure();var e=o.Long?"toLong":"toNumber";o.merge(s.prototype,{int64:function(){return p.call(this)[e](!1)},uint64:function(){return p.call(this)[e](!0)},sint64:function(){return p.call(this).zzDecode()[e](!1)},fixed64:function(){return d.call(this)[e](!0)},sfixed64:function(){return d.call(this)[e](!1)}})}},function(t,e,n){"use strict";var r=this&&this.__values||function(t){var e="function"==typeof Symbol&&Symbol.iterator,n=e&&t[e],r=0;if(n)return n.call(t);if(t&&"number"==typeof t.length)return{next:function(){return t&&r>=t.length&&(t=void 0),{value:t&&t[r++],done:!t}}};throw new TypeError(e?"Object is not iterable.":"Symbol.iterator is not defined.")};Object.defineProperty(e,"__esModule",{value:!0}),e.Concat=void 0;var o=function(){function t(){}return t.prototype.initialize=function(t){this.axis=t.getInt("axis")},t.prototype.checkInputs=function(t){return!(!t||t.length<1)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){var e,n,o=t[0].type,i=t[0].dims.length;if("string"===o)return!1;try{for(var a=r(t),u=a.next();!u.done;u=a.next()){var s=u.value;if(s.type!==o)return!1;if(s.dims.length!==i)return!1}}catch(t){e={error:t}}finally{try{u&&!u.done&&(n=a.return)&&n.call(a)}finally{if(e)throw e.error}}return!0},t}();e.Concat=o},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.Dropout=void 0;var r=function(){function t(){}return t.prototype.initialize=function(t){this.ratio=t.getFloat("ratio",.5),this.testMode=!0},t.prototype.checkInputs=function(t){return!(!t||1!==t.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){return"float32"===t[0].type||"float64"===t[0].type},t}();e.Dropout=r},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.Flatten=void 0;var r=function(){function t(){}return t.prototype.initialize=function(t){this.axis=t.getInt("axis",1)},t.prototype.checkInputs=function(t){if(!t||1!==t.length)return!1;var e=t[0].dims.length;return 0!==e&&(!(this.axis<-e||this.axis>e)&&this.checkInputTypes(t))},t.prototype.checkInputTypes=function(t){return"string"!==t[0].type},t}();e.Flatten=r},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.Gather=void 0;var r=n(7),o=function(){function t(){}return t.prototype.initialize=function(t){this.axis=t.getInt("axis",0)},t.prototype.checkInputs=function(t){if(!t||2!==t.length)return!1;var e=t[0].dims.length;return!(e<1)&&(!(this.axis<-e||this.axis>e-1)&&this.checkInputTypes(t))},t.prototype.checkInputTypes=function(t){return-1!==r.NUMBER_TYPES.indexOf(t[0].type)&&("int32"===t[1].type||"int16"===t[1].type)},t}();e.Gather=o},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.ImageScaler=void 0;var r=function(){function t(){}return 
t.prototype.initialize=function(t){this.scale=t.getFloat("scale"),this.bias=t.getFloats("bias")},t.prototype.checkInputs=function(t){return!(!t||1!==t.length)&&(4===t[0].dims.length&&this.checkInputTypes(t))},t.prototype.checkInputTypes=function(t){return"float32"===t[0].type||"float64"===t[0].type},t}();e.ImageScaler=r},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.Pad=void 0;var r=function(){function t(){}return t.prototype.initialize=function(t){this.mode=t.getString("mode","constant"),this.value=t.getFloat("value",0),this.pads=t.getInts("pads")},t.prototype.checkInputs=function(t){return!(!t||1!==t.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){return"float32"===t[0].type||"float64"===t[0].type},t}();e.Pad=r},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.ReduceBase=void 0;var r=n(7),o=function(){function t(){}return t.prototype.initialize=function(t){this.axes=t.getInts("axes",[]),this.keepDims=1===t.getInt("keepdims",1)},t.prototype.checkInputs=function(t){return!(!t||1!==t.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){return-1!==r.NUMBER_TYPES.indexOf(t[0].type)},t}();e.ReduceBase=o},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.Reshape=void 0;var r=function(){function t(){}return t.prototype.initialize=function(t){},t.prototype.checkInputs=function(t){return!(!t||2!==t.length||1!==t[1].dims.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){return("float32"===t[0].type||"float64"===t[0].type)&&"int32"===t[1].type},t}();e.Reshape=r},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.SliceV10=e.Slice=void 0;var r=function(){function t(){}return t.prototype.initialize=function(t){this.starts=t.getInts("starts"),this.ends=t.getInts("ends"),this.axes=t.getInts("axes",[])},t.prototype.checkInputs=function(t){return!(!t||1!==t.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){return"float32"===t[0].type||"float64"===t[0].type},t}();e.Slice=r;var o=function(){function t(){}return t.prototype.initialize=function(t){},t.prototype.checkInputs=function(t){return!(!t||t.length<3||t.length>5)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){return"int32"===t[1].type&&1===t[1].dims.length&&("int32"===t[2].type&&1===t[2].dims.length&&((!(t.length>=4)||"int32"===t[3].type&&1===t[3].dims.length)&&(!(t.length>=5)||"int32"===t[4].type&&1===t[4].dims.length)))},t}();e.SliceV10=o},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.Squeeze=void 0;var r=function(){function t(){}return t.prototype.initialize=function(t){this.axes=t.getInts("axes")},t.prototype.checkInputs=function(t){return!(!t||1!==t.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){return"string"!==t[0].type},t}();e.Squeeze=r},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.Tile=void 0;var r=n(7),o=function(){function t(){}return t.prototype.initialize=function(t){},t.prototype.checkInputs=function(t){return!(!t||2!==t.length)&&(1===t[1].dims.length&&(t[1].dims[0]===t[0].dims.length&&this.checkInputTypes(t)))},t.prototype.checkInputTypes=function(t){return-1!==r.NUMBER_TYPES.indexOf(t[0].type)&&("int32"===t[1].type||"int16"===t[1].type)},t}();e.Tile=o},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.Transpose=void 0;var r=function(){function t(){}return 
t.prototype.initialize=function(t){this.perm=t.getInts("perm",[])},t.prototype.checkInputs=function(t){return!(!t||1!==t.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){return"float32"===t[0].type||"float64"===t[0].type},t}();e.Transpose=r},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.tanh=e.tan=e.sqrt=e.sinh=e.sin=e.sign=e.sigmoid=e.relu=e.reciprocal=e.not=e.neg=e.log=e.leakyRelu=e.leakyReluInitializer=e.isNan=e.floor=e.exp=e.elu=e.eluInitializer=e.cosh=e.cos=e.clip=e.clipInitializer=e.ceil=e.atanh=e.atan=e.asinh=e.asin=e.acosh=e.acos=e.abs=e.unaryOp=e.CpuUnaryOp=void 0;var i=n(43),a=n(1),u=function(t){function e(e,n,r,o){var i=t.call(this,e,o)||this;return i.func=n,i.attributesInitializer=r,i}return o(e,t),e.prototype.initialize=function(t){this.attributesInitializer&&(this.attributes=this.attributesInitializer(t))},e.prototype.run=function(t,e){return[s(e[0],this.func,this.attributes,this.resultType)]},e}(i.UnaryOp);function s(t,e,n,r){var o=new a.Tensor(t.dims,r||t.type);return e(t.data,o.data,n),o}e.CpuUnaryOp=u,e.unaryOp=s,e.abs=function(t,e){for(var n=0;no?o:a}},e.cos=function(t,e){for(var n=0;n=0?i:r*(Math.exp(i)-1)}},e.exp=function(t,e){for(var n=0;n=0?i:r*i}},e.log=function(t,e){for(var n=0;n0?1:t[n]<0?-1:0},e.sin=function(t,e){for(var n=0;n-1&&r<=c)for(;++n3?"WebKit":/\bOpera\b/.test(B)&&(/\bOPR\b/.test(e)?"Blink":"Presto"))||/\b(?:Midori|Nook|Safari)\b/i.test(e)&&!/^(?:Trident|EdgeHTML)$/.test(N)&&"WebKit"||!N&&/\bMSIE\b/i.test(e)&&("Mac OS"==G?"Tasman":"Trident")||"WebKit"==N&&/\bPlayStation\b(?! Vita\b)/i.test(B)&&"NetFront")&&(N=[u]),"IE"==B&&(u=(/; *(?:XBLWP|ZuneWP)(\d+)/i.exec(e)||0)[1])?(B+=" Mobile",G="Windows Phone "+(/\+$/.test(u)?u:u+".x"),j.unshift("desktop mode")):/\bWPDesktop\b/i.test(e)?(B="IE Mobile",G="Windows Phone 8.x",j.unshift("desktop mode"),R||(R=(/\brv:([\d.]+)/.exec(e)||0)[1])):"IE"!=B&&"Trident"==N&&(u=/\brv:([\d.]+)/.exec(e))&&(B&&j.push("identifying as "+B+(R?" 
"+R:"")),B="IE",R=u[1]),C){if(c="global",p=null!=(l=n)?typeof l[c]:"number",/^(?:boolean|number|string|undefined)$/.test(p)||"object"==p&&!l[c])v(u=n.runtime)==y?(B="Adobe AIR",G=u.flash.system.Capabilities.os):v(u=n.phantom)==O?(B="PhantomJS",R=(u=u.version||null)&&u.major+"."+u.minor+"."+u.patch):"number"==typeof E.documentMode&&(u=/\bTrident\/(\d+)/i.exec(e))?(R=[R,E.documentMode],(u=+u[1]+4)!=R[1]&&(j.push("IE "+R[1]+" mode"),N&&(N[1]=""),R[1]=u),R="IE"==B?String(R[1].toFixed(1)):R[0]):"number"==typeof E.documentMode&&/^(?:Chrome|Firefox)\b/.test(B)&&(j.push("masking as "+B+" "+R),B="IE",R="11.0",N=["Trident"],G="Windows");else if(S&&(M=(u=S.lang.System).getProperty("os.arch"),G=G||u.getProperty("os.name")+" "+u.getProperty("os.version")),P){try{R=n.require("ringo/engine").version.join("."),B="RingoJS"}catch(t){(u=n.system)&&u.global.system==n.system&&(B="Narwhal",G||(G=u[0].os||null))}B||(B="Rhino")}else"object"==typeof n.process&&!n.process.browser&&(u=n.process)&&("object"==typeof u.versions&&("string"==typeof u.versions.electron?(j.push("Node "+u.versions.node),B="Electron",R=u.versions.electron):"string"==typeof u.versions.nw&&(j.push("Chromium "+R,"Node "+u.versions.node),B="NW.js",R=u.versions.nw)),B||(B="Node.js",M=u.arch,G=u.platform,R=(R=/[\d.]+/.exec(u.version))?R[0]:null));G=G&&g(G)}if(R&&(u=/(?:[ab]|dp|pre|[ab]\d+pre)(?:\d+\+?)?$/i.exec(R)||/(?:alpha|beta)(?: ?\d)?/i.exec(e+";"+(C&&o.appMinorVersion))||/\bMinefield\b/i.test(e)&&"a")&&(k=/b/i.test(u)?"beta":"alpha",R=R.replace(RegExp(u+"\\+?$"),"")+("beta"==k?D:A)+(/\d+\+?/.exec(u)||"")),"Fennec"==B||"Firefox"==B&&/\b(?:Android|Firefox OS|KaiOS)\b/.test(G))B="Firefox Mobile";else if("Maxthon"==B&&R)R=R.replace(/\.[\d.]+/,".x");else if(/\bXbox\b/i.test(F))"Xbox 360"==F&&(G=null),"Xbox 360"==F&&/\bIEMobile\b/.test(e)&&j.unshift("mobile mode");else if(!/^(?:Chrome|IE|Opera)$/.test(B)&&(!B||F||/Browser|Mobi/.test(B))||"Windows CE"!=G&&!/Mobi/i.test(e))if("IE"==B&&C)try{null===n.external&&j.unshift("platform preview")}catch(t){j.unshift("embedded")}else(/\bBlackBerry\b/.test(F)||/\bBB10\b/.test(e))&&(u=(RegExp(F.replace(/ +/g," *")+"/([.\\d]+)","i").exec(e)||0)[1]||R)?(G=((u=[u,/BB10/.test(e)])[1]?(F=null,U="BlackBerry"):"Device Software")+" "+u[0],R=null):this!=m&&"Wii"!=F&&(C&&I||/Opera/.test(B)&&/\b(?:MSIE|Firefox)\b/i.test(e)||"Firefox"==B&&/\bOS X (?:\d+\.){2,}/.test(G)||"IE"==B&&(G&&!/^Win/.test(G)&&R>5.5||/\bWindows XP\b/.test(G)&&R>8||8==R&&!/\bTrident\b/.test(e)))&&!f.test(u=t.call(m,e.replace(f,"")+";"))&&u.name&&(u="ing as "+u.name+((u=u.version)?" 
"+u:""),f.test(B)?(/\bIE\b/.test(u)&&"Mac OS"==G&&(G=null),u="identify"+u):(u="mask"+u,B=L?g(L.replace(/([a-z])([A-Z])/g,"$1 $2")):"Opera",/\bIE\b/.test(u)&&(G=null),C||(R=null)),N=["Presto"],j.push(u));else B+=" Mobile";(u=(/\bAppleWebKit\/([\d.]+\+?)/i.exec(e)||0)[1])&&(u=[parseFloat(u.replace(/\.(\d)$/,".0$1")),u],"Safari"==B&&"+"==u[1].slice(-1)?(B="WebKit Nightly",k="alpha",R=u[1].slice(0,-1)):R!=u[1]&&R!=(u[2]=(/\bSafari\/([\d.]+\+?)/i.exec(e)||0)[1])||(R=null),u[1]=(/\b(?:Headless)?Chrome\/([\d.]+)/i.exec(e)||0)[1],537.36==u[0]&&537.36==u[2]&&parseFloat(u[1])>=28&&"WebKit"==N&&(N=["Blink"]),C&&(h||u[1])?(N&&(N[1]="like Chrome"),u=u[1]||((u=u[0])<530?1:u<532?2:u<532.05?3:u<533?4:u<534.03?5:u<534.07?6:u<534.1?7:u<534.13?8:u<534.16?9:u<534.24?10:u<534.3?11:u<535.01?12:u<535.02?"13+":u<535.07?15:u<535.11?16:u<535.19?17:u<536.05?18:u<536.1?19:u<537.01?20:u<537.11?"21+":u<537.13?23:u<537.18?24:u<537.24?25:u<537.36?26:"Blink"!=N?"27":"28")):(N&&(N[1]="like Safari"),u=(u=u[0])<400?1:u<500?2:u<526?3:u<533?4:u<534?"4+":u<535?5:u<537?6:u<538?7:u<601?8:u<602?9:u<604?10:u<606?11:u<608?12:"12"),N&&(N[1]+=" "+(u+="number"==typeof u?".x":/[.+]/.test(u)?"":"+")),"Safari"==B&&(!R||parseInt(R)>45)?R=u:"Chrome"==B&&/\bHeadlessChrome/i.test(e)&&j.unshift("headless")),"Opera"==B&&(u=/\bzbov|zvav$/.exec(G))?(B+=" ",j.unshift("desktop mode"),"zvav"==u?(B+="Mini",R=null):B+="Mobile",G=G.replace(RegExp(" *"+u+"$"),"")):"Safari"==B&&/\bChrome\b/.exec(N&&N[1])?(j.unshift("desktop mode"),B="Chrome Mobile",R=null,/\bOS X\b/.test(G)?(U="Apple",G="iOS 4.3+"):G=null):/\bSRWare Iron\b/.test(B)&&!R&&(R=W("Chrome")),R&&0==R.indexOf(u=/[\d.]+$/.exec(G))&&e.indexOf("/"+u+"-")>-1&&(G=w(G.replace(u,""))),G&&-1!=G.indexOf(B)&&!RegExp(B+" OS").test(G)&&(G=G.replace(RegExp(" *"+b(B)+" *"),"")),N&&!/\b(?:Avant|Nook)\b/.test(B)&&(/Browser|Lunascape|Maxthon/.test(B)||"Safari"!=B&&/^iOS/.test(G)&&/\bSafari\b/.test(N[1])||/^(?:Adobe|Arora|Breach|Midori|Opera|Phantom|Rekonq|Rock|Samsung Internet|Sleipnir|SRWare Iron|Vivaldi|Web)/.test(B)&&N[1])&&(u=N[N.length-1])&&j.push(u),j.length&&(j=["("+j.join("; ")+")"]),U&&F&&F.indexOf(U)<0&&j.push("on "+U),F&&j.push((/^on /.test(j[j.length-1])?"":"on ")+F),G&&(u=/ ([\d.+]+)$/.exec(G),s=u&&"/"==G.charAt(G.length-u[0].length-1),G={architecture:32,family:u&&!s?G.replace(u[0],""):G,version:u?u[1]:null,toString:function(){var t=this.version;return this.family+(t&&!s?" "+t:"")+(64==this.architecture?" 
64-bit":"")}}),(u=/\b(?:AMD|IA|Win|WOW|x86_|x)64\b/i.exec(M))&&!/\bi686\b/i.test(M)?(G&&(G.architecture=64,G.family=G.family.replace(RegExp(" *"+u),"")),B&&(/\bWOW64\b/i.test(e)||C&&/\w(?:86|32)$/.test(o.cpuClass||o.platform)&&!/\bWin64; x64\b/i.test(e))&&j.unshift("32-bit")):G&&/^OS X/.test(G.family)&&"Chrome"==B&&parseFloat(R)>=39&&(G.architecture=64),e||(e=null);var V={};return V.description=e,V.layout=N&&N[0],V.manufacturer=U,V.name=B,V.prerelease=k,V.product=F,V.ua=e,V.version=B&&R,V.os=G||{architecture:null,family:null,version:null,toString:function(){return"null"}},V.parse=t,V.toString=function(){return this.description||""},V.version&&j.unshift(R),V.name&&j.unshift(B),G&&B&&(G!=String(G).split(" ")[0]||G!=B.split(" ")[0]&&!F)&&j.push(F?"("+G+")":"on "+G),j.length&&(V.description=j.join(" ")),V}();a.platform=x,void 0===(o=function(){return x}.call(e,n,e,t))||(t.exports=o)}).call(this)}).call(this,n(102)(t),n(8))},function(t,e,n){(function(t){function n(t,e){for(var n=0,r=t.length-1;r>=0;r--){var o=t[r];"."===o?t.splice(r,1):".."===o?(t.splice(r,1),n++):n&&(t.splice(r,1),n--)}if(e)for(;n--;n)t.unshift("..");return t}function r(t,e){if(t.filter)return t.filter(e);for(var n=[],r=0;r=-1&&!o;i--){var a=i>=0?arguments[i]:t.cwd();if("string"!=typeof a)throw new TypeError("Arguments to path.resolve must be strings");a&&(e=a+"/"+e,o="/"===a.charAt(0))}return(o?"/":"")+(e=n(r(e.split("/"),(function(t){return!!t})),!o).join("/"))||"."},e.normalize=function(t){var i=e.isAbsolute(t),a="/"===o(t,-1);return(t=n(r(t.split("/"),(function(t){return!!t})),!i).join("/"))||i||(t="."),t&&a&&(t+="/"),(i?"/":"")+t},e.isAbsolute=function(t){return"/"===t.charAt(0)},e.join=function(){var t=Array.prototype.slice.call(arguments,0);return e.normalize(r(t,(function(t,e){if("string"!=typeof t)throw new TypeError("Arguments to path.join must be strings");return t})).join("/"))},e.relative=function(t,n){function r(t){for(var e=0;e=0&&""===t[n];n--);return e>n?[]:t.slice(e,n-e+1)}t=e.resolve(t).substr(1),n=e.resolve(n).substr(1);for(var o=r(t.split("/")),i=r(n.split("/")),a=Math.min(o.length,i.length),u=a,s=0;s=1;--i)if(47===(e=t.charCodeAt(i))){if(!o){r=i;break}}else o=!1;return-1===r?n?"/":".":n&&1===r?"/":t.slice(0,r)},e.basename=function(t,e){var n=function(t){"string"!=typeof t&&(t+="");var e,n=0,r=-1,o=!0;for(e=t.length-1;e>=0;--e)if(47===t.charCodeAt(e)){if(!o){n=e+1;break}}else-1===r&&(o=!1,r=e+1);return-1===r?"":t.slice(n,r)}(t);return e&&n.substr(-1*e.length)===e&&(n=n.substr(0,n.length-e.length)),n},e.extname=function(t){"string"!=typeof t&&(t+="");for(var e=-1,n=0,r=-1,o=!0,i=0,a=t.length-1;a>=0;--a){var u=t.charCodeAt(a);if(47!==u)-1===r&&(o=!1,r=a+1),46===u?-1===e?e=a:1!==i&&(i=1):-1!==e&&(i=-1);else if(!o){n=a+1;break}}return-1===e||-1===r||0===i||1===i&&e===r-1&&e===n+1?"":t.slice(e,r)};var o="b"==="ab".substr(-1)?function(t,e,n){return t.substr(e,n)}:function(t,e,n){return e<0&&(e=t.length+e),t.substr(e,n)}}).call(this,n(24))},function(t,e){},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.Clip=void 0;var r=function(){function t(){}return t.prototype.initialize=function(t){this.min=t.getFloat("min",-34028234663852886e22),this.max=t.getFloat("max",34028234663852886e22)},t.prototype.checkInputs=function(t){return!(!t||1!==t.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){return"float32"===t[0].type||"float64"===t[0].type},t}();e.Clip=r},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.getPackedShape=void 
0,e.getPackedShape=function(t){var e=t.length;return t.slice(0,e-1).concat(t[e-1]/4)}},function(t,e,n){"use strict";var r=this&&this.__createBinding||(Object.create?function(t,e,n,r){void 0===r&&(r=n),Object.defineProperty(t,r,{enumerable:!0,get:function(){return e[n]}})}:function(t,e,n,r){void 0===r&&(r=n),t[r]=e[n]}),o=this&&this.__setModuleDefault||(Object.create?function(t,e){Object.defineProperty(t,"default",{enumerable:!0,value:e})}:function(t,e){t.default=e}),i=this&&this.__importStar||function(t){if(t&&t.__esModule)return t;var e={};if(null!=t)for(var n in t)"default"!==n&&Object.hasOwnProperty.call(t,n)&&r(e,t,n);return o(e,t),e},a=this&&this.__read||function(t,e){var n="function"==typeof Symbol&&t[Symbol.iterator];if(!n)return t;var r,o,i=n.call(t),a=[];try{for(;(void 0===e||e-- >0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a},u=this&&this.__spread||function(){for(var t=[],e=0;e=e.dims[n])throw new RangeError("Input index array dims don't match the tensor dims.")}));var i=this.internalTensor.get(o);return"bool"===this.type?1===i:i},t.prototype.set=function(t,e){for(var n=this,r=[],o=2;o=n.dims[e])throw new RangeError("Input index array dims don't match the tensor dims.")})),"boolean"==typeof t?this.internalTensor.set(i,t?1:0):this.internalTensor.set(i,t)},t}();e.Tensor=c},function(t,e,n){"use strict";var r=this&&this.__values||function(t){var e="function"==typeof Symbol&&Symbol.iterator,n=e&&t[e],r=0;if(n)return n.call(t);if(t&&"number"==typeof t.length)return{next:function(){return t&&r>=t.length&&(t=void 0),{value:t&&t[r++],done:!t}}};throw new TypeError(e?"Object is not iterable.":"Symbol.iterator is not defined.")};Object.defineProperty(e,"__esModule",{value:!0}),e.validateIndices=e.matchElementType=e.toInternalTensor=e.fromInternalTensor=void 0;var o=n(1),i=n(51);e.fromInternalTensor=function(t){switch(t.type){case"bool":return new i.Tensor(new Uint8Array(t.integerData),"bool",t.dims);case"float32":return new i.Tensor(t.floatData,"float32",t.dims);case"float64":return new i.Tensor(new Float32Array(t.floatData),"float32",t.dims);case"string":return new i.Tensor(t.stringData,"string",t.dims);case"int8":return new i.Tensor(new Int32Array(t.integerData),"int32",t.dims);case"int32":return new i.Tensor(t.integerData,"int32",t.dims);default:throw new TypeError("Tensor type is not supported. 
")}},e.toInternalTensor=function(t){return new o.Tensor(t.dims,t.type,void 0,void 0,t.data)},e.matchElementType=function(t,e){switch(typeof e){case"string":if("string"!==t)throw new TypeError("The new element type doesn't match the tensor data type.");break;case"number":if("float32"!==t&&"int32"!==t)throw new TypeError("The new element type doesn't match the tensor data type.");if("float32"===t&&Number.isInteger(e))throw new TypeError("The new element type doesn't match the tensor data type.");if("int32"===t&&!Number.isInteger(e))throw new TypeError("The new element type doesn't match the tensor data type.");break;case"boolean":if("bool"!==t)throw new TypeError("The new element type doesn't match the tensor data type.");break;default:throw new TypeError("The new element type is not supported.")}},e.validateIndices=function(t){var e,n;if(t.length<0||t.length>6)throw new RangeError("Only rank 0 to 6 is supported for tensor shape.");try{for(var o=r(t),i=o.next();!i.done;i=o.next()){var a=i.value;if(!Number.isInteger(a))throw new TypeError("Invalid index: "+a+" is not an integer");if(a<0||a>2147483647)throw new TypeError("Invalid index: length "+a+" is not allowed")}}catch(t){e={error:t}}finally{try{i&&!i.done&&(n=o.return)&&n.call(o)}finally{if(e)throw e.error}}}},function(t,e,n){"use strict";var r=this&&this.__createBinding||(Object.create?function(t,e,n,r){void 0===r&&(r=n),Object.defineProperty(t,r,{enumerable:!0,get:function(){return e[n]}})}:function(t,e,n,r){void 0===r&&(r=n),t[r]=e[n]}),o=this&&this.__exportStar||function(t,e){for(var n in t)"default"===n||e.hasOwnProperty(n)||r(e,t,n)};Object.defineProperty(e,"__esModule",{value:!0}),e.ENV=e.backend=void 0;var i=n(54),a=n(101),u=n(119),s=n(168);o(n(169),e),o(n(170),e),o(n(171),e),o(n(172),e),e.backend={cpu:new i.CpuBackend,wasm:new a.WasmBackend,webgl:new u.WebGLBackend},e.ENV=s.envImpl},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.CpuBackend=void 0;var r=n(55),o=function(){function t(){}return t.prototype.initialize=function(){return!0},t.prototype.createSessionHandler=function(t){return new r.CpuSessionHandler(this,t)},t.prototype.dispose=function(){},t}();e.CpuBackend=o},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.CpuSessionHandler=void 0;var r=n(11),o=n(56),i=n(27),a=function(){function t(t,e){this.backend=t,this.context=e}return t.prototype.createInferenceHandler=function(){return new o.CpuInferenceHandler(this,this.context.profiler)},t.prototype.dispose=function(){},t.prototype.resolve=function(t,e){var n=r.resolveOperator(t,e,i.CPU_OP_RESOLVE_RULES);return n.initialize(t.attributes),n},t}();e.CpuSessionHandler=a},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.CpuInferenceHandler=void 0;var r=function(){function t(t,e){this.session=t,this.profiler=e}return t.prototype.dispose=function(){},t}();e.CpuInferenceHandler=r},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.argMax=e.CpuArgMax=void 0;var i=n(58),a=n(1),u=n(0),s=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return 
o(e,t),e.prototype.run=function(t,e){return[l(e[0],this.axis,this.keepDims)]},e}(i.ArgMax);function l(t,e,n){var r=t.dims?t.dims.length:1;e=u.ShapeUtil.normalizeAxis(e,r);for(var o=u.ReduceUtil.calcReduceShape(t.dims,[e],!0),i=t.data,s=new Int32Array(u.ShapeUtil.size(o)),l=u.ShapeUtil.sizeFromDimension(t.dims,e+1),c=u.ShapeUtil.computeStrides(o),f=u.ShapeUtil.computeStrides(t.dims),p=new Array(t.dims.length),h=0;hg&&(g=b,m=v)}s[h]=m}return new a.Tensor(n?o:u.ReduceUtil.calcReduceShape(t.dims,[e],n),"int32",void 0,void 0,s)}e.CpuArgMax=s,e.argMax=l},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.ArgMax=void 0;var r=n(7),o=function(){function t(){}return t.prototype.initialize=function(t){this.axis=t.getInt("axis",0),this.keepDims=1===t.getInt("keepdims",1)},t.prototype.checkInputs=function(t){return!(!t||1!==t.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){return-1!==r.NUMBER_TYPES.indexOf(t[0].type)},t}();e.ArgMax=o},function(t,e,n){"use strict";e.byteLength=function(t){var e=l(t),n=e[0],r=e[1];return 3*(n+r)/4-r},e.toByteArray=function(t){var e,n,r=l(t),a=r[0],u=r[1],s=new i(function(t,e,n){return 3*(e+n)/4-n}(0,a,u)),c=0,f=u>0?a-4:a;for(n=0;n>16&255,s[c++]=e>>8&255,s[c++]=255&e;2===u&&(e=o[t.charCodeAt(n)]<<2|o[t.charCodeAt(n+1)]>>4,s[c++]=255&e);1===u&&(e=o[t.charCodeAt(n)]<<10|o[t.charCodeAt(n+1)]<<4|o[t.charCodeAt(n+2)]>>2,s[c++]=e>>8&255,s[c++]=255&e);return s},e.fromByteArray=function(t){for(var e,n=t.length,o=n%3,i=[],a=0,u=n-o;au?u:a+16383));1===o?(e=t[n-1],i.push(r[e>>2]+r[e<<4&63]+"==")):2===o&&(e=(t[n-2]<<8)+t[n-1],i.push(r[e>>10]+r[e>>4&63]+r[e<<2&63]+"="));return i.join("")};for(var r=[],o=[],i="undefined"!=typeof Uint8Array?Uint8Array:Array,a="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",u=0,s=a.length;u0)throw new Error("Invalid string. 
Length must be a multiple of 4");var n=t.indexOf("=");return-1===n&&(n=e),[n,n===e?0:4-n%4]}function c(t,e,n){for(var o,i,a=[],u=e;u>18&63]+r[i>>12&63]+r[i>>6&63]+r[63&i]);return a.join("")}o["-".charCodeAt(0)]=62,o["_".charCodeAt(0)]=63},function(t,e){e.read=function(t,e,n,r,o){var i,a,u=8*o-r-1,s=(1<>1,c=-7,f=n?o-1:0,p=n?-1:1,h=t[e+f];for(f+=p,i=h&(1<<-c)-1,h>>=-c,c+=u;c>0;i=256*i+t[e+f],f+=p,c-=8);for(a=i&(1<<-c)-1,i>>=-c,c+=r;c>0;a=256*a+t[e+f],f+=p,c-=8);if(0===i)i=1-l;else{if(i===s)return a?NaN:1/0*(h?-1:1);a+=Math.pow(2,r),i-=l}return(h?-1:1)*a*Math.pow(2,i-r)},e.write=function(t,e,n,r,o,i){var a,u,s,l=8*i-o-1,c=(1<>1,p=23===o?Math.pow(2,-24)-Math.pow(2,-77):0,h=r?0:i-1,d=r?1:-1,y=e<0||0===e&&1/e<0?1:0;for(e=Math.abs(e),isNaN(e)||e===1/0?(u=isNaN(e)?1:0,a=c):(a=Math.floor(Math.log(e)/Math.LN2),e*(s=Math.pow(2,-a))<1&&(a--,s*=2),(e+=a+f>=1?p/s:p*Math.pow(2,1-f))*s>=2&&(a++,s/=2),a+f>=c?(u=0,a=c):a+f>=1?(u=(e*s-1)*Math.pow(2,o),a+=f):(u=e*Math.pow(2,f-1)*Math.pow(2,o),a=0));o>=8;t[n+h]=255&u,h+=d,u/=256,o-=8);for(a=a<0;t[n+h]=255&a,h+=d,a/=256,l-=8);t[n+h-d]|=128*y}},function(t,e){var n={}.toString;t.exports=Array.isArray||function(t){return"[object Array]"==n.call(t)}},function(t,e,n){"use strict";t.exports=n(63)},function(t,e,n){"use strict";var r=e;function o(){r.util._configure(),r.Writer._configure(r.BufferWriter),r.Reader._configure(r.BufferReader)}r.build="minimal",r.Writer=n(28),r.BufferWriter=n(72),r.Reader=n(29),r.BufferReader=n(73),r.util=n(6),r.rpc=n(74),r.roots=n(76),r.configure=o,o()},function(t,e,n){"use strict";t.exports=function(t,e){var n=new Array(arguments.length-1),r=0,o=2,i=!0;for(;o1&&"="===t.charAt(e);)++n;return Math.ceil(3*t.length)/4-n};for(var o=new Array(64),i=new Array(123),a=0;a<64;)i[o[a]=a<26?a+65:a<52?a+71:a<62?a-4:a-59|43]=a++;r.encode=function(t,e,n){for(var r,i=null,a=[],u=0,s=0;e>2],r=(3&l)<<4,s=1;break;case 1:a[u++]=o[r|l>>4],r=(15&l)<<2,s=2;break;case 2:a[u++]=o[r|l>>6],a[u++]=o[63&l],s=0}u>8191&&((i||(i=[])).push(String.fromCharCode.apply(String,a)),u=0)}return s&&(a[u++]=o[r],a[u++]=61,1===s&&(a[u++]=61)),i?(u&&i.push(String.fromCharCode.apply(String,a.slice(0,u))),i.join("")):String.fromCharCode.apply(String,a.slice(0,u))};r.decode=function(t,e,n){for(var r,o=n,a=0,u=0;u1)break;if(void 0===(s=i[s]))throw Error("invalid encoding");switch(a){case 0:r=s,a=1;break;case 1:e[n++]=r<<2|(48&s)>>4,r=s,a=2;break;case 2:e[n++]=(15&r)<<4|(60&s)>>2,r=s,a=3;break;case 3:e[n++]=(3&r)<<6|s,a=0}}if(1===a)throw Error("invalid encoding");return n-o},r.test=function(t){return/^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$/.test(t)}},function(t,e,n){"use strict";function r(){this._listeners={}}t.exports=r,r.prototype.on=function(t,e,n){return(this._listeners[t]||(this._listeners[t]=[])).push({fn:e,ctx:n||this}),this},r.prototype.off=function(t,e){if(void 0===t)this._listeners={};else if(void 0===e)this._listeners[t]=[];else for(var n=this._listeners[t],r=0;r0?0:2147483648,n,r);else if(isNaN(e))t(2143289344,n,r);else if(e>34028234663852886e22)t((o<<31|2139095040)>>>0,n,r);else if(e<11754943508222875e-54)t((o<<31|Math.round(e/1401298464324817e-60))>>>0,n,r);else{var i=Math.floor(Math.log(e)/Math.LN2);t((o<<31|i+127<<23|8388607&Math.round(e*Math.pow(2,-i)*8388608))>>>0,n,r)}}function n(t,e,n){var r=t(e,n),o=2*(r>>31)+1,i=r>>>23&255,a=8388607&r;return 
255===i?a?NaN:o*(1/0):0===i?1401298464324817e-60*o*a:o*Math.pow(2,i-150)*(a+8388608)}t.writeFloatLE=e.bind(null,o),t.writeFloatBE=e.bind(null,i),t.readFloatLE=n.bind(null,a),t.readFloatBE=n.bind(null,u)}(),"undefined"!=typeof Float64Array?function(){var e=new Float64Array([-0]),n=new Uint8Array(e.buffer),r=128===n[7];function o(t,r,o){e[0]=t,r[o]=n[0],r[o+1]=n[1],r[o+2]=n[2],r[o+3]=n[3],r[o+4]=n[4],r[o+5]=n[5],r[o+6]=n[6],r[o+7]=n[7]}function i(t,r,o){e[0]=t,r[o]=n[7],r[o+1]=n[6],r[o+2]=n[5],r[o+3]=n[4],r[o+4]=n[3],r[o+5]=n[2],r[o+6]=n[1],r[o+7]=n[0]}function a(t,r){return n[0]=t[r],n[1]=t[r+1],n[2]=t[r+2],n[3]=t[r+3],n[4]=t[r+4],n[5]=t[r+5],n[6]=t[r+6],n[7]=t[r+7],e[0]}function u(t,r){return n[7]=t[r],n[6]=t[r+1],n[5]=t[r+2],n[4]=t[r+3],n[3]=t[r+4],n[2]=t[r+5],n[1]=t[r+6],n[0]=t[r+7],e[0]}t.writeDoubleLE=r?o:i,t.writeDoubleBE=r?i:o,t.readDoubleLE=r?a:u,t.readDoubleBE=r?u:a}():function(){function e(t,e,n,r,o,i){var a=r<0?1:0;if(a&&(r=-r),0===r)t(0,o,i+e),t(1/r>0?0:2147483648,o,i+n);else if(isNaN(r))t(0,o,i+e),t(2146959360,o,i+n);else if(r>17976931348623157e292)t(0,o,i+e),t((a<<31|2146435072)>>>0,o,i+n);else{var u;if(r<22250738585072014e-324)t((u=r/5e-324)>>>0,o,i+e),t((a<<31|u/4294967296)>>>0,o,i+n);else{var s=Math.floor(Math.log(r)/Math.LN2);1024===s&&(s=1023),t(4503599627370496*(u=r*Math.pow(2,-s))>>>0,o,i+e),t((a<<31|s+1023<<20|1048576*u&1048575)>>>0,o,i+n)}}}function n(t,e,n,r,o){var i=t(r,o+e),a=t(r,o+n),u=2*(a>>31)+1,s=a>>>20&2047,l=4294967296*(1048575&a)+i;return 2047===s?l?NaN:u*(1/0):0===s?5e-324*u*l:u*Math.pow(2,s-1075)*(l+4503599627370496)}t.writeDoubleLE=e.bind(null,o,0,4),t.writeDoubleBE=e.bind(null,i,4,0),t.readDoubleLE=n.bind(null,a,0,4),t.readDoubleBE=n.bind(null,u,4,0)}(),t}function o(t,e,n){e[n]=255&t,e[n+1]=t>>>8&255,e[n+2]=t>>>16&255,e[n+3]=t>>>24}function i(t,e,n){e[n]=t>>>24,e[n+1]=t>>>16&255,e[n+2]=t>>>8&255,e[n+3]=255&t}function a(t,e){return(t[e]|t[e+1]<<8|t[e+2]<<16|t[e+3]<<24)>>>0}function u(t,e){return(t[e]<<24|t[e+1]<<16|t[e+2]<<8|t[e+3])>>>0}t.exports=r(r)},function(module,exports,__webpack_require__){"use strict";function inquire(moduleName){try{var mod=eval("quire".replace(/^/,"re"))(moduleName);if(mod&&(mod.length||Object.keys(mod).length))return mod}catch(t){}return null}module.exports=inquire},function(t,e,n){"use strict";var r=e;r.length=function(t){for(var e=0,n=0,r=0;r191&&r<224?i[a++]=(31&r)<<6|63&t[e++]:r>239&&r<365?(r=((7&r)<<18|(63&t[e++])<<12|(63&t[e++])<<6|63&t[e++])-65536,i[a++]=55296+(r>>10),i[a++]=56320+(1023&r)):i[a++]=(15&r)<<12|(63&t[e++])<<6|63&t[e++],a>8191&&((o||(o=[])).push(String.fromCharCode.apply(String,i)),a=0);return o?(a&&o.push(String.fromCharCode.apply(String,i.slice(0,a))),o.join("")):String.fromCharCode.apply(String,i.slice(0,a))},r.write=function(t,e,n){for(var r,o,i=n,a=0;a>6|192,e[n++]=63&r|128):55296==(64512&r)&&56320==(64512&(o=t.charCodeAt(a+1)))?(r=65536+((1023&r)<<10)+(1023&o),++a,e[n++]=r>>18|240,e[n++]=r>>12&63|128,e[n++]=r>>6&63|128,e[n++]=63&r|128):(e[n++]=r>>12|224,e[n++]=r>>6&63|128,e[n++]=63&r|128);return n-i}},function(t,e,n){"use strict";t.exports=function(t,e,n){var r=n||8192,o=r>>>1,i=null,a=r;return function(n){if(n<1||n>o)return t(n);a+n>r&&(i=t(r),a=0);var u=e.call(i,a,a+=n);return 7&a&&(a=1+(7|a)),u}}},function(t,e,n){"use strict";t.exports=o;var r=n(6);function o(t,e){this.lo=t>>>0,this.hi=e>>>0}var i=o.zero=new o(0,0);i.toNumber=function(){return 0},i.zzEncode=i.zzDecode=function(){return this},i.length=function(){return 1};var a=o.zeroHash="\0\0\0\0\0\0\0\0";o.fromNumber=function(t){if(0===t)return 
i;var e=t<0;e&&(t=-t);var n=t>>>0,r=(t-n)/4294967296>>>0;return e&&(r=~r>>>0,n=~n>>>0,++n>4294967295&&(n=0,++r>4294967295&&(r=0))),new o(n,r)},o.from=function(t){if("number"==typeof t)return o.fromNumber(t);if(r.isString(t)){if(!r.Long)return o.fromNumber(parseInt(t,10));t=r.Long.fromString(t)}return t.low||t.high?new o(t.low>>>0,t.high>>>0):i},o.prototype.toNumber=function(t){if(!t&&this.hi>>>31){var e=1+~this.lo>>>0,n=~this.hi>>>0;return e||(n=n+1>>>0),-(e+4294967296*n)}return this.lo+4294967296*this.hi},o.prototype.toLong=function(t){return r.Long?new r.Long(0|this.lo,0|this.hi,Boolean(t)):{low:0|this.lo,high:0|this.hi,unsigned:Boolean(t)}};var u=String.prototype.charCodeAt;o.fromHash=function(t){return t===a?i:new o((u.call(t,0)|u.call(t,1)<<8|u.call(t,2)<<16|u.call(t,3)<<24)>>>0,(u.call(t,4)|u.call(t,5)<<8|u.call(t,6)<<16|u.call(t,7)<<24)>>>0)},o.prototype.toHash=function(){return String.fromCharCode(255&this.lo,this.lo>>>8&255,this.lo>>>16&255,this.lo>>>24,255&this.hi,this.hi>>>8&255,this.hi>>>16&255,this.hi>>>24)},o.prototype.zzEncode=function(){var t=this.hi>>31;return this.hi=((this.hi<<1|this.lo>>>31)^t)>>>0,this.lo=(this.lo<<1^t)>>>0,this},o.prototype.zzDecode=function(){var t=-(1&this.lo);return this.lo=((this.lo>>>1|this.hi<<31)^t)>>>0,this.hi=(this.hi>>>1^t)>>>0,this},o.prototype.length=function(){var t=this.lo,e=(this.lo>>>28|this.hi<<4)>>>0,n=this.hi>>>24;return 0===n?0===e?t<16384?t<128?1:2:t<2097152?3:4:e<16384?e<128?5:6:e<2097152?7:8:n<128?9:10}},function(t,e,n){"use strict";t.exports=i;var r=n(28);(i.prototype=Object.create(r.prototype)).constructor=i;var o=n(6);function i(){r.call(this)}function a(t,e,n){t.length<40?o.utf8.write(t,e,n):e.utf8Write?e.utf8Write(t,n):e.write(t,n)}i._configure=function(){i.alloc=o._Buffer_allocUnsafe,i.writeBytesBuffer=o.Buffer&&o.Buffer.prototype instanceof Uint8Array&&"set"===o.Buffer.prototype.set.name?function(t,e,n){e.set(t,n)}:function(t,e,n){if(t.copy)t.copy(e,n,0,t.length);else for(var r=0;r>>0;return this.uint32(e),e&&this._push(i.writeBytesBuffer,e,t),this},i.prototype.string=function(t){var e=o.Buffer.byteLength(t);return this.uint32(e),e&&this._push(a,e,t),this},i._configure()},function(t,e,n){"use strict";t.exports=i;var r=n(29);(i.prototype=Object.create(r.prototype)).constructor=i;var o=n(6);function i(t){r.call(this,t)}i._configure=function(){o.Buffer&&(i.prototype._slice=o.Buffer.prototype.slice)},i.prototype.string=function(){var t=this.uint32();return this.buf.utf8Slice?this.buf.utf8Slice(this.pos,this.pos=Math.min(this.pos+t,this.len)):this.buf.toString("utf-8",this.pos,this.pos=Math.min(this.pos+t,this.len))},i._configure()},function(t,e,n){"use strict";e.Service=n(75)},function(t,e,n){"use strict";t.exports=o;var r=n(6);function o(t,e,n){if("function"!=typeof t)throw TypeError("rpcImpl must be a function");r.EventEmitter.call(this),this.rpcImpl=t,this.requestDelimited=Boolean(e),this.responseDelimited=Boolean(n)}(o.prototype=Object.create(r.EventEmitter.prototype)).constructor=o,o.prototype.rpcCall=function t(e,n,o,i,a){if(!i)throw TypeError("request must be specified");var u=this;if(!a)return r.asPromise(t,u,e,n,o,i);if(u.rpcImpl)try{return u.rpcImpl(e,n[u.requestDelimited?"encodeDelimited":"encode"](i).finish(),(function(t,n){if(t)return u.emit("error",t,e),a(t);if(null!==n){if(!(n instanceof o))try{n=o[u.responseDelimited?"decodeDelimited":"decode"](n)}catch(t){return u.emit("error",t,e),a(t)}return u.emit("data",n,e),a(null,n)}u.end(!0)}))}catch(t){return u.emit("error",t,e),void 
setTimeout((function(){a(t)}),0)}else setTimeout((function(){a(Error("already ended"))}),0)},o.prototype.end=function(t){return this.rpcImpl&&(t||this.rpcImpl(null,null,null),this.rpcImpl=null,this.emit("end").off()),this}},function(t,e,n){"use strict";t.exports={}},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.batchNormalization=e.CpuBatchNormalization=void 0;var i=n(14),a=n(1),u=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return[s(e[0],e[1],e[2],e[3],e[4],this.epsilon,this.momentum,this.spatial)]},e}(i.BatchNormalization);function s(t,e,n,r,o,i,u,s){for(var l=t.dims,c=l[0],f=l[1],p=1,h=2;h=r.length||e<-1*r.length)throw new Error("axis specified for concat doesn't match input dimensionality");e<0&&(e=r.length+e);for(var o=r[e],i=new Array(r.length),a=1;a=e;a--)h*=i[a];for(var d=0,y=0;y=e;a--)m*=g.dims[a];for(var v=g.numberData,b=c.ShapeUtil.size(g.dims),_=d,w=(a=0,0);a=0&&P=0&&A0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a};Object.defineProperty(e,"__esModule",{value:!0}),e.gemm=e.CpuGemm=void 0;var l=n(19),c=n(1),f=u(n(0)),p=n(17),h=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return[d(e[0],e[1],this.alpha,this.beta,this.transA,this.transB,3===e.length?e[2]:void 0)]},e}(l.Gemm);function d(t,e,n,r,o,i,a){var u=s(f.GemmUtil.getShapeOfGemmResult(t.dims,o,e.dims,i,null==a?void 0:a.dims),3),l=u[0],h=u[1],d=u[2],y=new c.Tensor([l,h],t.type);if(a&&f.BroadcastUtil.calc(y,a,(function(t,e){return e}),!0)!==y)throw new Error("tensor C is not broadcastable to [M,N]");return p.matMul2d(t.floatData,e.floatData,y.floatData,o,i,n,r,l,h,d),y}e.CpuGemm=h,e.gemm=d},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)}),i=this&&this.__read||function(t,e){var n="function"==typeof Symbol&&t[Symbol.iterator];if(!n)return t;var r,o,i=n.call(t),a=[];try{for(;(void 0===e||e-- >0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a};Object.defineProperty(e,"__esModule",{value:!0}),e.imageScaler=e.CpuImageScaler=void 0;var a=n(34),u=n(1),s=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return[l(e[0],this.bias,this.scale)]},e}(a.ImageScaler);function l(t,e,n){for(var r=i(t.dims,4),o=r[0],a=r[1],s=r[2],l=r[3],c=new u.Tensor([o,a,s,l],t.type),f=t.floatData,p=c.floatData,h=0;h0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a},a=this&&this.__spread||function(){for(var t=[],e=0;e=e.dims[E]||T[E]<0){P++,A=!0;break}S=A?S:c(S,e.get(T))}S=f(S,r?h:h-P),v.set(w,S)}return 
v}e.CpuGlobalMaxPool=f,e.averagePool=p,e.globalAveragePool=h,e.maxPool=d,e.globalMaxPool=y,e.pool=g},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.reduceProd=e.reduceMean=e.reduceMin=e.reduceMax=e.reduceLogSum=e.reduceSumSquare=e.reduceSum=e.CpuReduceProd=e.CpuReduceMean=e.CpuReduceMin=e.CpuReduceMax=e.CpuReduceLogSum=e.CpuReduceSumSquare=e.CpuReduceSum=void 0;var i=n(36),a=n(0),u=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return[d(e[0],a.ShapeUtil.normalizeAxes(this.axes,e[0].dims.length),this.keepDims)]},e}(i.ReduceBase);e.CpuReduceSum=u;var s=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return[y(e[0],a.ShapeUtil.normalizeAxes(this.axes,e[0].dims.length),this.keepDims)]},e}(i.ReduceBase);e.CpuReduceSumSquare=s;var l=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return[g(e[0],a.ShapeUtil.normalizeAxes(this.axes,e[0].dims.length),this.keepDims)]},e}(i.ReduceBase);e.CpuReduceLogSum=l;var c=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return[m(e[0],a.ShapeUtil.normalizeAxes(this.axes,e[0].dims.length),this.keepDims)]},e}(i.ReduceBase);e.CpuReduceMax=c;var f=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return[v(e[0],a.ShapeUtil.normalizeAxes(this.axes,e[0].dims.length),this.keepDims)]},e}(i.ReduceBase);e.CpuReduceMin=f;var p=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return[b(e[0],a.ShapeUtil.normalizeAxes(this.axes,e[0].dims.length),this.keepDims)]},e}(i.ReduceBase);e.CpuReduceMean=p;var h=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return[_(e[0],a.ShapeUtil.normalizeAxes(this.axes,e[0].dims.length),this.keepDims)]},e}(i.ReduceBase);function d(t,e,n){return a.ReduceUtil.calcReduce(t,e,n,(function(t){return t}),(function(t,e){return t+e}))}function y(t,e,n){return a.ReduceUtil.calcReduce(t,e,n,(function(t){return t*t}),(function(t,e){return t+e}))}function g(t,e,n){for(var r=a.ReduceUtil.calcReduce(t,e,n,(function(t){return t}),(function(t,e){return t+e})),o=r.floatData.length,i=0;i=5&&e[4].integerData.some((function(t){return 1!==t})))throw new Error("currently non-1 steps is not supported for Slice");var n=Array.from(e[1].integerData),r=Array.from(e[2].integerData),o=e.length>=4?Array.from(e[3].integerData):[];return[c(e[0],n,r,o)]},e}(i.SliceV10);function c(t,e,n,r){0===r.length&&(r=t.dims.map((function(t,e){return e}))),r=u.ShapeUtil.normalizeAxes(r,t.dims.length),e=e.map((function(e,n){return e>t.dims[r[n]]-1?t.dims[r[n]]:u.ShapeUtil.normalizeAxis(e,t.dims[r[n]])})),n=n.map((function(e,n){return e>t.dims[r[n]]-1?t.dims[r[n]]:u.ShapeUtil.normalizeAxis(e,t.dims[r[n]])}));var o=[],i=[];r.forEach((function(t,r){o[t]=n[r]-e[r],i[t]=e[r]}));for(var s=0;sh&&(h=a[p+d]);var y=0;for(d=0;d=0;--i){var 
w=o[i];_&&w===i?v*=n[w]:(_=!1,b*=n[w],++m)}return 1===b?(f=v,p=g,h=y,u.arrayCopyHelper(p,h,0,0,f)):1===v?function(t,e,n,r,o,i){for(var a=new Array(t).fill(0),s=0,l=0;l=0;l--)u[l]=u[l+1]*n[l+1];var c=new Array(i);c.fill(0),c[i-1]=-1;for(var f=0,p=0;f=0;h--){if(++c[h]0&&o[o.length-1])||6!==i[0]&&2!==i[0])){a=0;continue}if(3===i[0]&&(!o||i[1]>o[0]&&i[1]this.numBytesAllocated&&this.expandMemory(l),t.ccallSerialize(r.HEAPU8.subarray(this.ptr8,this.ptr8+l),s,i);var c=e.now();this.func(n,this.ptr8);var f=e.now();t.ccallDeserialize(r.HEAPU8.subarray(this.ptr8,this.ptr8+l),s,i);var p=e.now();return{startTime:u,endTime:p,startTimeFunc:c,endTimeFunc:f}},t.prototype.ccallRaw=function(t,n){if(!o)throw new Error("wasm not initialized. please ensure 'init()' is called.");var i=e.now(),a=n.byteLength;a>this.numBytesAllocated&&this.expandMemory(a),r.HEAPU8.subarray(this.ptr8,this.ptr8+a).set(n);var u=e.now();this.func(t,this.ptr8);var s=e.now();return n.set(r.HEAPU8.subarray(this.ptr8,this.ptr8+a)),{startTime:i,endTime:e.now(),startTimeFunc:u,endTimeFunc:s}},t.prototype.func=function(t,e){(0,r[t])(e)},t.calculateOffsets=function(t,e){for(var n=4+4*e.length,r=0;r>2;if(o[a+1]=f,"out"!==c&&0!==f)switch(l){case"bool":t[f]=!0===s?1:0;break;case"int32":r[p]=s;break;case"float32":i[p]=s;break;case"boolptr":var h=s;t.subarray(f,f+h.length).set(s);break;case"int32ptr":var d=s;r.subarray(p,p+d.length).set(d);break;case"float32ptr":var y=s;i.subarray(p,p+y.length).set(y);break;default:throw new Error("not supported parameter type: "+l)}}},t.ccallDeserialize=function(t,e,n){for(var r=new Float32Array(t.buffer,t.byteOffset),o=new Uint8Array(t.buffer,t.byteOffset),i=0;i>2;if("out"===l||"inout"===l)switch(s){case"float32ptr":var p=u;p.set(r.subarray(f,f+p.length));break;case"boolptr":var h=u;h.set(o.subarray(c,c+h.length));break;default:throw new Error("not supported parameter type: "+s)}}},t.prototype.expandMemory=function(t){if(0!==this.ptr8&&r._free(this.ptr8),this.numBytesAllocated=2*t,this.ptr8=r._malloc(this.numBytesAllocated),0===this.ptr8)throw new Error("Unable to allocate requested amount of memory. Failing.")},t.prototype.dispose=function(){if(!o)throw new Error("wasm not initialized. 
please ensure 'init()' is called.");0!==this.ptr8&&r._free(this.ptr8)},t}();e.WasmBinding=a,e.now="undefined"!=typeof performance&&performance.now?function(){return performance.now()}:Date.now},function(t,e,n){(function(e,r,o){var i,a=(i=(i="undefined"!=typeof document&&document.currentScript?document.currentScript.src:void 0)||e,function(t){t=void 0!==(t=t||{})?t:{};var e,a={};for(e in t)t.hasOwnProperty(e)&&(a[e]=t[e]);var u=[],s=!1,l=!1,c=!1,f=!1;s="object"==typeof window,l="function"==typeof importScripts,c="object"==typeof r&&"object"==typeof r.versions&&"string"==typeof r.versions.node,f=!s&&!c&&!l;var p,h,d,y,g="";function m(e){return t.locateFile?t.locateFile(e,g):g+e}c?(g=l?n(47).dirname(g)+"/":o+"/",p=function(t,e){return d||(d=n(48)),y||(y=n(47)),t=y.normalize(t),d.readFileSync(t,e?null:"utf8")},h=function(t){var e=p(t,!0);return e.buffer||(e=new Uint8Array(e)),S(e.buffer),e},r.argv.length>1&&r.argv[1].replace(/\\/g,"/"),u=r.argv.slice(2),r.on("uncaughtException",(function(t){if(!(t instanceof Lt))throw t})),r.on("unhandledRejection",et),t.inspect=function(){return"[Emscripten Module object]"}):f?("undefined"!=typeof read&&(p=function(t){return read(t)}),h=function(t){var e;return"function"==typeof readbuffer?new Uint8Array(readbuffer(t)):(S("object"==typeof(e=read(t,"binary"))),e)},"undefined"!=typeof scriptArgs?u=scriptArgs:void 0!==arguments&&(u=arguments),"undefined"!=typeof print&&("undefined"==typeof console&&(console={}),console.log=print,console.warn=console.error="undefined"!=typeof printErr?printErr:print)):(s||l)&&(l?g=self.location.href:document.currentScript&&(g=document.currentScript.src),i&&(g=i),g=0!==g.indexOf("blob:")?g.substr(0,g.lastIndexOf("/")+1):"",p=function(t){var e=new XMLHttpRequest;return e.open("GET",t,!1),e.send(null),e.responseText},l&&(h=function(t){var e=new XMLHttpRequest;return e.open("GET",t,!1),e.responseType="arraybuffer",e.send(null),new Uint8Array(e.response)}));var v=t.print||console.log.bind(console),b=t.printErr||console.warn.bind(console);for(e in a)a.hasOwnProperty(e)&&(t[e]=a[e]);a=null,t.arguments&&(u=t.arguments),t.thisProgram&&t.thisProgram,t.quit&&t.quit;var _,w,x=function(t){};t.wasmBinary&&(_=t.wasmBinary),t.noExitRuntime&&t.noExitRuntime,"object"!=typeof WebAssembly&&b("no native wasm support detected");var T=new WebAssembly.Table({initial:31,maximum:31,element:"anyfunc"}),O=!1;function S(t,e){t||et("Assertion failed: "+e)}var P="undefined"!=typeof TextDecoder?new TextDecoder("utf8"):void 0;function A(t,e,n){for(var r=e+n,o=e;t[o]&&!(o>=r);)++o;if(o-e>16&&t.subarray&&P)return P.decode(t.subarray(e,o));for(var i="";e>10,56320|1023&l)}}else i+=String.fromCharCode((31&a)<<6|u)}else i+=String.fromCharCode(a)}return i}function D(t,e){return t?A(I,t,e):""}"undefined"!=typeof TextDecoder&&new TextDecoder("utf-16le");var E,I,L,M=65536;function j(t,e){return t%e>0&&(t+=e-t%e),t}function k(e){E=e,t.HEAP8=new Int8Array(e),t.HEAP16=new Int16Array(e),t.HEAP32=L=new Int32Array(e),t.HEAPU8=I=new Uint8Array(e),t.HEAPU16=new Uint16Array(e),t.HEAPU32=new Uint32Array(e),t.HEAPF32=new Float32Array(e),t.HEAPF64=new Float64Array(e)}var C=5248800,R=5760,N=t.INITIAL_MEMORY||16777216;function B(e){for(;e.length>0;){var n=e.shift();if("function"!=typeof n){var r=n.func;"number"==typeof r?void 0===n.arg?t.dynCall_v(r):t.dynCall_vi(r,n.arg):r(void 0===n.arg?null:n.arg)}else n()}}(w=t.wasmMemory?t.wasmMemory:new WebAssembly.Memory({initial:N/M}))&&(E=w.buffer),N=E.byteLength,k(E),L[R>>2]=C;var F=[],U=[],G=[],z=[],W=[];function 
V(){if(t.preRun)for("function"==typeof t.preRun&&(t.preRun=[t.preRun]);t.preRun.length;)X(t.preRun.shift());B(F)}function q(){B(U)}function H(){B(G)}function Y(){if(t.postRun)for("function"==typeof t.postRun&&(t.postRun=[t.postRun]);t.postRun.length;)K(t.postRun.shift());B(W)}function X(t){F.unshift(t)}function K(t){W.unshift(t)}Math.abs,Math.ceil,Math.floor,Math.min;var J=0,$=null,Z=null;function Q(e){J++,t.monitorRunDependencies&&t.monitorRunDependencies(J)}function tt(e){if(J--,t.monitorRunDependencies&&t.monitorRunDependencies(J),0==J&&(null!==$&&(clearInterval($),$=null),Z)){var n=Z;Z=null,n()}}function et(e){throw t.onAbort&&t.onAbort(e),v(e+=""),b(e),O=!0,e="abort("+e+"). Build with -s ASSERTIONS=1 for more info.",new WebAssembly.RuntimeError(e)}t.preloadedImages={},t.preloadedAudios={};var nt="data:application/octet-stream;base64,";function rt(t){return String.prototype.startsWith?t.startsWith(nt):0===t.indexOf(nt)}var ot="onnx-wasm.wasm";function it(){try{if(_)return new Uint8Array(_);if(h)return h(ot);throw"both async and sync fetching of the wasm failed"}catch(t){et(t)}}function at(){return _||!s&&!l||"function"!=typeof fetch?new Promise((function(t,e){t(it())})):fetch(ot,{credentials:"same-origin"}).then((function(t){if(!t.ok)throw"failed to load wasm binary file at '"+ot+"'";return t.arrayBuffer()})).catch((function(){return it()}))}function ut(){var e={env:St,wasi_snapshot_preview1:St};function n(e,n){var r=e.exports;t.asm=r,tt()}function r(t){n(t.instance)}function o(t){return at().then((function(t){return WebAssembly.instantiate(t,e)})).then(t,(function(t){b("failed to asynchronously prepare wasm: "+t),et(t)}))}if(Q(),t.instantiateWasm)try{return t.instantiateWasm(e,n)}catch(t){return b("Module.instantiateWasm callback failed with error: "+t),!1}return function(){if(_||"function"!=typeof WebAssembly.instantiateStreaming||rt(ot)||"function"!=typeof fetch)return o(r);fetch(ot,{credentials:"same-origin"}).then((function(t){return WebAssembly.instantiateStreaming(t,e).then(r,(function(t){b("wasm streaming compile failed: "+t),b("falling back to ArrayBuffer instantiation"),o(r)}))}))}(),{}}function st(t,e,n,r){et("Assertion failed: "+D(t)+", at: "+[e?D(e):"unknown filename",n,r?D(r):"unknown function"])}function lt(t){return Et(t)}rt(ot)||(ot=m(ot)),U.push({func:function(){Dt()}});var ct={};function ft(){return ft.uncaught_exceptions>0}function pt(t,e,n){throw ct[t]={ptr:t,adjusted:[t],type:e,destructor:n,refcount:0,caught:!1,rethrown:!1},"uncaught_exception"in ft?ft.uncaught_exceptions++:ft.uncaught_exceptions=1,t}function ht(){et()}function dt(){return I.length}function yt(){return 5760}function gt(t,e,n){I.copyWithin(t,e,e+n)}function mt(t){try{return w.grow(t-E.byteLength+65535>>16),k(w.buffer),1}catch(t){}}function vt(t){var e=dt();if(t>2147418112)return!1;for(var n=1;n<=4;n*=2){var r=e*(1+.2/n);if(r=Math.min(r,t+100663296),mt(Math.min(2147418112,j(Math.max(16777216,t,r),65536))))return!0}return!1}var bt={mappings:{},buffers:[null,[],[]],printChar:function(t,e){var n=bt.buffers[t];0===e||10===e?((1===t?v:b)(A(n,0)),n.length=0):n.push(e)},varargs:void 0,get:function(){return bt.varargs+=4,L[bt.varargs-4>>2]},getStr:function(t){return D(t)},get64:function(t,e){return t}};function _t(t){return 0}function wt(t,e,n,r,o){}function xt(){void 0!==It&&It(0);var t=bt.buffers;t[1].length&&bt.printChar(1,10),t[2].length&&bt.printChar(2,10)}function Tt(t,e,n,r){for(var o=0,i=0;i>2],u=L[e+(8*i+4)>>2],s=0;s>2]=o,0}function Ot(t){x(0|t)}z.push(xt);var 
St={__assert_fail:st,__cxa_allocate_exception:lt,__cxa_throw:pt,abort:ht,emscripten_get_sbrk_ptr:yt,emscripten_memcpy_big:gt,emscripten_resize_heap:vt,fd_close:_t,fd_seek:wt,fd_write:Tt,memory:w,setTempRet0:Ot,table:T},Pt=ut();t.asm=Pt;var At,Dt=t.___wasm_call_ctors=function(){return(Dt=t.___wasm_call_ctors=t.asm.__wasm_call_ctors).apply(null,arguments)},Et=(t._batch_normalization_f32=function(){return(t._batch_normalization_f32=t.asm.batch_normalization_f32).apply(null,arguments)},t._add_f32=function(){return(t._add_f32=t.asm.add_f32).apply(null,arguments)},t._sub_f32=function(){return(t._sub_f32=t.asm.sub_f32).apply(null,arguments)},t._mul_f32=function(){return(t._mul_f32=t.asm.mul_f32).apply(null,arguments)},t._div_f32=function(){return(t._div_f32=t.asm.div_f32).apply(null,arguments)},t._prelu_f32=function(){return(t._prelu_f32=t.asm.prelu_f32).apply(null,arguments)},t._xor_u8=function(){return(t._xor_u8=t.asm.xor_u8).apply(null,arguments)},t._or_u8=function(){return(t._or_u8=t.asm.or_u8).apply(null,arguments)},t._and_u8=function(){return(t._and_u8=t.asm.and_u8).apply(null,arguments)},t._clip_f32=function(){return(t._clip_f32=t.asm.clip_f32).apply(null,arguments)},t._conv_f32=function(){return(t._conv_f32=t.asm.conv_f32).apply(null,arguments)},t._gemm_f32=function(){return(t._gemm_f32=t.asm.gemm_f32).apply(null,arguments)},t._free=function(){return(t._free=t.asm.free).apply(null,arguments)},t._malloc=function(){return(Et=t._malloc=t.asm.malloc).apply(null,arguments)}),It=(t._instance_normalization_f32=function(){return(t._instance_normalization_f32=t.asm.instance_normalization_f32).apply(null,arguments)},t._matmul_f32=function(){return(t._matmul_f32=t.asm.matmul_f32).apply(null,arguments)},t._average_pool_f32=function(){return(t._average_pool_f32=t.asm.average_pool_f32).apply(null,arguments)},t._max_pool_f32=function(){return(t._max_pool_f32=t.asm.max_pool_f32).apply(null,arguments)},t._softmax_f32=function(){return(t._softmax_f32=t.asm.softmax_f32).apply(null,arguments)},t._sum_f32=function(){return(t._sum_f32=t.asm.sum_f32).apply(null,arguments)},t.___errno_location=function(){return(t.___errno_location=t.asm.__errno_location).apply(null,arguments)},t._fflush=function(){return(It=t._fflush=t.asm.fflush).apply(null,arguments)});function Lt(t){this.name="ExitStatus",this.message="Program terminated with exit("+t+")",this.status=t}function Mt(e){function 
n(){At||(At=!0,t.calledRun=!0,O||(q(),H(),t.onRuntimeInitialized&&t.onRuntimeInitialized(),Y()))}e=e||u,J>0||(V(),J>0||(t.setStatus?(t.setStatus("Running..."),setTimeout((function(){setTimeout((function(){t.setStatus("")}),1),n()}),1)):n()))}if(t._setThrew=function(){return(t._setThrew=t.asm.setThrew).apply(null,arguments)},t.stackSave=function(){return(t.stackSave=t.asm.stackSave).apply(null,arguments)},t.stackAlloc=function(){return(t.stackAlloc=t.asm.stackAlloc).apply(null,arguments)},t.stackRestore=function(){return(t.stackRestore=t.asm.stackRestore).apply(null,arguments)},t.__growWasmMemory=function(){return(t.__growWasmMemory=t.asm.__growWasmMemory).apply(null,arguments)},t.dynCall_ii=function(){return(t.dynCall_ii=t.asm.dynCall_ii).apply(null,arguments)},t.dynCall_iiii=function(){return(t.dynCall_iiii=t.asm.dynCall_iiii).apply(null,arguments)},t.dynCall_jiji=function(){return(t.dynCall_jiji=t.asm.dynCall_jiji).apply(null,arguments)},t.dynCall_iidiiii=function(){return(t.dynCall_iidiiii=t.asm.dynCall_iidiiii).apply(null,arguments)},t.dynCall_vii=function(){return(t.dynCall_vii=t.asm.dynCall_vii).apply(null,arguments)},t.dynCall_vi=function(){return(t.dynCall_vi=t.asm.dynCall_vi).apply(null,arguments)},t.dynCall_viiiiii=function(){return(t.dynCall_viiiiii=t.asm.dynCall_viiiiii).apply(null,arguments)},t.dynCall_viiiii=function(){return(t.dynCall_viiiii=t.asm.dynCall_viiiii).apply(null,arguments)},t.dynCall_viiii=function(){return(t.dynCall_viiii=t.asm.dynCall_viiii).apply(null,arguments)},t.asm=Pt,t.then=function(e){if(At)e(t);else{var n=t.onRuntimeInitialized;t.onRuntimeInitialized=function(){n&&n(),e(t)}}return t},Z=function t(){At||Mt(),At||(Z=t)},t.run=Mt,t.preInit)for("function"==typeof t.preInit&&(t.preInit=[t.preInit]);t.preInit.length>0;)t.preInit.pop()();return Mt(),t});t.exports=a}).call(this,"/index.js",n(24),"/")},function(t,e,n){"use strict";n.r(e),e.default=function(){return new Worker(n.p+"onnx-worker.js")}},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.WasmSessionHandler=void 0;var r=n(11),o=n(27),i=n(107),a=n(108),u=function(){function t(t,e,n){this.backend=t,this.context=e,this.opResolveRules=n?a.WASM_OP_RESOLVE_RULES.concat(o.CPU_OP_RESOLVE_RULES):a.WASM_OP_RESOLVE_RULES}return t.prototype.createInferenceHandler=function(){return new i.WasmInferenceHandler(this,this.context.profiler)},t.prototype.dispose=function(){},t.prototype.resolve=function(t,e){var n=r.resolveOperator(t,e,this.opResolveRules);return n.initialize(t.attributes),n},t}();e.WasmSessionHandler=u},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.WasmInferenceHandler=void 0;var r=function(){function t(t,e){this.session=t,this.profiler=e}return t.prototype.dispose=function(){},t}();e.WasmInferenceHandler=r},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.WASM_OP_RESOLVE_RULES=void 0;var r=n(109),o=n(110),i=n(111),a=n(112),u=n(113),s=n(114),l=n(115),c=n(116),f=n(117),p=n(118);e.WASM_OP_RESOLVE_RULES=[["Add","","7+",function(){return new o.WasmBinaryOp(["float32"],"Add")}],["And","","7+",function(){return new o.WasmBinaryOp(["bool"],"And")}],["AveragePool","","7-10",function(){return new c.WasmAveragePool}],["BatchNormalization","","7+",function(){return new r.WasmBatchNormalization}],["Clip","","6-10",function(){return new i.WasmClip}],["Conv","","1+",function(){return new a.WasmConv}],["Div","","7+",function(){return new o.WasmBinaryOp(["float32"],"Div")}],["Gemm","","7-10",function(){return new 
u.WasmGemm(!1)}],["Gemm","","11+",function(){return new u.WasmGemm(!0)}],["GlobalAveragePool","","1+",function(){return new c.WasmGlobalAveragePool}],["GlobalMaxPool","","1+",function(){return new c.WasmGlobalMaxPool}],["InstanceNormalization","","6+",function(){return new s.WasmInstanceNormalization}],["MatMul","","1+",function(){return new l.WasmMatMul}],["MaxPool","","1-9",function(){return new c.WasmMaxPool}],["Mul","","7+",function(){return new o.WasmBinaryOp(["float32"],"Mul")}],["Or","","7+",function(){return new o.WasmBinaryOp(["bool"],"Or")}],["PRelu","","7+",function(){return new o.WasmBinaryOp(["float32"],"PRelu")}],["Softmax","","1+",function(){return new f.WasmSoftmax}],["Sub","","7+",function(){return new o.WasmBinaryOp(["float32"],"Sub")}],["Sum","","6+",function(){return new p.WasmSum}],["Xor","","7+",function(){return new o.WasmBinaryOp(["bool"],"Xor")}]]},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.WasmBatchNormalization=void 0;var i=n(14),a=n(1),u=n(4),s=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){for(var n=e[0],r=e[1],o=e[2],i=e[3],s=e[4],l=1,c=2;c0&&o[o.length-1])||6!==i[0]&&2!==i[0])){a=0;continue}if(3===i[0]&&(!o||i[1]>o[0]&&i[1]0?_[x]:null,"float32ptr"],[this.dilations,"int32ptr"],[this.group,"int32"],[this.pads,"int32ptr"],[this.strides,"int32ptr"])):(v[x]=n.floatData.subarray(x*h),b[x]=u.floatData.subarray(x*g),r&&(_[x]=r.floatData.subarray(x*p[0])),c.WasmBinding.getInstance().ccall("_conv_f32",[t.floatData,"float32ptr"],[t.dims,"int32ptr"],[v[x],"float32ptr"],[d,"int32ptr"],[b[x],"float32ptr","out"],[m,"int32ptr"],[_.length>0?_[x]:null,"float32ptr"],[this.dilations,"int32ptr"],[this.group,"int32"],[this.pads,"int32ptr"],[this.strides,"int32ptr"]));return[4,Promise.all(w)];case 2:return a.sent(),[2,[u]]}}))}))},e.prototype.checkInputTypes=function(t){return"float32"===t[0].type&&"float32"===t[1].type&&(3!==t.length||"float32"===t[2].type)},e}(u.Conv);e.WasmConv=f},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)}),i=this&&this.__read||function(t,e){var n="function"==typeof Symbol&&t[Symbol.iterator];if(!n)return t;var r,o,i=n.call(t),a=[];try{for(;(void 0===e||e-- >0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a};Object.defineProperty(e,"__esModule",{value:!0}),e.WasmGemm=void 0;var a=n(19),u=n(1),s=n(0),l=n(4),c=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){var n=e[0],r=e[1],o=e[2],a=i(s.GemmUtil.getShapeOfGemmResult(n.dims,this.transA,r.dims,this.transB,null==o?void 0:o.dims),2),c=a[0],f=a[1],p=new u.Tensor([c,f],n.type);if(o&&!s.BroadcastUtil.calc(p,o,(function(t,e){return e}),!0))throw new Error("c is not broadcastable to the shape of the result of the 
Gemm operator");return l.WasmBinding.getInstance().ccall("_gemm_f32",[this.transA,"bool"],[this.transB,"bool"],[this.transA?n.dims[1]:n.dims[0],"int32"],[this.transB?r.dims[0]:r.dims[1],"int32"],[this.transA?n.dims[0]:n.dims[1],"int32"],[this.alpha,"float32"],[n.floatData,"float32ptr"],[r.floatData,"float32ptr"],[this.beta,"float32"],[p.floatData,"float32ptr","inout"]),[p]},e.prototype.checkInputTypes=function(t){return"float32"===t[0].type&&"float32"===t[1].type&&"float32"===t[2].type&&(t[0].type===t[1].type&&t[0].type===t[2].type)},e}(a.Gemm);e.WasmGemm=c},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.WasmInstanceNormalization=void 0;var i=n(20),a=n(1),u=n(4),s=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){for(var n=e[0],r=e[1],o=e[2],i=1,s=2;s0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a};Object.defineProperty(e,"__esModule",{value:!0}),e.WasmMatMul=void 0;var a=n(18),u=n(1),s=n(0),l=n(4),c=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){var n=i(s.MatMulUtil.preprocessInputShapes(e[0].dims,e[1].dims),2),r=n[0],o=n[1],a=s.BroadcastUtil.calcShape(r,o,!0);if(!a)throw new Error("input dimensions do not match the requirement");var c=s.ShapeUtil.size(a),f=new Float32Array(c);l.WasmBinding.getInstance().ccall("_matmul_f32",[e[0].floatData,"float32ptr"],[e[0].dims,"int32ptr"],[e[0].dims.length,"int32"],[e[1].floatData,"float32ptr"],[e[1].dims,"int32ptr"],[e[1].dims.length,"int32"],[f,"float32ptr","out"],[f.length,"int32"],[a,"int32ptr"],[a.length,"int32"]),s.MatMulUtil.postprocessOutputShape(a,e[0].dims.length,e[1].dims.length);var p=new u.Tensor(a,e[0].type);return p.floatData.set(f),[p]},e.prototype.checkInputTypes=function(t){return"float32"===t[0].type&&"float32"===t[1].type&&t[0].type===t[1].type},e}(a.MatMul);e.WasmMatMul=c},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)}),i=this&&this.__awaiter||function(t,e,n,r){return new(n||(n=Promise))((function(o,i){function a(t){try{s(r.next(t))}catch(t){i(t)}}function u(t){try{s(r.throw(t))}catch(t){i(t)}}function s(t){var e;t.done?o(t.value):(e=t.value,e instanceof n?e:new n((function(t){t(e)}))).then(a,u)}s((r=r.apply(t,e||[])).next())}))},a=this&&this.__generator||function(t,e){var n,r,o,i,a={label:0,sent:function(){if(1&o[0])throw o[1];return o[1]},trys:[],ops:[]};return i={next:u(0),throw:u(1),return:u(2)},"function"==typeof Symbol&&(i[Symbol.iterator]=function(){return this}),i;function u(i){return function(u){return function(i){if(n)throw new TypeError("Generator is already executing.");for(;a;)try{if(n=1,r&&(o=2&i[0]?r.return:i[0]?r.throw||((o=r.return)&&o.call(r),0):r.next)&&!(o=o.call(r,i[1])).done)return 
o;switch(r=0,o&&(i=[2&i[0],o.value]),i[0]){case 0:case 1:o=i;break;case 4:return a.label++,{value:i[1],done:!1};case 5:a.label++,r=i[1],i=[0];continue;case 7:i=a.ops.pop(),a.trys.pop();continue;default:if(!(o=a.trys,(o=o.length>0&&o[o.length-1])||6!==i[0]&&2!==i[0])){a=0;continue}if(3===i[0]&&(!o||i[1]>o[0]&&i[1]0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a},a=this&&this.__spread||function(){for(var t=[],e=0;e0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a};Object.defineProperty(e,"__esModule",{value:!0}),e.WebGLInferenceHandler=void 0;var i=n(3),a=n(1),u=n(0),s=n(122),l=n(50),c=function(){function t(t){this.session=t,this.textureDataCache=new Map}return t.prototype.run=function(t,e){var n=this.session.programManager.getArtifact(t);if(!n){var r=t.createProgramInfo(this,e);n=this.session.programManager.build(r),this.session.programManager.setArtifact(t,n)}var o=t.createRunData(this,n.programInfo,e);return this.session.programManager.run(n,o),[o.outputTextureData.tensor]},t.prototype.getOrCreateTextureData=function(t,e){var n=this.getTextureData(t.dataId);return n?i.Logger.verbose("InferenceHandler","Retrieving TextureData from cache: ["+t.dims+"]"):(i.Logger.verbose("InferenceHandler","Creating new TextureData for dims: ["+t.dims+"]"),e||(e=this.createTextureLayoutFromShape(t.dims.slice())),n=this.createTextureData(e,t.type,t.numberData,t,1)),n},t.prototype.createTextureDataFromLayout=function(t,e){return this.createTextureData(t,e)},t.prototype.createTextureDataFromLayoutBindTensor=function(t,e,n,r){return this.createTextureData(t,e,n,r,1)},t.prototype.createTextureData=function(t,e,n,r,o){i.Logger.verbose("InferenceHandler","Creating TextureData: layout:["+JSON.stringify(t)+"]");var a=this.session.textureManager.createTextureFromLayout(e,t,n,o);return this.createTextureDataFromTexture(t,e,a,r)},t.prototype.createSharedTextureData=function(t,e,n,r){return this.createTextureDataFromTexture(t,e,n,void 0,r)},t.prototype.createTextureDataFromTexture=function(t,e,n,o,i){var u=this,s=r(r({},t),{tensor:o||new a.Tensor(t.unpackedShape,e,(function(t){return u.readTexture(s)}),void 0,void 0,i),texture:n});return this.setTextureData(s.tensor.dataId,s),s},t.prototype.getTextureData=function(t){return this.session.isInitializer(t)?this.session.getTextureData(t):this.textureDataCache.get(t)},t.prototype.setTextureData=function(t,e){this.session.isInitializer(t)?this.session.setTextureData(t,e):this.textureDataCache.set(t,e)},t.prototype.getOrCreateTextureLayout=function(t,e,n){void 0===e&&(e=1);var r=this.getTextureData(t.dataId);return r||this.createTextureLayoutFromShape(1===e?t.dims.slice():l.getPackedShape(t.dims.slice()),e,n)},t.prototype.createTextureLayoutFromShape=function(t,e,n,r){void 0===e&&(e=1);var i=o(this.session.layoutStrategy.computeTextureWH(t,r),2),a=i[0],s=i[1],l=t;if(0===t.length&&(l=[1]),1===e)n=t;else if(!n)throw new Error("Unpacked shape is needed when using channels > 1");return{width:a,height:s,channels:e||1,shape:l,strides:u.ShapeUtil.computeStrides(l),unpackedShape:n}},t.prototype.dispose=function(){var t=this;this.session.textureManager.clearActiveTextures(),this.textureDataCache.forEach((function(e){return t.session.textureManager.releaseTexture(e)})),this.textureDataCache=new 
Map},t.prototype.readTexture=function(t){if(!this.session.backend.glContext.isFloat32DownloadSupported){var e=(new s.WebGLUint8Encode).runInternal(this,t);return this.session.textureManager.readUint8TextureAsFloat(e)}return this.session.textureManager.readTexture(t,t.tensor.type,t.channels)},t}();e.WebGLInferenceHandler=c},function(t,e,n){"use strict";var r=this&&this.__read||function(t,e){var n="function"==typeof Symbol&&t[Symbol.iterator];if(!n)return t;var r,o,i=n.call(t),a=[];try{for(;(void 0===e||e-- >0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a};Object.defineProperty(e,"__esModule",{value:!0}),e.WebGLUint8Encode=void 0;var o=n(0),i=n(2),a=function(){function t(){}return t.prototype.runInternal=function(t,e){var n=e.shape,a=r(t.session.layoutStrategy.computeTextureWH(e.shape),2),u={width:a[0],height:a[1],channels:4,shape:n,strides:o.ShapeUtil.computeStrides(n),unpackedShape:n},s=i.getGlsl(t.session.backend.glContext.version),l={inputLayouts:[e],outputLayout:u,samplers:["X"],shaderSource:"\n const float FLOAT_MAX = 1.70141184e38;\n const float FLOAT_MIN = 1.17549435e-38;\n\n bool isNaN(float val) {\n return (val < 1.0 || 0.0 < val || val == 0.0) ? false : true;\n }\n\n highp vec4 encodeAsUint8(highp float v) {\n if (isNaN(v)) {\n return vec4(255, 255, 255, 255);\n }\n\n highp float av = abs(v);\n\n if(av < FLOAT_MIN) {\n return vec4(0.0, 0.0, 0.0, 0.0);\n } else if(v > FLOAT_MAX) {\n return vec4(0.0, 0.0, 128.0, 127.0) / 255.0;\n } else if(v < -FLOAT_MAX) {\n return vec4(0.0, 0.0, 128.0, 255.0) / 255.0;\n }\n\n highp vec4 c = vec4(0,0,0,0);\n\n highp float e = floor(log2(av));\n highp float m = exp2(fract(log2(av))) - 1.0;\n\n c[2] = floor(128.0 * m);\n m -= c[2] / 128.0;\n c[1] = floor(32768.0 * m);\n m -= c[1] / 32768.0;\n c[0] = floor(8388608.0 * m);\n\n highp float ebias = e + 127.0;\n c[3] = floor(ebias / 2.0);\n ebias -= c[3] * 2.0;\n c[2] += floor(ebias) * 128.0;\n\n c[3] += 128.0 * step(0.0, -v);\n\n return c / 255.0;\n }\n\n void main() {\n float value = "+s.texture2D+"(X,TexCoords).r;\n "+s.output+" = encodeAsUint8(value);\n }",hasMain:!0},c=t.session.programManager.build(l),f=t.session.backend.glContext.getEncoder("byte",4),p=t.session.backend.glContext.allocateTexture(u.width,u.height,f),h={inputTextureDatas:[e],outputTextureData:t.createSharedTextureData(u,"uint8",p,{}),uniformData:{}};return t.session.programManager.run(c,h),h.outputTextureData},t}();e.WebGLUint8Encode=a},function(t,e,n){"use strict";var r=this&&this.__createBinding||(Object.create?function(t,e,n,r){void 0===r&&(r=n),Object.defineProperty(t,r,{enumerable:!0,get:function(){return e[n]}})}:function(t,e,n,r){void 0===r&&(r=n),t[r]=e[n]}),o=this&&this.__setModuleDefault||(Object.create?function(t,e){Object.defineProperty(t,"default",{enumerable:!0,value:e})}:function(t,e){t.default=e}),i=this&&this.__importStar||function(t){if(t&&t.__esModule)return t;var e={};if(null!=t)for(var n in t)"default"!==n&&Object.hasOwnProperty.call(t,n)&&r(e,t,n);return o(e,t),e};Object.defineProperty(e,"__esModule",{value:!0}),e.WEBGL_OP_RESOLVE_RULES=void 0;var a=n(7),u=n(124),s=i(n(125)),l=n(126),c=n(127),f=n(128),p=n(129),h=n(130),d=n(132),y=n(133),g=n(134),m=n(135),v=n(136),b=n(137),_=n(139),w=n(140),x=n(141),T=i(n(142)),O=n(10),S=n(143),P=n(144),A=n(145),D=n(147),E=n(148),I=n(149),L=n(150),M=i(n(151)),j=n(152),k=n(153);e.WEBGL_OP_RESOLVE_RULES=[["Abs","","6+",function(){return new 
M.WebGLUnaryOp(a.NUMBER_TYPES,M.glslAbs())}],["Acos","","7+",function(){return new M.WebGLUnaryOp(a.FLOAT_TYPES,M.glslAcos())}],["Add","","7+",function(){return new s.WebGLBinaryOp(a.NUMBER_TYPES,s.glslAdd())}],["And","","7+",function(){return new s.WebGLBinaryOp(["bool"],s.glslAnd())}],["Asin","","7+",function(){return new M.WebGLUnaryOp(a.FLOAT_TYPES,M.glslAsin())}],["Atan","","7+",function(){return new M.WebGLUnaryOp(a.FLOAT_TYPES,M.glslAtan())}],["AveragePool","","7-10",function(){return new x.WebGLAveragePool}],["BatchNormalization","","7+",function(){return new u.WebGLBatchNormalization}],["Ceil","","6+",function(){return new M.WebGLUnaryOp(a.FLOAT_TYPES,M.glslCeil())}],["Clip","","6-10",function(){return new l.WebGLClip}],["Concat","","4+",function(){return new c.WebGLConcat}],["Conv","","1+",function(){return new f.WebGLConv}],["Cos","","7+",function(){return new M.WebGLUnaryOp(a.FLOAT_TYPES,M.glslCos())}],["Div","","7+",function(){return new s.WebGLBinaryOp(a.NUMBER_TYPES,s.glslDiv())}],["Dropout","","7+",function(){return new p.WebGLDropout}],["Equal","","7+",function(){return new s.WebGLBinaryOp(a.NUMBER_TYPES,s.glslEqual(),void 0,"bool")}],["Elu","","6+",function(){return new h.WebGLElu}],["Exp","","6+",function(){return new M.WebGLUnaryOp(a.FLOAT_TYPES,M.glslExp())}],["Flatten","","1+",function(){return new d.WebGLFlatten}],["Floor","","6+",function(){return new M.WebGLUnaryOp(a.FLOAT_TYPES,M.glslFloor())}],["Gather","","1+",function(){return new y.WebGLGather}],["Gemm","","7-10",function(){return new g.WebGLGemm(!1)}],["Gemm","","11+",function(){return new g.WebGLGemm(!0)}],["GlobalAveragePool","","1+",function(){return new x.WebGLGlobalAveragePool}],["GlobalMaxPool","","1+",function(){return new x.WebGLGlobalMaxPool}],["Greater","","7+",function(){return new s.WebGLBinaryOp(a.NUMBER_TYPES,s.glslGreater(),void 0,"bool")}],["Identity","","1+",function(){return new M.WebGLUnaryOp(a.NUMBER_TYPES,M.glslIdentity())}],["ImageScaler","","1+",function(){return new m.WebGLImageScaler}],["InstanceNormalization","","6+",function(){return new v.WebGLInstanceNormalization}],["LeakyRelu","","6+",function(){return new b.WebGLLeakyRelu}],["Less","","7+",function(){return new s.WebGLBinaryOp(a.NUMBER_TYPES,s.glslLess(),void 0,"bool")}],["Log","","6+",function(){return new M.WebGLUnaryOp(a.FLOAT_TYPES,M.glslLog())}],["MatMul","","1+",function(){return new _.WebGLMatMul}],["MaxPool","","1-9",function(){return new x.WebGLMaxPool}],["Mul","","7+",function(){return new s.WebGLBinaryOp(a.NUMBER_TYPES,s.glslMul())}],["Neg","","6+",function(){return new M.WebGLUnaryOp(a.NUMBER_TYPES,M.glslNeg())}],["Not","","1+",function(){return new M.WebGLUnaryOp(["bool"],M.glslNot())}],["Or","","7+",function(){return new s.WebGLBinaryOp(["bool"],s.glslOr())}],["Pad","","2-10",function(){return new w.WebGLPad}],["Pow","","7+",function(){return new s.WebGLBinaryOp(a.FLOAT_TYPES,s.glslPow())}],["PRelu","","7+",function(){return new s.WebGLBinaryOp(a.FLOAT_TYPES,s.glslPRelu())}],["ReduceLogSum","","1+",function(){return new T.WebGLReduceLogSum}],["ReduceMax","","1+",function(){return new T.WebGLReduceMax}],["ReduceMean","","1+",function(){return new T.WebGLReduceMean}],["ReduceMin","","1+",function(){return new T.WebGLReduceMin}],["ReduceProd","","1+",function(){return new T.WebGLReduceProd}],["ReduceSum","","1+",function(){return new T.WebGLReduceSum}],["ReduceSumSquare","","1+",function(){return new T.WebGLReduceSumSquare}],["Relu","","6+",function(){return new 
M.WebGLUnaryOp(a.FLOAT_TYPES,M.glslRelu())}],["Reshape","","5+",function(){return new O.WebGLReshape}],["Sigmoid","","6+",function(){return new M.WebGLUnaryOp(a.FLOAT_TYPES,M.glslSigmoid())}],["Sin","","7+",function(){return new M.WebGLUnaryOp(a.FLOAT_TYPES,M.glslSin())}],["Slice","","10+",function(){return new S.WebGLSliceV10}],["Slice","","1-9",function(){return new S.WebGLSlice}],["Softmax","","1+",function(){return new P.WebGLSoftmax}],["Split","","2+",function(t){return new A.WebGLSplit(t.outputs.length)}],["Sqrt","","6+",function(){return new M.WebGLUnaryOp(a.FLOAT_TYPES,M.glslSqrt())}],["Squeeze","","1+",function(){return new D.WebGLSqueeze}],["Sub","","7+",function(){return new s.WebGLBinaryOp(a.NUMBER_TYPES,s.glslSub())}],["Sum","","6+",function(){return new E.WebGLSum}],["Tan","","7+",function(){return new M.WebGLUnaryOp(a.FLOAT_TYPES,M.glslTan())}],["Tanh","","6+",function(){return new M.WebGLUnaryOp(a.FLOAT_TYPES,M.glslTanh())}],["Tile","","6+",function(){return new I.WebGLTile}],["Transpose","","1+",function(){return new L.WebGLTranspose}],["Upsample","","7-8",function(){return new k.WebGLUpsample}],["Unsqueeze","","1+",function(){return new j.WebGLUnsqueeze}],["Xor","","7+",function(){return new s.WebGLBinaryOp(["bool"],s.glslXor())}]]},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.WebGLBatchNormalization=void 0;var i=n(14),a=n(2),u=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return t.run(this,e)},e.prototype.createProgramInfo=function(t,e){var n=e.map((function(e){return t.getOrCreateTextureLayout(e)})),r=e[0].dims.slice(),o=r.length,i=n[1],u=a.getGlsl(t.session.backend.glContext.version),s="\n float process(int["+o+"] indices) {\n vec2 position = offsetToCoords(indices[1], "+i.width+", "+i.height+");\n float scale = getColorAsFloat("+u.texture2D+"(Scale, position));\n float mean = getColorAsFloat("+u.texture2D+"(Mean, position));\n float variance = getColorAsFloat("+u.texture2D+"(Variance, position));\n float b = getColorAsFloat("+u.texture2D+"(B, position));\n\n return scale * ( (_A(indices) - mean) / sqrt(variance + float("+this.epsilon+")) ) + b;\n }";return{inputLayouts:n,outputLayout:t.createTextureLayoutFromShape(r),samplers:["A","Scale","B","Mean","Variance"],shaderSource:s}},e.prototype.createRunData=function(t,e,n){var r=[t.getOrCreateTextureData(n[0],e.inputLayouts[0])];n.slice(1).forEach((function(e){return r.push(t.getOrCreateTextureData(e))}));var o=t.createTextureDataFromLayout(e.outputLayout,r[0].tensor.type);return{inputTextureDatas:r,outputTextureData:o,uniformData:{}}},e}(i.BatchNormalization);e.WebGLBatchNormalization=u},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new 
n)});Object.defineProperty(e,"__esModule",{value:!0}),e.glslPRelu=e.glslPow=e.glslXor=e.glslOr=e.glslAnd=e.glslLess=e.glslGreater=e.glslEqual=e.glslSub=e.glslMul=e.glslDiv=e.glslAdd=e.WebGLBinaryOp=void 0;var i=n(15),a=n(0),u=n(5),s=n(2),l=function(t){function e(e,n,r,o){var i=t.call(this,e,r,o)||this;return i.glslFunc=n,i}return o(e,t),e.prototype.run=function(t,e){return t.run(this,e)},e.prototype.createProgramInfo=function(t,e){var n=e.map((function(e){return t.getOrCreateTextureLayout(e)}));if(!a.ShapeUtil.areEqual(e[0].dims,e[1].dims)){var r=a.BroadcastUtil.calcShape(e[0].dims,e[1].dims,!1);if(!r)throw new Error("Can't perform binary op on the given tensors");var o=r.length,i=0!==e[0].dims.length?e[0].dims.length:1,u=0!==e[1].dims.length?e[1].dims.length:1,l=0!==e[0].dims.length?"bcastIndices_A(indices, aindices);":"aindices[0] = 0;",c=0!==e[1].dims.length?"bcastIndices_B(indices, bindices);":"bindices[0] = 0;",f="\n "+this.glslFunc.body+"\n float process(int indices["+o+"]) {\n int aindices["+i+"];\n int bindices["+u+"];\n "+l+"\n "+c+"\n return "+this.glslFunc.name+"(_A(aindices), _B(bindices));\n }";return{inputLayouts:n,outputLayout:t.createTextureLayoutFromShape(r),samplers:["A","B"],shaderSource:f}}var p=s.getGlsl(t.session.backend.glContext.version),h="\n "+this.glslFunc.body+"\n void main() {\n vec4 v1 = "+p.texture2D+"(A, TexCoords);\n vec4 v2 = "+p.texture2D+"(B, TexCoords);\n vec4 result = "+this.glslFunc.name+"(v1, v2);\n "+p.output+" = result;\n }\n ";return{hasMain:!0,inputLayouts:n,outputLayout:t.createTextureLayoutFromShape(e[0].dims),samplers:["A","B"],shaderSource:h}},e.prototype.createRunData=function(t,e,n){return{inputTextureDatas:n.map((function(n,r){return t.getOrCreateTextureData(n,e.inputLayouts[r])})),outputTextureData:t.createTextureDataFromLayout(e.outputLayout,this.resultType?this.resultType:n[0].type),uniformData:{}}},e}(i.BinaryOp);e.WebGLBinaryOp=l,e.glslAdd=function(){return{body:"\n float add_(float a, float b) {\n return a + b;\n }\n vec4 add_(vec4 v1, vec4 v2) {\n return v1 + v2;\n }\n ",name:"add_",type:u.FunctionType.ValueBased}},e.glslDiv=function(){return{body:"\n float div_(float a, float b) {\n return a / b;\n }\n vec4 div_(vec4 v1, vec4 v2) {\n return v1 / v2;\n }\n ",name:"div_",type:u.FunctionType.ValueBased}},e.glslMul=function(){return{body:"\n float mul_(float a, float b) {\n return a * b;\n }\n vec4 mul_(vec4 v1, vec4 v2) {\n return v1 * v2;\n }\n ",name:"mul_",type:u.FunctionType.ValueBased}},e.glslSub=function(){return{body:"\n float sub_(float a, float b) {\n return a - b;\n }\n vec4 sub_(vec4 v1, vec4 v2) {\n return v1 - v2;\n }\n ",name:"sub_",type:u.FunctionType.ValueBased}},e.glslEqual=function(){return{body:"\n float equal_(float a, float b) {\n return float(a == b);\n }\n vec4 equal_(vec4 v1, vec4 v2) {\n return vec4( v1 == v2 );\n }\n ",name:"equal_",type:u.FunctionType.ValueBased}},e.glslGreater=function(){var t="greater_";return{body:"\n float greater_(float a, float b) {\n return float(a > b);\n }\n vec4 greater_(vec4 v1, vec4 v2) {\n return vec4( v1.r > v2.r ,\n v1.g > v2.g,\n v1.b > v2.b,\n v1.a > v2.a );\n }\n ",name:t,type:u.FunctionType.ValueBased}},e.glslLess=function(){return{body:"\n float less_(float a, float b) {\n return float(a < b);\n }\n vec4 less_(vec4 v1, vec4 v2) {\n return vec4( v1.r < v2.r ,\n v1.g < v2.g,\n v1.b < v2.b,\n v1.a < v2.a );\n }\n ",name:"less_",type:u.FunctionType.ValueBased}},e.glslAnd=function(){return{body:"\n float and_(float a, float b) {\n return float( bool(a) && bool(b) );\n }\n vec4 
and_(vec4 v1, vec4 v2) {\n bvec4 b1 = bvec4(v1);\n bvec4 b2 = bvec4(v2);\n return vec4( b1.r && b2.r ,\n b1.g && b2.g,\n b1.b && b2.b,\n b1.a && b2.a );\n }\n ",name:"and_",type:u.FunctionType.ValueBased}},e.glslOr=function(){return{body:"\n float or_(float a, float b) {\n return float( bool(a) || bool(b) );\n }\n vec4 or_(vec4 v1, vec4 v2) {\n bvec4 b1 = bvec4(v1);\n bvec4 b2 = bvec4(v2);\n return vec4( b1.r || b2.r ,\n b1.g || b2.g,\n b1.b || b2.b,\n b1.a || b2.a );\n }\n ",name:"or_",type:u.FunctionType.ValueBased}},e.glslXor=function(){return{body:"\n float xor_(float a, float b) {\n return float( bool(a) ^^ bool(b) );\n }\n vec4 xor_(vec4 v1, vec4 v2) {\n bvec4 b1 = bvec4(v1);\n bvec4 b2 = bvec4(v2);\n return vec4( b1.r ^^ b2.r ,\n b1.g ^^ b2.g,\n b1.b ^^ b2.b,\n b1.a ^^ b2.a );\n }\n ",name:"xor_",type:u.FunctionType.ValueBased}},e.glslPow=function(){return function(t){var e=t+"_";return{body:"\n float "+e+"(float a, float b) {\n return "+t+"(a, b);\n }\n vec4 "+e+"(vec4 v1, vec4 v2) {\n return "+t+"(v1, v2);\n }\n ",name:e,type:u.FunctionType.ValueBased}}("pow")},e.glslPRelu=function(){return{body:"\n float prelu_(float a, float b) {\n return a < 0.0 ? a * b: a;\n }\n vec4 prelu_(vec4 v1, vec4 v2) {\n return vec4(\n v1.r < 0.0 ? v1.r * v2.r: v1.r,\n v1.g < 0.0 ? v1.g * v2.g: v1.g,\n v1.b < 0.0 ? v1.b * v2.b: v1.b,\n v1.a < 0.0 ? v1.a * v2.a: v1.a\n );\n }\n ",name:"prelu_",type:u.FunctionType.ValueBased}}},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.WebGLClip=void 0;var i=n(49),a=n(2),u=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return t.run(this,e)},e.prototype.createProgramInfo=function(t,e){var n=e[0].dims.slice(),r=a.getGlsl(t.session.backend.glContext.version),o="\n const float min = float("+this.min+");\n const float max = float("+this.max+");\n void main() {\n float v = "+r.texture2D+"(A, TexCoords).r;\n "+r.output+" = vec4(clamp(v, min, max));\n }\n ";return{inputLayouts:[t.getOrCreateTextureLayout(e[0])],outputLayout:t.createTextureLayoutFromShape(n),samplers:["A"],shaderSource:o,hasMain:!0}},e.prototype.createRunData=function(t,e,n){var r=[t.getOrCreateTextureData(n[0],e.inputLayouts[0])];return{inputTextureDatas:r,outputTextureData:t.createTextureDataFromLayout(e.outputLayout,r[0].tensor.type),uniformData:{}}},e}(i.Clip);e.WebGLClip=u},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.WebGLConcat=void 0;var i=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return t.run(this,e)},e.prototype.createProgramInfo=function(t,e){var n=e[0].dims.slice();if(this.axis>=n.length||this.axis<-1*n.length)throw new Error("axis specified for concat doesn't match input 
dimensionality");this.axis<0&&(this.axis=n.length+this.axis);for(var r=n.slice(0),o=1;o0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a},a=this&&this.__spread||function(){for(var t=[],e=0;e=3?r[2]:void 0,a=t.getTextureData(o.dataId);if(!a){u.Logger.verbose("Conv","Did not find the adjustedKernel texture in the cache. Creating rew.");var s=e.prepKernelForDotProduct(o.dims.slice(),this.group,4,o.floatData);a=t.createTextureDataFromLayoutBindTensor(n[1].inputLayouts[1],o.type,s,o)}var l={inputTextureDatas:[t.getOrCreateTextureData(r[0])],outputTextureData:t.createTextureDataFromLayout(n[0].outputLayout,r[0].type),uniformData:{}},c=[l.outputTextureData,a];return i&&c.push(t.getOrCreateTextureData(i)),[l,{inputTextureDatas:c,outputTextureData:t.createTextureDataFromLayout(n[1].outputLayout,r[0].type),uniformData:{},draw:function(t,e){for(var n=t.gl,r=e.programInfo.params.sharedDim,o=e.programInfo.params.sharedDimReadSize,i=e.uniformLocations.find((function(t){return"sharedDimOffset"===t.name})).location,a=!1,s=0;s= 0 &&\n xh2 < XH &&\n xw2 >= 0 &&\n xw2 < XW) {\n v[i] = _X(x);\n }\n }\n ++p;\n }\n return v;\n }\n ";return{inputLayouts:[t.createTextureLayoutFromShape(o)],outputLayout:s,samplers:["X"],shaderSource:l}},e.prototype.createDotProductProgramInfo=function(t,e,n,r){var o,i=n[0].dims.slice(),a=n[1].dims.slice(),u=[a[0],Math.ceil(i[1]*a[2]*a[3]/4)],s=t.createTextureLayoutFromShape(u,4,[u[0],4*u[1]],{breakAxis:1}),l=r.length,f=[e,s];3===n.length&&(o=t.createTextureLayoutFromShape(n[2].dims.slice()),f.push(o));var p=t.createTextureLayoutFromShape(r),h=n.length<3?"0.0":"_B(b)",d=e.shape[3],y=t.session.backend.glContext.isBlendSupported&&t.session.backend.matmulMaxBatchSize?this.calcSharedDimReadSize(t.session.backend.matmulMaxBatchSize,d):d,g=["Im2Col","K"];3===n.length&&g.push("B");var m=c.getGlsl(t.session.backend.glContext.version),v="\n float process(int indices["+l+"]) {\n int b[1];\n b[0] = indices[1];\n int im2col["+e.shape.length+"];\n im2col[0] = indices[0];\n im2col[1] = indices[2];\n im2col[2] = indices[3];\n int im2colOffset = im2col[0] * "+e.strides[0]+" + im2col[1] * "+e.strides[1]+" + im2col[2] * "+e.strides[2]+" + sharedDimOffset;\n int kernelOffset = indices[1] * "+s.strides[0]+" + sharedDimOffset;\n float sum = sharedDimOffset == 0 ? "+h+" : 0.0;\n for (int i = 0; i < "+y+"; ++i) {\n vec2 im2colCoords = offsetToCoords(im2colOffset, "+e.width+", "+e.height+");\n vec2 kernelCoords = offsetToCoords(kernelOffset, "+s.width+", "+s.height+");\n sum += dot("+m.texture2D+"(Im2Col, im2colCoords), "+m.texture2D+"(K, kernelCoords));\n ++im2colOffset;\n ++kernelOffset;\n }\n return sum;\n }";return{inputLayouts:3===n.length?[e,s,o]:[e,s],outputLayout:p,shaderSource:v,samplers:g,variables:[{name:"sharedDimOffset",type:"int"}],params:{sharedDim:d,sharedDimReadSize:y}}},e.prepKernelForDotProduct=function(t,e,n,r){if(1===e&&(1===n||t[2]*t[3]%n==0))return r;for(var o=t[0],i=t[1]*t[2]*t[3],a=Math.ceil(i*e/n)*n,u=new Float32Array(o*a),s=0;s= 0.0 ? 
v: (exp(v) - 1.0) * "+this.alpha.toExponential()+"); /* float number format */\n }\n ";return{inputLayouts:[t.getOrCreateTextureLayout(e[0])],outputLayout:t.createTextureLayoutFromShape(n),samplers:["A"],shaderSource:o,hasMain:!0}},e.prototype.createRunData=function(t,e,n){var r=[t.getOrCreateTextureData(n[0],e.inputLayouts[0])];return{inputTextureDatas:r,outputTextureData:t.createTextureDataFromLayout(e.outputLayout,r[0].tensor.type),uniformData:{}}},e}(i.Elu);e.WebGLElu=u},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.Elu=void 0;var r=function(){function t(){}return t.prototype.initialize=function(t){this.alpha=t.getFloat("alpha",1)},t.prototype.checkInputs=function(t){return!(!t||1!==t.length)&&this.checkInputTypes(t)},t.prototype.checkInputTypes=function(t){return"float32"===t[0].type||"float64"===t[0].type},t}();e.Elu=r},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.WebGLFlatten=void 0;var i=n(32),a=n(0),u=n(10),s=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){var n=a.ShapeUtil.flattenShape(e[0].dims,this.axis);return[u.reshape(t,e[0],n)]},e}(i.Flatten);e.WebGLFlatten=s},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.WebGLGather=void 0;var i=n(33),a=n(0),u=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return t.run(this,e)},e.prototype.createProgramInfo=function(t,e){var n=e[0].dims.slice(),r=e[1].dims.slice(),o=new Array(n.length+r.length-1);if(0===o.length)throw Error("A scalar tensor output has not been supported");for(var i=a.ShapeUtil.normalizeAxis(this.axis,n.length),u=[],s=0;s0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a};Object.defineProperty(e,"__esModule",{value:!0}),e.WebGLGemm=void 0;var a=n(19),u=n(0),s=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return t.run(this,e)},e.prototype.createProgramInfo=function(t,e){var n=e[0].dims.slice(),r=e[1].dims.slice(),o=i(u.GemmUtil.getShapeOfGemmResult(n,this.transA,r,this.transB,3===e.length?e[2].dims:void 0),2),a=[o[0],o[1]];if(!a)throw new Error("Can't use gemm on the given tensors");var s=n[n.length-1],l="";this.transA&&(s=n[0]),this.transA&&this.transB?l="value += _A_T(a) * _B_T(b);":this.transA&&!this.transB?l="value += _A_T(a) * _B(b);":!this.transA&&this.transB?l="value += _A(a) * _B_T(b);":this.transA||this.transB||(l="value += _A(a) * _B(b);");var c=a.length,f="\n float process(int indices["+c+"]) {\n int a["+c+"];\n int b["+c+"];\n "+(3===e.length?"int c["+e[2].dims.length+"];":"")+"\n\n copyVec(indices, a);\n copyVec(indices, b);\n 
"+(3===e.length?"bcastIndices_C(indices, c);":"")+"\n\n float value = 0.0;\n for (int k=0; k<"+s+"; ++k) {\n a["+(c-1)+"] = k;\n b["+(c-2)+"] = k;\n "+l+"\n }\n\n value = value * alpha;\n "+(3===e.length?"value += beta * _C(c);":"")+"\n return value;\n }";return{inputLayouts:e.map((function(e){return t.getOrCreateTextureLayout(e)})),outputLayout:t.createTextureLayoutFromShape(a),samplers:3===e.length?["A","B","C"]:["A","B"],variables:[{name:"alpha",type:"float"},{name:"beta",type:"float"}],shaderSource:f}},e.prototype.createRunData=function(t,e,n){var r=n.map((function(n,r){return t.getOrCreateTextureData(n,e.inputLayouts[r])}));return{inputTextureDatas:r,outputTextureData:t.createTextureDataFromLayout(e.outputLayout,r[0].tensor.type),uniformData:{alpha:this.alpha,beta:this.beta}}},e}(a.Gemm);e.WebGLGemm=s},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.WebGLImageScaler=void 0;var i=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return t.run(this,e)},e.prototype.createProgramInfo=function(t,e){var n=e[0].dims.slice(),r=n.length,o="\n "+this.createGetBiasMethod(this.bias.length)+"\n float process(int indices["+r+"]) {\n return _X(indices) * scale + getBias(bias, indices[1]);\n }";return{inputLayouts:[t.getOrCreateTextureLayout(e[0])],outputLayout:t.createTextureLayoutFromShape(n),samplers:["X"],variables:[{name:"bias",type:"float",arrayLength:this.bias.length},{name:"scale",type:"float"}],shaderSource:o}},e.prototype.createRunData=function(t,e,n){var r=[t.getOrCreateTextureData(n[0],e.inputLayouts[0])];return{inputTextureDatas:r,outputTextureData:t.createTextureDataFromLayout(e.outputLayout,r[0].tensor.type),uniformData:{bias:this.bias,scale:this.scale}}},e.prototype.createGetBiasMethod=function(t){for(var e=["float getBias(float bias["+t+"], int channel) {"],n=0;n=0;--c)l+="\n k = m["+c+"] - "+a[c]+";\n if (k < 0) return constant;\n if (k >= "+n[c]+") return constant;\n offset += k * "+r[c]+";\n ";return"\n float pad"+e+"(int m["+s+"]) {\n const float constant = float("+u+");\n int offset = 0;\n int k = 0;\n "+l+"\n vec2 coords = offsetToCoords(offset, "+o+", "+i+");\n float value = getColorAsFloat("+t.texture2D+"("+e+", coords));\n return value;\n }\n "}(t,e,n.shape,n.strides,n.width,n.height,o,i);case"reflect":return function(t,e,n,r,o,i,a){for(var u=n.length,s="",l=u-1;l>=0;--l)s+="\n k = m["+l+"] - "+a[l]+";\n if (k < 0) { k = -k; }\n {\n const int _2n_1 = "+2*(n[l]-1)+";\n k = int( mod( float(k), float(_2n_1) ) ) ;\n if(k >= "+n[l]+") { k = _2n_1 - k; }\n }\n offset += k * "+r[l]+";\n ";return"\n float pad"+e+"(int m["+u+"]) {\n int offset = 0;\n int k = 0;\n "+s+"\n vec2 coords = offsetToCoords(offset, "+o+", "+i+");\n float value = getColorAsFloat("+t.texture2D+"("+e+", coords));\n return value;\n }\n "}(t,e,n.shape,n.strides,n.width,n.height,o);case"edge":return function(t,e,n,r,o,i,a){for(var u=n.length,s="",l=u-1;l>=0;--l)s+="\n k = m["+l+"] - "+a[l]+";\n if (k < 0) k = 0;\n if (k >= "+n[l]+") k = "+(n[l]-1)+";\n offset += k * "+r[l]+";\n ";return"\n float pad"+e+"(int m["+u+"]) {\n int offset = 0;\n int k = 0;\n "+s+"\n vec2 coords = 
offsetToCoords(offset, "+o+", "+i+");\n float value = getColorAsFloat("+t.texture2D+"("+e+", coords));\n return value;\n }\n "}(t,e,n.shape,n.strides,n.width,n.height,o);default:throw new Error("Invalid mode")}}e.WebGLPad=s,e.getPadFunction=l},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.offsetToIndices=e.copyArray=e.GeneratePoolingCode=e.WebGLMaxPool=e.WebGLGlobalMaxPool=e.WebGLAveragePool=e.WebGLGlobalAveragePool=void 0;var i=n(21),a=n(0),u=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return t.run(this,e)},e.prototype.createProgramInfo=function(t,e){return l(t,e,!0,this.kernelShape,this.autoPad,this.strides,this.pads,this.countIncludePad)},e.prototype.createRunData=function(t,e,n){var r=[t.getOrCreateTextureData(n[0],e.inputLayouts[0])];return{inputTextureDatas:r,outputTextureData:t.createTextureDataFromLayout(e.outputLayout,r[0].tensor.type),uniformData:{}}},e}(i.GlobalAveragePool);e.WebGLGlobalAveragePool=u;var s=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return t.run(this,e)},e.prototype.createProgramInfo=function(t,e){return l(t,e,!1,this.kernelShape,this.autoPad,this.strides,this.pads,this.countIncludePad)},e.prototype.createRunData=function(t,e,n){var r=[t.getOrCreateTextureData(n[0],e.inputLayouts[0])];return{inputTextureDatas:r,outputTextureData:t.createTextureDataFromLayout(e.outputLayout,r[0].tensor.type),uniformData:{}}},e}(i.AveragePool);function l(t,e,n,r,o,i,u,s){void 0===r&&(r=[]),void 0===o&&(o=""),void 0===i&&(i=[]),void 0===u&&(u=[]);var l=e[0].dims.slice();a.PoolConvUtil.adjustPoolAttributes(n,l,r,i,u);var c=a.PoolConvUtil.computePoolOutputShape(n,l,i,r,u,o),f=a.ShapeUtil.size(r),p="";p+=s?"value /= float("+f+");":"value /= float("+f+" - pad);";var d=t.getOrCreateTextureLayout(e[0]),y="\n "+h(d,r,u,i,"value += _X(x);",p,"0.0")+"\n ";return{inputLayouts:[d],outputLayout:t.createTextureLayoutFromShape(c),samplers:["X"],shaderSource:y}}e.WebGLAveragePool=s;var c=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return t.run(this,e)},e.prototype.createProgramInfo=function(t,e){return p(t,e,!0,this.kernelShape,this.autoPad,this.strides,this.pads)},e.prototype.createRunData=function(t,e,n){var r=[t.getOrCreateTextureData(n[0])];return{inputTextureDatas:r,outputTextureData:t.createTextureDataFromLayout(e.outputLayout,r[0].tensor.type),uniformData:{}}},e}(i.GlobalMaxPool);e.WebGLGlobalMaxPool=c;var f=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return t.run(this,e)},e.prototype.createProgramInfo=function(t,e){return p(t,e,!1,this.kernelShape,this.autoPad,this.strides,this.pads)},e.prototype.createRunData=function(t,e,n){var r=[t.getOrCreateTextureData(n[0])];return{inputTextureDatas:r,outputTextureData:t.createTextureDataFromLayout(e.outputLayout,r[0].tensor.type),uniformData:{}}},e}(i.MaxPool);function p(t,e,n,r,o,i,u){void 0===r&&(r=[]),void 0===o&&(o=""),void 0===i&&(i=[]),void 0===u&&(u=[]);var 
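// NOTE: p() builds the MaxPool / GlobalMaxPool shader program, mirroring l() above for the
// AveragePool variants: PoolConvUtil.adjustPoolAttributes / computePoolOutputShape normalize
// kernelShape, strides and pads against the input dims, then the shared generator h() emits a
// GLSL process() that folds "value = max(_X(x), value);" over the kernel window, seeded with
// -1e5; h() special-cases 1-D and 2-D kernels and falls back to a generic offsetToIndices walk
// for higher-rank kernels.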
s=e[0].dims.slice();a.PoolConvUtil.adjustPoolAttributes(n,s,r,i,u);var l=a.PoolConvUtil.computePoolOutputShape(n,s,i,r,u,o),c=t.createTextureLayoutFromShape(s),f="\n "+h(c,r,u,i,"\n value = max(_X(x), value);\n ","","-1e5")+"\n ";return{inputLayouts:[c],outputLayout:t.createTextureLayoutFromShape(l),samplers:["X"],shaderSource:f}}function h(t,e,n,r,o,i,u){var s=t.shape,l=t.shape.length;if(e.length<=2){var c=e[e.length-1],f=r[r.length-1],p=n[n.length/2-1],h=n[n.length-1],g=s[l-1],m="",v="",b="";if(m=p+h!==0?"\n for (int i = 0; i < "+c+"; i++) {\n x["+l+" - 1] = indices["+l+" - 1] * "+f+" - "+p+" + i;\n if (x["+l+" - 1] < 0 || x["+l+" - 1] >= "+g+") {\n pad++;\n continue;\n }\n "+o+"\n }":"\n for (int i = 0; i < "+c+"; i++) {\n x["+l+" - 1] = indices["+l+" - 1] * "+f+" - "+p+" + i;\n "+o+"\n }",2===e.length){var _=e[e.length-2],w=r[r.length-2],x=n[n.length/2-2],T=n[n.length-2],O=s[l-2];v=x+T!==0?"\n for (int j = 0; j < "+_+"; j++) {\n x["+l+" - 2] = indices["+l+" - 2] * "+w+" - "+x+" + j;\n if (x["+l+" - 2] < 0 || x["+l+" - 2] >= "+O+") {\n pad+= "+c+";\n continue;\n }\n ":"\n for (int j = 0; j < "+_+"; j++) {\n x["+l+" - 2] = indices["+l+" - 2] * "+w+" - "+x+" + j;\n ",b="\n }\n "}return"\n float process(int indices["+l+"]) {\n int x["+l+"];\n copyVec(indices, x);\n\n float value = "+u+";\n int pad = 0;\n "+v+"\n "+m+"\n "+b+"\n "+i+"\n return value;\n }\n "}var S=a.ShapeUtil.size(e),P=a.ShapeUtil.computeStrides(e),A=P.length,D=n.length,E=y(A),I=d(s,"inputDims"),L=d(n,"pads"),M=d(P,"kernelStrides");return"\n "+E+"\n float process(int indices["+l+"]) {\n int x["+l+"];\n copyVec(indices, x);\n int offset["+A+"];\n int pads["+D+"];\n int inputDims["+l+"];\n int kernelStrides["+A+"];\n int strides["+A+"];\n "+L+"\n "+I+"\n "+d(r,"strides")+"\n "+M+"\n\n float value = "+u+";\n int pad = 0;\n bool isPad = false;\n for (int i = 0; i < "+S+"; i++) {\n offsetToIndices(i, kernelStrides, offset);\n isPad = false;\n for (int j = "+l+" - "+A+"; j < "+l+"; j++) {\n x[j] = indices[j] * strides[j - "+l+" + "+A+"]\n + offset[j - "+l+" + "+A+"] - pads[j - 2];\n "+(n.reduce((function(t,e){return t+e}))?"\n if (x[j] >= inputDims[j] || x[j] < 0) {\n pad++;\n isPad = true;\n break;\n }\n }\n if (!isPad) {\n "+o+"\n }":"\n }\n "+o)+"\n }\n "+i+"\n\n return value;\n }"}function d(t,e){for(var n="",r=0;r=0||0===i.length?(this.keepDims&&n.push(1),s="\n for(int j"+l+" = 0; j"+l+" < "+e[0].dims[l]+"; j"+l+"++) {\n inputIdx["+l+"] = j"+l+";\n "+s+"\n }\n "):(o.push("inputIdx["+l+"] = outputIdx["+n.length+"];"),n.push(e[0].dims[l]));var c="\n float process(int outputIdx["+(n.length||1)+"]) {\n float value; // final result\n int inputIdx["+r+"]; // addressing input data\n "+o.join("\n")+"\n "+u[0]+" // init ops for reduce max/min\n "+s+"\n "+u[2]+" // final computation for reduce mean\n return value;\n }";return{inputLayouts:e.map((function(e){return t.getOrCreateTextureLayout(e)})),outputLayout:t.createTextureLayoutFromShape(n),samplers:["A"],shaderSource:c}},e.prototype.createRunData=function(t,e,n){var r=n.map((function(n,r){return t.getOrCreateTextureData(n,e.inputLayouts[r])}));return{inputTextureDatas:r,outputTextureData:t.createTextureDataFromLayout(e.outputLayout,r[0].tensor.type),uniformData:{}}},e}(i.ReduceBase),s=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.getOps=function(t){return["value = 0.0;","value += _A(inputIdx);",""]},e}(u);e.WebGLReduceSum=s;var l=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return 
o(e,t),e.prototype.getOps=function(t,e){for(var n=1,r=0;r=0||0===e.length)&&(n*=t[0].dims[r]);return["value = 0.0;","value += _A(inputIdx);","value /= "+n+".;"]},e}(u);e.WebGLReduceMean=l;var c=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.getOps=function(t,e){for(var n=[],r=0;r=0||0===e.length)&&n.push("inputIdx["+r+"] = 0;");return[n.join("\n")+"\nvalue = _A(inputIdx);","value = max(value, _A(inputIdx));",""]},e}(u);e.WebGLReduceMax=c;var f=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.getOps=function(t,e){for(var n=[],r=0;r=0||0===e.length)&&n.push("inputIdx["+r+"] = 0;");return[n.join("\n")+"\nvalue = _A(inputIdx);","value = min(value, _A(inputIdx));",""]},e}(u);e.WebGLReduceMin=f;var p=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.getOps=function(t){return["value = 1.0;","value *= _A(inputIdx);",""]},e}(u);e.WebGLReduceProd=p;var h=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.getOps=function(t){return["value = 0.0;","value += _A(inputIdx);","value = log(value);"]},e}(u);e.WebGLReduceLogSum=h;var d=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.getOps=function(t){return["float t; value = 0.0;","t = _A(inputIdx); value += t * t;",""]},e}(u);e.WebGLReduceSumSquare=d},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)});Object.defineProperty(e,"__esModule",{value:!0}),e.WebGLSliceV10=e.WebGLSlice=void 0;var i=n(38),a=n(0),u=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return t.run(this,e)},e.prototype.createProgramInfo=function(t,e){return l(t,e[0],this.starts,this.ends,this.axes)},e.prototype.createRunData=function(t,e,n){return c(t,e,n)},e}(i.Slice);e.WebGLSlice=u;var s=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){return t.run(this,e)},e.prototype.createProgramInfo=function(t,e){if(!t.session.isInitializer(e[1].dataId)||!t.session.isInitializer(e[2].dataId)||e.length>=4&&!t.session.isInitializer(e[3].dataId)||e.length>=5&&!t.session.isInitializer(e[4].dataId))throw new Error("dynamic slice attributes are not allowed");if(e.length>=5&&e[4].integerData.some((function(t){return 1!==t})))throw new Error("currently non-1 steps is not supported for Slice");var n=Array.from(e[1].integerData),r=Array.from(e[2].integerData),o=e.length>=4?Array.from(e[3].integerData):[];return l(t,e[0],n,r,o)},e.prototype.createRunData=function(t,e,n){return c(t,e,n)},e}(i.SliceV10);function l(t,e,n,r,o){0===o.length&&(o=e.dims.slice(0).map((function(t,e){return e}))),o=a.ShapeUtil.normalizeAxes(o,e.dims.length),n=n.map((function(t,n){return t>e.dims[o[n]]-1?e.dims[o[n]]:a.ShapeUtil.normalizeAxis(t,e.dims[o[n]])})),r=r.map((function(t,n){return t>e.dims[o[n]]-1?e.dims[o[n]]:a.ShapeUtil.normalizeAxis(t,e.dims[o[n]])}));for(var i=e.dims.slice(),u=[],s=0;s0&&u.push("outputIdx["+o[s]+"] += "+n[s]+";");var l="\n float process(int outputIdx["+i.length+"]) {\n "+u.join("\n ")+"\n return _A(outputIdx);\n 
}";return{inputLayouts:[t.getOrCreateTextureLayout(e)],outputLayout:t.createTextureLayoutFromShape(i),samplers:["A"],shaderSource:l}}function c(t,e,n){var r=[t.getOrCreateTextureData(n[0],e.inputLayouts[0])];return{inputTextureDatas:r,outputTextureData:t.createTextureDataFromLayout(e.outputLayout,r[0].tensor.type),uniformData:{}}}e.WebGLSliceV10=s},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)}),i=this&&this.__read||function(t,e){var n="function"==typeof Symbol&&t[Symbol.iterator];if(!n)return t;var r,o,i=n.call(t),a=[];try{for(;(void 0===e||e-- >0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a},a=this&&this.__spread||function(){for(var t=[],e=0;e max)\n max = current;\n }\n\n return max;\n }";return{inputLayouts:[i],outputLayout:t.createTextureLayoutFromShape(o),samplers:["A"],shaderSource:f}},e.prototype.createProgramInfos=function(t,e){var n=e[0].dims.slice(),r=s.ShapeUtil.normalizeAxis(this.axis,n.length),o=s.ShapeUtil.sizeToDimension(n,r),i=s.ShapeUtil.sizeFromDimension(n,r),a=this.createComputeMaxProgramInfo(t,e[0],o,i,[o]),u=this.createComputScaleProgramInfo(t,e[0],o,i,a.outputLayout,[o]);return[a,u,this.createSoftMaxProgramInfo(t,e[0],o,i,a.outputLayout,u.outputLayout)]},e.prototype.createRunDatas=function(t,e,n){var r=n[0].type,o=t.getOrCreateTextureData(n[0],e[0].inputLayouts[0]),i=[];i.push({inputTextureDatas:[o],outputTextureData:t.createTextureDataFromLayout(e[0].outputLayout,r),uniformData:{}});for(var u=1;u0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a};Object.defineProperty(e,"__esModule",{value:!0}),e.WebGLSplit=void 0;var a=n(146),u=n(0),s=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype.run=function(t,e){var n=this;if(!this.artifacts){this.artifacts=[];for(var r=u.ShapeUtil.normalizeAxis(this.axis,e[0].dims.length),o=this.getProgramCount(t,e,r),i=0;i0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a},a=this&&this.__spread||function(){for(var t=[],e=0;e=0;p--)l[p]=p===u-1?1:l[p+1]*o[p+1],c[p]=p===u-1?1:c[p+1]*e[0].dims[p+1],f+="\n output_pitches["+p+"] = "+l[p]+";\n input_pitches["+p+"] = "+c[p]+";\n ";var h="\n float getInputFloat(int index) {\n vec2 coords = offsetToCoords(index, "+r.width+", "+r.height+");\n float value = getColorAsFloat("+s.texture2D+"(X, coords));\n return value;\n }\n ";return{inputLayouts:[r],outputLayout:i,samplers:["X"],shaderSource:"nearest"===this.mode?"\n "+h+"\n float process(int indices["+u+"]) {\n int input_index = 0;\n int output_index = coordsToOffset(TexCoords, "+i.width+", "+i.height+");\n\n "+f+"\n\n int d, m;\n for (int dim = 0; dim < "+u+"; ++dim) {\n d = output_index / output_pitches[dim];\n m = output_index - d * output_pitches[dim];\n output_index = m;\n\n if (scales[dim] != 1 && d > 0) {\n int d2 = d / scales[dim];\n m = d - d2 * scales[dim];\n d = d2;\n }\n input_index += input_pitches[dim] * d;\n }\n\n return getInputFloat(input_index);\n }":4===u?"\n "+h+"\n float process(int 
indices[4]) {\n int input_index = 0;\n int output_index = coordsToOffset(TexCoords, "+i.width+", "+i.height+");\n\n "+f+"\n\n int m;\n int index_of_dim0, index_of_dim1, index_of_dim2, index_of_dim3;\n index_of_dim0 = output_index / output_pitches[0];\n m = output_index - index_of_dim0 * output_pitches[0];\n index_of_dim1 = m / output_pitches[1];\n m = m - index_of_dim1 * output_pitches[1];\n index_of_dim2 = m / output_pitches[2];\n m = m - index_of_dim2 * output_pitches[2];\n index_of_dim3 = m;\n\n int index_of_input_dim2, index_of_input_dim3, x_offset, y_offset;\n index_of_input_dim2 = index_of_dim2 / scales[2];\n y_offset = index_of_dim2 - index_of_input_dim2 * scales[2];\n index_of_input_dim3 = index_of_dim3 / scales[3];\n x_offset = index_of_dim3 - index_of_input_dim3 * scales[3];\n\n input_index = index_of_dim0 * input_pitches[0] +\n index_of_dim1 * input_pitches[1] +\n index_of_input_dim2 * input_pitches[2] +\n index_of_input_dim3;\n\n float x00 = getInputFloat(input_index);\n float x10, x01, x11;\n\n bool end_of_dim2 = false;\n if (index_of_input_dim2 == ("+e[0].dims[2]+" - 1)) {\n // It's the end in dimension 2\n x01 = x00;\n end_of_dim2 = true;\n } else {\n x01 = getInputFloat(input_index + input_pitches[2]);\n }\n\n if (index_of_input_dim3 == (input_pitches[2] - 1)) {\n // It's the end in dimension 3\n x10 = x00;\n x11 = x01;\n }\n else {\n x10 = getInputFloat(input_index + 1);\n x11 = end_of_dim2 ? x10 : getInputFloat(input_index + input_pitches[2] + 1);\n }\n\n float y0 = x00 + float(y_offset) * (x01 - x00) / float(scales[2]);\n float y1 = x10 + float(y_offset) * (x11 - x10) / float(scales[2]);\n return y0 + float(x_offset) * (y1 - y0) / float(scales[3]);\n }":"\n "+h+"\n float process(int indices[2]) {\n int input_index = 0;\n int output_index = coordsToOffset(TexCoords, "+i.width+", "+i.height+");\n\n "+f+"\n\n int m;\n int index_of_dim0, index_of_dim1;\n index_of_dim0 = output_index / output_pitches[0];\n m = output_index - index_of_dim0 * output_pitches[0];\n index_of_dim1 = m;\n\n int index_of_input_dim0, index_of_input_dim1, x_offset, y_offset;\n index_of_input_dim0 = index_of_dim0 / scales[0];\n y_offset = index_of_dim0 - index_of_input_dim0 * scales[0];\n index_of_input_dim1 = index_of_dim1 / scales[1];\n x_offset = index_of_dim1 - index_of_input_dim1 * scales[1];\n\n input_index = index_of_input_dim0 * input_pitches[0] + index_of_input_dim1;\n\n float x00 = getInputFloat(input_index);\n float x10, x01, x11;\n\n bool end_of_dim0 = false;\n if (index_of_input_dim0 == ("+e[0].dims[0]+" - 1)) {\n // It's the end in dimension 0\n x01 = x00;\n end_of_dim0 = true;\n } else {\n x01 = getInputFloat(input_index + input_pitches[0]);\n }\n\n if (index_of_input_dim1 == (input_pitches[0] - 1)) {\n // It's the end in dimension 1\n x10 = x00;\n x11 = x01;\n }\n else {\n x10 = getInputFloat(input_index + 1);\n x11 = end_of_dim0 ? 
x10 : getInputFloat(input_index + input_pitches[0] + 1);\n }\n\n float y0 = x00 + float(y_offset) * (x01 - x00) / float(scales[0]);\n float y1 = x10 + float(y_offset) * (x11 - x10) / float(scales[0]);\n return y0 + float(x_offset) * (y1 - y0) / float(scales[1]);\n }",variables:[{name:"scales",type:"int",arrayLength:this.scales.length}]}},e.prototype.createRunData=function(t,e,n){var r=n.map((function(n,r){return t.getOrCreateTextureData(n,e.inputLayouts[r])}));return{inputTextureDatas:r,outputTextureData:t.createTextureDataFromLayout(e.outputLayout,r[0].tensor.type),uniformData:{scales:this.scales.map((function(t){return Math.ceil(t)}))}}},e}(i.Upsample);e.WebGLUpsample=u},function(t,e,n){"use strict";var r=this&&this.__assign||function(){return(r=Object.assign||function(t){for(var e,n=1,r=arguments.length;n=t.length&&(t=void 0),{value:t&&t[r++],done:!t}}};throw new TypeError(e?"Object is not iterable.":"Symbol.iterator is not defined.")};Object.defineProperty(e,"__esModule",{value:!0}),e.ProgramManager=void 0;var i=n(25),a=n(3),u=n(155),s=n(2),l=function(){function t(t,e){this.profiler=t,this.glContext=e,this.repo=new Map,this.attributesBound=!1}return t.prototype.getArtifact=function(t){return this.repo.get(t)},t.prototype.setArtifact=function(t,e){this.repo.set(t,e)},t.prototype.run=function(t,e){var n=this;this.profiler.event("backend","ProgramManager.run",(function(){var r=n.glContext.gl,o=t.program;r.useProgram(o);try{n.bindOutput(e.outputTextureData),n.attributesBound||n.bindAttributes(t.attribLocations),n.bindUniforms(t.uniformLocations,e.uniformData,e.inputTextureDatas)}catch(e){throw a.Logger.error("ProgramManager",t.programInfo.shaderSource),e}n.profiler.event("backend","GlContext.draw()",(function(){n.doDraw(t,e),r.flush()}))}))},t.prototype.dispose=function(){var t=this;this.vertexShader&&this.glContext.deleteShader(this.vertexShader),this.repo.forEach((function(e){return t.glContext.deleteProgram(e.program)}))},t.prototype.build=function(t){var e=this;return this.profiler.event("backend","ProgramManager.build",(function(){var n=new u.GlslPreprocessor(e.glContext,t),r=n.preprocess(),o=e.compile(r);return{programInfo:t,program:o,uniformLocations:e.getUniformLocations(o,n.context.programInfo.samplers,n.context.programInfo.variables),attribLocations:e.getAttribLocations(o)}}))},t.prototype.doDraw=function(t,e){e.draw?(a.Logger.verbose("ProgramManager","Custom draw function"),e.draw(this.glContext,t)):this.glContext.draw()},t.prototype.compile=function(t){if(!this.vertexShader){a.Logger.verbose("ProrgramManager","Compiling and caching Vertex shader for the first time");var e=s.getVertexShaderSource(this.glContext.version);this.vertexShader=this.glContext.compileShader(e,this.glContext.gl.VERTEX_SHADER)}i.env.debug&&a.Logger.verbose("ProrgramManager","FragShader:\n"+t+"\n");var n=this.glContext.compileShader(t,this.glContext.gl.FRAGMENT_SHADER),r=this.glContext.createProgram(this.vertexShader,n);return this.glContext.deleteShader(n),r},t.prototype.bindOutput=function(t){a.Logger.verbose("ProrgramManager","Binding output texture to Framebuffer: w/h="+t.width+"/"+t.height+", shape="+t.shape+", type="+t.tensor.type),this.glContext.attachFramebuffer(t.texture,t.width,t.height)},t.prototype.bindAttributes=function(t){var e=t.position,n=t.textureCoord;this.glContext.setVertexAttributes(e,n),this.attributesBound=!0},t.prototype.bindUniforms=function(t,e,n){var r,i,a=this.glContext.gl,u=0;try{for(var s=o(t),l=s.next();!l.done;l=s.next()){var 
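// NOTE: bindUniforms() iterates the uniform locations resolved at program-build time and
// dispatches on the declared type: "sampler2D" binds the next input texture unit via
// bindTexture(), while "float" / "int" call gl.uniform1fv / gl.uniform1iv when the uniform was
// declared with an arrayLength and gl.uniform1f / gl.uniform1i for scalars; any other type
// throws "Uniform not implemented".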
c=l.value,f=c.name,p=c.type,h=c.location,d=c.arrayLength;switch(p){case"sampler2D":this.bindTexture(n[u],h,u),u++;break;case"float":d?a.uniform1fv(h,e[f]):a.uniform1f(h,e[f]);break;case"int":d?a.uniform1iv(h,e[f]):a.uniform1i(h,e[f]);break;default:throw new Error("Uniform not implemented: "+p)}}}catch(t){r={error:t}}finally{try{l&&!l.done&&(i=s.return)&&i.call(s)}finally{if(r)throw r.error}}},t.prototype.bindTexture=function(t,e,n){this.glContext.bindTextureToUniform(t.texture,n,e)},t.prototype.getAttribLocations=function(t){return{position:this.getAttribLocation(t,"position"),textureCoord:this.getAttribLocation(t,"textureCoord")}},t.prototype.getUniformLocations=function(t,e,n){var i,a,u,s,l=[];if(e)try{for(var c=o(e),f=c.next();!f.done;f=c.next()){var p=f.value;l.push({name:p,type:"sampler2D",location:this.getUniformLocation(t,p)})}}catch(t){i={error:t}}finally{try{f&&!f.done&&(a=c.return)&&a.call(c)}finally{if(i)throw i.error}}if(n)try{for(var h=o(n),d=h.next();!d.done;d=h.next()){var y=d.value;l.push(r(r({},y),{location:this.getUniformLocation(t,y.name)}))}}catch(t){u={error:t}}finally{try{d&&!d.done&&(s=h.return)&&s.call(h)}finally{if(u)throw u.error}}return l},t.prototype.getUniformLocation=function(t,e){var n=this.glContext.gl.getUniformLocation(t,e);if(null===n)throw new Error("Uniform "+e+" not found.");return n},t.prototype.getAttribLocation=function(t,e){return this.glContext.gl.getAttribLocation(t,e)},t}();e.ProgramManager=l},function(t,e,n){"use strict";var r=this&&this.__values||function(t){var e="function"==typeof Symbol&&Symbol.iterator,n=e&&t[e],r=0;if(n)return n.call(t);if(t&&"number"==typeof t.length)return{next:function(){return t&&r>=t.length&&(t=void 0),{value:t&&t[r++],done:!t}}};throw new TypeError(e?"Object is not iterable.":"Symbol.iterator is not defined.")};Object.defineProperty(e,"__esModule",{value:!0}),e.GlslPreprocessor=void 0;var o=n(5),i=n(156),a=n(157),u=n(2),s=function(){function t(t,e){var n=this;this.libs={},this.glslLibRoutineDependencyGraph={},this.context=new o.GlslContext(t,e),Object.keys(a.glslRegistry).forEach((function(t){var e=new a.glslRegistry[t](n.context);n.libs[t]=e}));var r=this.glslLibRoutineDependencyGraph;for(var i in this.libs){var u=this.libs[i].getFunctions();for(var s in u){var l=i+"."+s,c=void 0;r[l]?(c=r[l]).routineBody=u[s].routineBody:(c=new o.GlslLibRoutineNode(l,u[s].routineBody),r[l]=c);var f=u[s].dependencies;if(f)for(var p=0;pe)){for(var u=i.length,s=e-u,l="bcastMatmulIndices_"+r,c="",f=0;f=0;--o)r+="\n offset += indices["+o+"] * "+n[o]+";\n ";return"\n int "+t+"(int indices["+e+"]) {\n int offset = 0;\n "+r+"\n return offset;\n }\n "},e.prototype.offsetToIndices=function(){var t=this.context.programInfo,n={};return this.context.programInfo.samplers.forEach((function(r,o){var i=t.inputLayouts[o].shape,u=t.inputLayouts[o].strides,s=i.length,l="offsetToIndices_"+r;n[l]=new a.GlslLibRoutine(e.offsetToIndicesSingle(l,s,u)),n[l="offsetToIndices_"+r+"_T"]=new a.GlslLibRoutine(e.offsetToIndicesSingle(l,s,u.slice().reverse()))})),n},e.offsetToIndicesSingle=function(t,e,n){for(var r=[],o=0;o= 0; --i) {\n if(i > axis) continue;\n indices[i] += 1;\n if(indices[i] < shape[i]) {\n break;\n }\n indices[i] = 0;\n }\n }\n ";e[u]=new a.GlslLibRoutine(c)})),e},e}(a.GlslLib);e.ShapeUtilsGlslLib=u},function(t,e,n){"use strict";var r,o=this&&this.__extends||(r=function(t,e){return(r=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in 
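// NOTE: the r = function(t, e) {...} wrapper above is TypeScript's downleveled __extends helper
// (prototype wiring via Object.setPrototypeOf, a __proto__ assignment, or a for-in property
// copy); the same boilerplate is repeated verbatim at the top of every class-bearing module in
// this bundle.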
e)e.hasOwnProperty(n)&&(t[n]=e[n])})(t,e)},function(t,e){function n(){this.constructor=t}r(t,e),t.prototype=null===e?Object.create(e):(n.prototype=e.prototype,new n)}),i=this&&this.__assign||function(){return(i=Object.assign||function(t){for(var e,n=1,r=arguments.length;n=t.length?1:t.slice(e.breakAxis).reduce((function(t,e){return t*e})),i=e.breakAxis<=0?1:t.slice(0,e.breakAxis).reduce((function(t,e){return t*e}));if(!(o>n||i>n))return[o,i];r.Logger.verbose("TextureLayout","Given width/height preferences were unattainable: shape:"+t+", breakAxis:"+e.breakAxis)}for(var a=t.reduce((function(t,e){return t*e})),u=Math.floor(Math.sqrt(a));u=n||a%u!=0)throw new Error("The given dimensions are outside this GPU's boundaries: "+t);return[u,a/u]},t}();e.AlwaysKeepOriginalSizeStrategy=o},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.TextureManager=void 0;var r=n(3),o=function(){function t(t,e,n,r){this.glContext=t,this.layoutStrategy=e,this.profiler=n,this.config=r,r.reuseTextures&&(this.inUseTextures=new Map,this.idleTextures=new Map,this.textureLookup=new Map)}return t.prototype.createTextureFromLayout=function(t,e,n,o){var i,a,u=this.toEncoderType(t),s=this.glContext.getEncoder(u,e.channels||1,o);if(this.config.reuseTextures){i=e.width+"x"+e.height+"_"+s.format+"_"+s.internalFormat+"_"+s.textureType,(a=this.inUseTextures.get(i))||(a=[],this.inUseTextures.set(i,a));var l=this.idleTextures.get(i);if(l&&l.length>0){var c=l.pop();return a.push(c),1===o&&this.glContext.updateTexture(c,e.width,e.height,s,this.toTextureData(t,n)),c}}r.Logger.verbose("TextureManager","Creating new texture of size "+e.width+"x"+e.height);var f=this.glContext.allocateTexture(e.width,e.height,s,this.toTextureData(t,n));return this.config.reuseTextures&&(a.push(f),this.textureLookup.set(f,i)),f},t.prototype.readTexture=function(t,e,n){var r=this;return n||(n=1),this.profiler.event("backend","TextureManager.readTexture",(function(){var o=t.shape.reduce((function(t,e){return t*e}))*n,i=r.glContext.readTexture(t.texture,t.width,t.height,o,r.toEncoderType(e),n);return r.toTensorData(e,i)}))},t.prototype.readUint8TextureAsFloat=function(t){var e=this;return this.profiler.event("backend","TextureManager.readUint8TextureAsFloat",(function(){var n=t.shape.reduce((function(t,e){return t*e})),r=e.glContext.readTexture(t.texture,t.width,t.height,4*n,"byte",4);return new Float32Array(r.buffer,r.byteOffset,n)}))},t.prototype.releaseTexture=function(t,e){var n;if(this.config.reuseTextures&&(n=this.textureLookup.get(t.texture))){e&&this.textureLookup.delete(n);var o=this.inUseTextures.get(n);if(o){var i=o.indexOf(t.texture);if(-1!==i){o.splice(i,1);var a=this.idleTextures.get(n);a||(a=[],this.idleTextures.set(n,a)),a.push(t.texture)}}}n&&!e||(r.Logger.verbose("TextureManager","Deleting texture of size "+t.width+"x"+t.height),this.glContext.deleteTexture(t.texture))},t.prototype.toTensorData=function(t,e){return e instanceof Float32Array?e:new Float32Array(e)},t.prototype.toTextureData=function(t,e){if(e)return e instanceof Float32Array?e:new Float32Array(e)},t.prototype.toEncoderType=function(t){return"float"},t.prototype.clearActiveTextures=function(){this.glContext.clearActiveTextures()},t}();e.TextureManager=o},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.createNewWebGLContext=e.createWebGLContext=void 0;var r=n(3),o=n(166),i={};function a(t){var e,n=function(){var t=document.createElement("canvas");return 
t.width=1,t.height=1,t}(),i={alpha:!1,depth:!1,antialias:!1,stencil:!1,preserveDrawingBuffer:!1,premultipliedAlpha:!1,failIfMajorPerformanceCaveat:!1};if((!t||"webgl2"===t)&&(e=n.getContext("webgl2",i)))try{return new o.WebGLContext(e,2)}catch(t){r.Logger.warning("GlContextFactory","failed to create WebGLContext using contextId 'webgl2'. Error: "+t)}if((!t||"webgl"===t)&&(e=n.getContext("webgl",i)||n.getContext("experimental-webgl",i)))try{return new o.WebGLContext(e,1)}catch(t){r.Logger.warning("GlContextFactory","failed to create WebGLContext using contextId 'webgl' or 'experimental-webgl'. Error: "+t)}throw new Error("WebGL is not supported")}e.createWebGLContext=function t(e){var n;e&&"webgl2"!==e||!("webgl2"in i)?e&&"webgl"!==e||!("webgl"in i)||(n=i.webgl):n=i.webgl2,n=n||a(e),e=e||1===n.version?"webgl":"webgl2";var r=n.gl;return i[e]=n,r.isContextLost()?(delete i[e],t(e)):(r.disable(r.DEPTH_TEST),r.disable(r.STENCIL_TEST),r.disable(r.BLEND),r.disable(r.DITHER),r.disable(r.POLYGON_OFFSET_FILL),r.disable(r.SAMPLE_COVERAGE),r.enable(r.SCISSOR_TEST),r.enable(r.CULL_FACE),r.cullFace(r.BACK),n)},e.createNewWebGLContext=a},function(t,e,n){"use strict";var r=this&&this.__createBinding||(Object.create?function(t,e,n,r){void 0===r&&(r=n),Object.defineProperty(t,r,{enumerable:!0,get:function(){return e[n]}})}:function(t,e,n,r){void 0===r&&(r=n),t[r]=e[n]}),o=this&&this.__setModuleDefault||(Object.create?function(t,e){Object.defineProperty(t,"default",{enumerable:!0,value:e})}:function(t,e){t.default=e}),i=this&&this.__importStar||function(t){if(t&&t.__esModule)return t;var e={};if(null!=t)for(var n in t)"default"!==n&&Object.hasOwnProperty.call(t,n)&&r(e,t,n);return o(e,t),e};Object.defineProperty(e,"__esModule",{value:!0}),e.WebGLContext=void 0;var a=n(25),u=i(n(167)),s=function(){function t(t,e){this.frameBufferBound=!1,this.gl=t,this.version=e,this.getExtensions(),this.vertexbuffer=this.createVertexbuffer(),this.framebuffer=this.createFramebuffer(),this.queryVitalParameters()}return t.prototype.allocateTexture=function(t,e,n,r){var o=this.gl,i=o.createTexture();o.bindTexture(o.TEXTURE_2D,i),o.texParameteri(o.TEXTURE_2D,o.TEXTURE_MIN_FILTER,o.NEAREST),o.texParameteri(o.TEXTURE_2D,o.TEXTURE_MAG_FILTER,o.NEAREST),o.texParameteri(o.TEXTURE_2D,o.TEXTURE_WRAP_S,o.CLAMP_TO_EDGE),o.texParameteri(o.TEXTURE_2D,o.TEXTURE_WRAP_T,o.CLAMP_TO_EDGE);var a=r?n.encode(r,t*e):null;return o.texImage2D(o.TEXTURE_2D,0,n.internalFormat,t,e,0,n.format,n.textureType,a),this.checkError(),i},t.prototype.updateTexture=function(t,e,n,r,o){var i=this.gl;i.bindTexture(i.TEXTURE_2D,t);var a=r.encode(o,e*n);i.texSubImage2D(i.TEXTURE_2D,0,0,0,e,n,r.format,r.textureType,a),this.checkError()},t.prototype.attachFramebuffer=function(t,e,n){var r=this.gl;r.bindTexture(r.TEXTURE_2D,t),r.bindFramebuffer(r.FRAMEBUFFER,this.framebuffer),r.framebufferTexture2D(r.FRAMEBUFFER,r.COLOR_ATTACHMENT0,r.TEXTURE_2D,t,0),this.checkError(),r.viewport(0,0,e,n),r.scissor(0,0,e,n)},t.prototype.readTexture=function(t,e,n,r,o,i){var a=this.gl;i||(i=1),this.frameBufferBound||this.attachFramebuffer(t,e,n);var u=this.getEncoder(o,i),s=u.allocate(e*n);return a.bindTexture(a.TEXTURE_2D,t),a.framebufferTexture2D(a.FRAMEBUFFER,a.COLOR_ATTACHMENT0,a.TEXTURE_2D,t,0),a.readPixels(0,0,e,n,a.RGBA,u.textureType,s),this.checkError(),u.decode(s,r)},t.prototype.isFramebufferReady=function(){return!0},t.prototype.getActiveTexture=function(){var 
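// NOTE: getActiveTexture() reports the active unit as
// "TEXTURE" + (gl.getParameter(gl.ACTIVE_TEXTURE) - gl.TEXTURE0); together with
// getTextureBinding() / getFramebufferBinding() below it exposes the current GL binding state.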
t=this.gl;return"TEXTURE"+(t.getParameter(this.gl.ACTIVE_TEXTURE)-t.TEXTURE0)},t.prototype.getTextureBinding=function(){return this.gl.getParameter(this.gl.TEXTURE_BINDING_2D)},t.prototype.getFramebufferBinding=function(){return this.gl.getParameter(this.gl.FRAMEBUFFER_BINDING)},t.prototype.setVertexAttributes=function(t,e){var n=this.gl;n.vertexAttribPointer(t,3,n.FLOAT,!1,20,0),n.enableVertexAttribArray(t),-1!==e&&(n.vertexAttribPointer(e,2,n.FLOAT,!1,20,12),n.enableVertexAttribArray(e)),this.checkError()},t.prototype.createProgram=function(t,e){var n=this.gl,r=n.createProgram();return n.attachShader(r,t),n.attachShader(r,e),n.linkProgram(r),r},t.prototype.compileShader=function(t,e){var n=this.gl,r=n.createShader(e);if(!r)throw new Error("createShader() returned null with type "+e);if(n.shaderSource(r,t),n.compileShader(r),!1===n.getShaderParameter(r,n.COMPILE_STATUS))throw new Error("Failed to compile shader: "+n.getShaderInfoLog(r));return r},t.prototype.deleteShader=function(t){this.gl.deleteShader(t)},t.prototype.bindTextureToUniform=function(t,e,n){var r=this.gl;r.activeTexture(r.TEXTURE0+e),this.checkError(),r.bindTexture(r.TEXTURE_2D,t),this.checkError(),r.uniform1i(n,e),this.checkError()},t.prototype.draw=function(){this.gl.drawArrays(this.gl.TRIANGLE_STRIP,0,4),this.checkError()},t.prototype.checkError=function(){if(a.env.debug){var t=this.gl,e=t.getError(),n="";switch(e){case t.NO_ERROR:return;case t.INVALID_ENUM:n="INVALID_ENUM";break;case t.INVALID_VALUE:n="INVALID_VALUE";break;case t.INVALID_OPERATION:n="INVALID_OPERATION";break;case t.INVALID_FRAMEBUFFER_OPERATION:n="INVALID_FRAMEBUFFER_OPERATION";break;case t.OUT_OF_MEMORY:n="OUT_OF_MEMORY";break;case t.CONTEXT_LOST_WEBGL:n="CONTEXT_LOST_WEBGL";break;default:n="Unknown WebGL Error: "+e.toString(16)}throw new Error(n)}},t.prototype.deleteTexture=function(t){this.gl.deleteTexture(t)},t.prototype.deleteProgram=function(t){this.gl.deleteProgram(t)},t.prototype.getEncoder=function(t,e,n){if(void 0===n&&(n=0),2===this.version)return new u.RedFloat32DataEncoder(this.gl,e);switch(t){case"float":return 1===n||this.isRenderFloat32Supported?new u.RGBAFloatDataEncoder(this.gl,e):new u.RGBAFloatDataEncoder(this.gl,e,this.textureHalfFloatExtension.HALF_FLOAT_OES);case"int":throw new Error("not implemented");case"byte":return new u.Uint8DataEncoder(this.gl,e);default:throw new Error("Invalid dataType: "+t)}},t.prototype.clearActiveTextures=function(){for(var t=this.gl,e=0;et.length?(r.Logger.warning("Encoder","Source data too small. 
Allocating larger array"),o=t,n=this.allocate(e*this.channelSize),o.forEach((function(t,e){return n[e]=t}))):n=o=t,n},t.prototype.allocate=function(t){return new Float32Array(4*t)},t.prototype.decode=function(t,e){return 1===this.channelSize?t.filter((function(t,e){return e%4==0})).subarray(0,e):t.subarray(0,e)},t}();e.RedFloat32DataEncoder=o;var i=function(){function t(t,e,n){if(void 0===e&&(e=1),1!==e&&4!==e)throw new Error("Invalid number of channels: "+e);this.internalFormat=t.RGBA,this.format=t.RGBA,this.channelSize=e,this.textureType=n||t.FLOAT}return t.prototype.encode=function(t,e){var n=t;return 1===this.channelSize&&(r.Logger.verbose("Encoder","Exploding into a larger array"),n=this.allocate(e),t.forEach((function(t,e){return n[4*e]=t}))),n},t.prototype.allocate=function(t){return new Float32Array(4*t)},t.prototype.decode=function(t,e){return 1===this.channelSize?t.filter((function(t,e){return e%4==0})).subarray(0,e):t.subarray(0,e)},t}();e.RGBAFloatDataEncoder=i;var a=function(){function t(t,e){if(void 0===e&&(e=1),this.channelSize=4,1===e)this.internalFormat=t.ALPHA,this.format=t.ALPHA,this.textureType=t.UNSIGNED_BYTE,this.channelSize=e;else{if(4!==e)throw new Error("Invalid number of channels: "+e);this.internalFormat=t.RGBA,this.format=t.RGBA,this.textureType=t.UNSIGNED_BYTE,this.channelSize=e}}return t.prototype.encode=function(t,e){return new Uint8Array(t.buffer,t.byteOffset,t.byteLength)},t.prototype.allocate=function(t){return new Uint8Array(t*this.channelSize)},t.prototype.decode=function(t,e){if(t instanceof Uint8Array)return t.subarray(0,e);throw new Error("Invalid array type: "+t.constructor)},t}();e.Uint8DataEncoder=a},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.envImpl=void 0;var r=n(25),o=function(){function t(){}return Object.defineProperty(t.prototype,"debug",{get:function(){return r.env.debug},set:function(t){r.env.debug=t},enumerable:!1,configurable:!0}),t}();e.envImpl=new o},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0})},function(t,e,n){"use strict";Object.defineProperty(e,"__esModule",{value:!0})},function(t,e,n){"use strict";var r=this&&this.__createBinding||(Object.create?function(t,e,n,r){void 0===r&&(r=n),Object.defineProperty(t,r,{enumerable:!0,get:function(){return e[n]}})}:function(t,e,n,r){void 0===r&&(r=n),t[r]=e[n]}),o=this&&this.__setModuleDefault||(Object.create?function(t,e){Object.defineProperty(t,"default",{enumerable:!0,value:e})}:function(t,e){t.default=e}),i=this&&this.__importStar||function(t){if(t&&t.__esModule)return t;var e={};if(null!=t)for(var n in t)"default"!==n&&Object.hasOwnProperty.call(t,n)&&r(e,t,n);return o(e,t),e};Object.defineProperty(e,"__esModule",{value:!0}),e.Tensor=void 0;var a=i(n(51));e.Tensor=a.Tensor},function(t,e,n){"use strict";var r=this&&this.__createBinding||(Object.create?function(t,e,n,r){void 0===r&&(r=n),Object.defineProperty(t,r,{enumerable:!0,get:function(){return e[n]}})}:function(t,e,n,r){void 0===r&&(r=n),t[r]=e[n]}),o=this&&this.__setModuleDefault||(Object.create?function(t,e){Object.defineProperty(t,"default",{enumerable:!0,value:e})}:function(t,e){t.default=e}),i=this&&this.__importStar||function(t){if(t&&t.__esModule)return t;var e={};if(null!=t)for(var n in t)"default"!==n&&Object.hasOwnProperty.call(t,n)&&r(e,t,n);return o(e,t),e};Object.defineProperty(e,"__esModule",{value:!0}),e.InferenceSession=void 0;var a=i(n(173));e.InferenceSession=a.InferenceSession},function(t,e,n){"use strict";var 
r=this&&this.__createBinding||(Object.create?function(t,e,n,r){void 0===r&&(r=n),Object.defineProperty(t,r,{enumerable:!0,get:function(){return e[n]}})}:function(t,e,n,r){void 0===r&&(r=n),t[r]=e[n]}),o=this&&this.__setModuleDefault||(Object.create?function(t,e){Object.defineProperty(t,"default",{enumerable:!0,value:e})}:function(t,e){t.default=e}),i=this&&this.__importStar||function(t){if(t&&t.__esModule)return t;var e={};if(null!=t)for(var n in t)"default"!==n&&Object.hasOwnProperty.call(t,n)&&r(e,t,n);return o(e,t),e},a=this&&this.__awaiter||function(t,e,n,r){return new(n||(n=Promise))((function(o,i){function a(t){try{s(r.next(t))}catch(t){i(t)}}function u(t){try{s(r.throw(t))}catch(t){i(t)}}function s(t){var e;t.done?o(t.value):(e=t.value,e instanceof n?e:new n((function(t){t(e)}))).then(a,u)}s((r=r.apply(t,e||[])).next())}))},u=this&&this.__generator||function(t,e){var n,r,o,i,a={label:0,sent:function(){if(1&o[0])throw o[1];return o[1]},trys:[],ops:[]};return i={next:u(0),throw:u(1),return:u(2)},"function"==typeof Symbol&&(i[Symbol.iterator]=function(){return this}),i;function u(i){return function(u){return function(i){if(n)throw new TypeError("Generator is already executing.");for(;a;)try{if(n=1,r&&(o=2&i[0]?r.return:i[0]?r.throw||((o=r.return)&&o.call(r),0):r.next)&&!(o=o.call(r,i[1])).done)return o;switch(r=0,o&&(i=[2&i[0],o.value]),i[0]){case 0:case 1:o=i;break;case 4:return a.label++,{value:i[1],done:!1};case 5:a.label++,r=i[1],i=[0];continue;case 7:i=a.ops.pop(),a.trys.pop();continue;default:if(!(o=a.trys,(o=o.length>0&&o[o.length-1])||6!==i[0]&&2!==i[0])){a=0;continue}if(3===i[0]&&(!o||i[1]>o[0]&&i[1]0&&o[o.length-1])||6!==i[0]&&2!==i[0])){a=0;continue}if(3===i[0]&&(!o||i[1]>o[0]&&i[1]=i)return t;switch(t){case"%s":return String(r[n++]);case"%d":return Number(r[n++]);case"%j":try{return JSON.stringify(r[n++])}catch(t){return"[Circular]"}default:return t}})),s=r[n];n=3&&(r.depth=arguments[2]),arguments.length>=4&&(r.colors=arguments[3]),d(n)?r.showHidden=n:n&&e._extend(r,n),v(r.showHidden)&&(r.showHidden=!1),v(r.depth)&&(r.depth=2),v(r.colors)&&(r.colors=!1),v(r.customInspect)&&(r.customInspect=!0),r.colors&&(r.stylize=s),c(r,t,r.depth)}function s(t,e){var n=u.styles[e];return n?"["+u.colors[n][0]+"m"+t+"["+u.colors[n][1]+"m":t}function l(t,e){return t}function c(t,n,r){if(t.customInspect&&n&&T(n.inspect)&&n.inspect!==e.inspect&&(!n.constructor||n.constructor.prototype!==n)){var o=n.inspect(r,t);return m(o)||(o=c(t,o,r)),o}var i=function(t,e){if(v(e))return t.stylize("undefined","undefined");if(m(e)){var n="'"+JSON.stringify(e).replace(/^"|"$/g,"").replace(/'/g,"\\'").replace(/\\"/g,'"')+"'";return t.stylize(n,"string")}if(g(e))return t.stylize(""+e,"number");if(d(e))return t.stylize(""+e,"boolean");if(y(e))return t.stylize("null","null")}(t,n);if(i)return i;var a=Object.keys(n),u=function(t){var e={};return t.forEach((function(t,n){e[t]=!0})),e}(a);if(t.showHidden&&(a=Object.getOwnPropertyNames(n)),x(n)&&(a.indexOf("message")>=0||a.indexOf("description")>=0))return f(n);if(0===a.length){if(T(n)){var s=n.name?": "+n.name:"";return t.stylize("[Function"+s+"]","special")}if(b(n))return t.stylize(RegExp.prototype.toString.call(n),"regexp");if(w(n))return t.stylize(Date.prototype.toString.call(n),"date");if(x(n))return f(n)}var l,_="",O=!1,S=["{","}"];(h(n)&&(O=!0,S=["[","]"]),T(n))&&(_=" [Function"+(n.name?": "+n.name:"")+"]");return b(n)&&(_=" "+RegExp.prototype.toString.call(n)),w(n)&&(_=" "+Date.prototype.toUTCString.call(n)),x(n)&&(_=" 
"+f(n)),0!==a.length||O&&0!=n.length?r<0?b(n)?t.stylize(RegExp.prototype.toString.call(n),"regexp"):t.stylize("[Object]","special"):(t.seen.push(n),l=O?function(t,e,n,r,o){for(var i=[],a=0,u=e.length;a=0&&0,t+e.replace(/\u001b\[\d\d?m/g,"").length+1}),0)>60)return n[0]+(""===e?"":e+"\n ")+" "+t.join(",\n ")+" "+n[1];return n[0]+e+" "+t.join(", ")+" "+n[1]}(l,_,S)):S[0]+_+S[1]}function f(t){return"["+Error.prototype.toString.call(t)+"]"}function p(t,e,n,r,o,i){var a,u,s;if((s=Object.getOwnPropertyDescriptor(e,o)||{value:e[o]}).get?u=s.set?t.stylize("[Getter/Setter]","special"):t.stylize("[Getter]","special"):s.set&&(u=t.stylize("[Setter]","special")),D(r,o)||(a="["+o+"]"),u||(t.seen.indexOf(s.value)<0?(u=y(n)?c(t,s.value,null):c(t,s.value,n-1)).indexOf("\n")>-1&&(u=i?u.split("\n").map((function(t){return" "+t})).join("\n").substr(2):"\n"+u.split("\n").map((function(t){return" "+t})).join("\n")):u=t.stylize("[Circular]","special")),v(a)){if(i&&o.match(/^\d+$/))return u;(a=JSON.stringify(""+o)).match(/^"([a-zA-Z_][a-zA-Z_0-9]*)"$/)?(a=a.substr(1,a.length-2),a=t.stylize(a,"name")):(a=a.replace(/'/g,"\\'").replace(/\\"/g,'"').replace(/(^"|"$)/g,"'"),a=t.stylize(a,"string"))}return a+": "+u}function h(t){return Array.isArray(t)}function d(t){return"boolean"==typeof t}function y(t){return null===t}function g(t){return"number"==typeof t}function m(t){return"string"==typeof t}function v(t){return void 0===t}function b(t){return _(t)&&"[object RegExp]"===O(t)}function _(t){return"object"==typeof t&&null!==t}function w(t){return _(t)&&"[object Date]"===O(t)}function x(t){return _(t)&&("[object Error]"===O(t)||t instanceof Error)}function T(t){return"function"==typeof t}function O(t){return Object.prototype.toString.call(t)}function S(t){return t<10?"0"+t.toString(10):t.toString(10)}e.debuglog=function(n){if(v(i)&&(i=t.env.NODE_DEBUG||""),n=n.toUpperCase(),!a[n])if(new RegExp("\\b"+n+"\\b","i").test(i)){var r=t.pid;a[n]=function(){var t=e.format.apply(e,arguments);console.error("%s %d: %s",n,r,t)}}else a[n]=function(){};return a[n]},e.inspect=u,u.colors={bold:[1,22],italic:[3,23],underline:[4,24],inverse:[7,27],white:[37,39],grey:[90,39],black:[30,39],blue:[34,39],cyan:[36,39],green:[32,39],magenta:[35,39],red:[31,39],yellow:[33,39]},u.styles={special:"cyan",number:"yellow",boolean:"yellow",undefined:"grey",null:"bold",string:"green",date:"magenta",regexp:"red"},e.isArray=h,e.isBoolean=d,e.isNull=y,e.isNullOrUndefined=function(t){return null==t},e.isNumber=g,e.isString=m,e.isSymbol=function(t){return"symbol"==typeof t},e.isUndefined=v,e.isRegExp=b,e.isObject=_,e.isDate=w,e.isError=x,e.isFunction=T,e.isPrimitive=function(t){return null===t||"boolean"==typeof t||"number"==typeof t||"string"==typeof t||"symbol"==typeof t||void 0===t},e.isBuffer=n(176);var P=["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"];function A(){var t=new Date,e=[S(t.getHours()),S(t.getMinutes()),S(t.getSeconds())].join(":");return[t.getDate(),P[t.getMonth()],e].join(" ")}function D(t,e){return Object.prototype.hasOwnProperty.call(t,e)}e.log=function(){console.log("%s - %s",A(),e.format.apply(e,arguments))},e.inherits=n(177),e._extend=function(t,e){if(!e||!_(e))return t;for(var n=Object.keys(e),r=n.length;r--;)t[n[r]]=e[n[r]];return t};var E="undefined"!=typeof Symbol?Symbol("util.promisify.custom"):void 0;function I(t,e){if(!t){var n=new Error("Promise was rejected with a falsy value");n.reason=t,t=n}return e(t)}e.promisify=function(t){if("function"!=typeof t)throw new TypeError('The "original" argument 
must be of type Function');if(E&&t[E]){var e;if("function"!=typeof(e=t[E]))throw new TypeError('The "util.promisify.custom" argument must be of type Function');return Object.defineProperty(e,E,{value:e,enumerable:!1,writable:!1,configurable:!0}),e}function e(){for(var e,n,r=new Promise((function(t,r){e=t,n=r})),o=[],i=0;i0&&o[o.length-1])||6!==i[0]&&2!==i[0])){a=0;continue}if(3===i[0]&&(!o||i[1]>o[0]&&i[1]=t.length&&(t=void 0),{value:t&&t[r++],done:!t}}};throw new TypeError(e?"Object is not iterable.":"Symbol.iterator is not defined.")};Object.defineProperty(e,"__esModule",{value:!0}),e.Backend=void 0;var a=new Map;function u(t){return r(this,void 0,void 0,(function(){var e,n,r;return o(this,(function(o){switch(o.label){case 0:return void 0!==(e=onnx.backend)[t]&&function(t){var e=t;if("initialize"in e&&"function"==typeof e.initialize&&"createSessionHandler"in e&&"function"==typeof e.createSessionHandler&&"dispose"in e&&"function"==typeof e.dispose)return!0;return!1}(e[t])?e[t].disabled?[3,3]:(n=e[t],"object"==typeof(r=n.initialize())&&"then"in r?[4,r]:[3,2]):[3,3];case 1:r=o.sent(),o.label=2;case 2:if(r)return a.set(t,n),[2,n];o.label=3;case 3:return[2,void 0]}}))}))}e.Backend=function t(e){return r(this,void 0,void 0,(function(){var n,r,s,l,c,f,p,h,d;return o(this,(function(o){switch(o.label){case 0:return e?[3,1]:[2,t(["webgl","wasm","cpu"])];case 1:n="string"==typeof e?[e]:e,o.label=2;case 2:o.trys.push([2,7,8,9]),r=i(n),s=r.next(),o.label=3;case 3:return s.done?[3,6]:(l=s.value,(c=a.get(l))?[2,c]:[4,u(l)]);case 4:if(f=o.sent())return[2,f];o.label=5;case 5:return s=r.next(),[3,3];case 6:return[3,9];case 7:return p=o.sent(),h={error:p},[3,9];case 8:try{s&&!s.done&&(d=r.return)&&d.call(r)}finally{if(h)throw h.error}return[7];case 9:throw new Error("no available backend to use")}}))}))}},function(t,e,n){"use strict";var r=this&&this.__awaiter||function(t,e,n,r){return new(n||(n=Promise))((function(o,i){function a(t){try{s(r.next(t))}catch(t){i(t)}}function u(t){try{s(r.throw(t))}catch(t){i(t)}}function s(t){var e;t.done?o(t.value):(e=t.value,e instanceof n?e:new n((function(t){t(e)}))).then(a,u)}s((r=r.apply(t,e||[])).next())}))},o=this&&this.__generator||function(t,e){var n,r,o,i,a={label:0,sent:function(){if(1&o[0])throw o[1];return o[1]},trys:[],ops:[]};return i={next:u(0),throw:u(1),return:u(2)},"function"==typeof Symbol&&(i[Symbol.iterator]=function(){return this}),i;function u(i){return function(u){return function(i){if(n)throw new TypeError("Generator is already executing.");for(;a;)try{if(n=1,r&&(o=2&i[0]?r.return:i[0]?r.throw||((o=r.return)&&o.call(r),0):r.next)&&!(o=o.call(r,i[1])).done)return o;switch(r=0,o&&(i=[2&i[0],o.value]),i[0]){case 0:case 1:o=i;break;case 4:return a.label++,{value:i[1],done:!1};case 5:a.label++,r=i[1],i=[0];continue;case 7:i=a.ops.pop(),a.trys.pop();continue;default:if(!(o=a.trys,(o=o.length>0&&o[o.length-1])||6!==i[0]&&2!==i[0])){a=0;continue}if(3===i[0]&&(!o||i[1]>o[0]&&i[1]=t.length&&(t=void 0),{value:t&&t[r++],done:!t}}};throw new TypeError(e?"Object is not iterable.":"Symbol.iterator is not defined.")},a=this&&this.__read||function(t,e){var n="function"==typeof Symbol&&t[Symbol.iterator];if(!n)return t;var r,o,i=n.call(t),a=[];try{for(;(void 0===e||e-- >0)&&!(r=i.next()).done;)a.push(r.value)}catch(t){o={error:t}}finally{try{r&&!r.done&&(n=i.return)&&n.call(i)}finally{if(o)throw o.error}}return a},u=this&&this.__spread||function(){for(var 
t=[],e=0;e=3");this._opsets=n.opsetImport.map((function(t){return{domain:t.domain,version:i.LongUtil.longToNumber(t.version)}})),this._graph=o.Graph.from(n.graph,e)},Object.defineProperty(t.prototype,"graph",{get:function(){return this._graph},enumerable:!1,configurable:!0}),Object.defineProperty(t.prototype,"opsets",{get:function(){return this._opsets},enumerable:!1,configurable:!0}),t}();e.Model=a},function(t,e,n){"use strict";var r=this&&this.__values||function(t){var e="function"==typeof Symbol&&Symbol.iterator,n=e&&t[e],r=0;if(n)return n.call(t);if(t&&"number"==typeof t.length)return{next:function(){return t&&r>=t.length&&(t=void 0),{value:t&&t[r++],done:!t}}};throw new TypeError(e?"Object is not iterable.":"Symbol.iterator is not defined.")};Object.defineProperty(e,"__esModule",{value:!0}),e.Graph=void 0;var o=n(182),i=n(1),a=n(0);e.Graph={from:function(t,e){return new l(t,e)}};var u=function(){function t(t){this._from=void 0,this._to=[],this.tensor=void 0,this.type=void 0,t&&(this.type=a.ProtoUtil.tensorValueTypeFromProto(t.type.tensorType))}return Object.defineProperty(t.prototype,"from",{get:function(){return this._from},enumerable:!1,configurable:!0}),Object.defineProperty(t.prototype,"to",{get:function(){return this._to},enumerable:!1,configurable:!0}),t}(),s=function(t){this.name=t.name,this.opType=t.opType,this.inputs=[],this.outputs=[],this.attributes=new o.Attribute(t.attribute),this.executeNode=!0},l=function(){function t(t,e){if(!t)throw new TypeError("graph is empty");this.buildGraph(t),this.transformGraph(e),this.checkIsAcyclic()}return t.prototype.getInputIndices=function(){return this._allInputIndices},t.prototype.getInputNames=function(){return this._allInputNames},t.prototype.getOutputIndices=function(){return this._allOutputIndices},t.prototype.getOutputNames=function(){return this._allOutputNames},t.prototype.getValues=function(){return this._allData},t.prototype.getNodes=function(){return this._nodes},t.prototype.buildGraph=function(t){var e,n,o,l,c,f,p,h,d,y,g,m,v=new Map;this._allData=[],this._allInputIndices=[],this._allInputNames=[],this._allOutputIndices=[],this._allOutputNames=[],this._nodes=[];var b=new Map;if(!t.input)throw new Error("missing information in graph: input");var _=[];try{for(var w=r(t.input),x=w.next();!x.done;x=w.next()){var T=x.value;if(v.has(T.name))throw new Error("duplicated input name: "+T.name);var O=this._allData.push(new u(T))-1;v.set(T.name,O),_.push(T.name)}}catch(t){e={error:t}}finally{try{x&&!x.done&&(n=w.return)&&n.call(w)}finally{if(e)throw e.error}}if(!t.initializer)throw new Error("missing information in graph: initializer");try{for(var S=r(t.initializer),P=S.next();!P.done;P=S.next()){T=P.value;var A=v.get(T.name);if(void 0===A){var D=new u;D.type={shape:{dims:a.ProtoUtil.tensorDimsFromProto(T.dims)},tensorType:a.ProtoUtil.tensorDataTypeFromProto(T.dataType)},A=this._allData.push(D)-1,v.set(T.name,A)}this._allData[A]._from=-1,this._allData[A].tensor=i.Tensor.fromProto(T)}}catch(t){o={error:t}}finally{try{P&&!P.done&&(l=S.return)&&l.call(S)}finally{if(o)throw o.error}}for(T=0;T0;)o()},t.prototype.transformGraph=function(t){this.removeAllIdentityNodes(),this.removeAllDropoutNodes(),t&&t.transformGraph(this),this.finalizeGraph()},t.prototype.finalizeGraph=function(){for(var t,e=this,n=0,r=function(r){if(!o._nodes[r].executeNode)return n++,o._nodes[r].outputs.forEach((function(t){e._allData[t]._from=-2})),o._nodes.splice(r,1),r--,t=r,"continue";n>0&&(o._nodes[r].inputs.forEach((function(t){var 
o=e._allData[t]._to.indexOf(r+n);-1!==o&&(e._allData[t]._to[o]=r)})),o._nodes[r].outputs.forEach((function(t){e._allData[t]._from&&e._allData[t]._from===r+n&&(e._allData[t]._from=r)}))),t=r},o=this,i=0;i0){var r=-1;void 0!==s._allData[t].from&&-1!==s._allData[t].from?-1!==(r=s._nodes[s._allData[t].from].outputs.indexOf(t+n))&&(s._nodes[s._allData[t].from].outputs[r]=t):-1!==(r=s._allInputIndices.indexOf(t+n))&&(s._allInputIndices[r]=t),s._allData[t].to.forEach((function(o){-1!==(r=e._nodes[o].inputs.indexOf(t+n))&&(e._nodes[o].inputs[r]=t)})),0===s._allData[t].to.length&&-1!==(r=s._allOutputIndices.indexOf(t+n))&&(s._allOutputIndices[r]=t)}a=t},s=this;for(i=0;i1)throw new Error("Node deletion with multiple inputs is not supported. ");if(o.outputs.length>1)for(var i=1;i0)throw new Error("Node deletion with more than one output connected to other nodes is not supported. ");o.executeNode=!1;var a=o.inputs[0],u=o.outputs[0],s=this._allData[u].to,l=this._allData[a].to.indexOf(t);if(-1===l)throw new Error("The Value object doesn't have the current Node in it's 'to' property ");this._allData[a].to.splice(l,1),this._allData[u]._to=[];var c=this._allOutputIndices.indexOf(u);if(-1!==c&&(this._allOutputIndices[c]=a),s&&s.length>0)try{for(var f=r(s),p=f.next();!p.done;p=f.next()){var h=p.value,d=this._nodes[h].inputs.indexOf(u);if(-1===d)throw new Error("The Node object doesn't have the output Value in it's 'inputs' property ");this._nodes[h].inputs[d]=a,this._allData[a].to.push(h)}}catch(t){e={error:t}}finally{try{p&&!p.done&&(n=f.return)&&n.call(f)}finally{if(e)throw e.error}}},t.prototype.removeAllDropoutNodes=function(){var t,e,n=0;try{for(var o=r(this._nodes),i=o.next();!i.done;i=o.next()){var a=i.value;if("Dropout"===a.opType){if(1!==a.inputs.length)throw new Error("Dropout nodes should only contain one input. 
");if(1!==a.outputs.length&&2!==a.outputs.length)throw new Error("Dropout nodes should contain either 1 or 2 output(s)");if(2===a.outputs.length&&0!==this._allData[a.outputs[1]]._to.length)throw new Error("Dropout nodes's second output should not be referenced by other nodes");this.deleteNode(n)}n++}}catch(e){t={error:e}}finally{try{i&&!i.done&&(e=o.return)&&e.call(o)}finally{if(t)throw t.error}}},t.prototype.removeAllIdentityNodes=function(){var t,e,n=0;try{for(var o=r(this._nodes),i=o.next();!i.done;i=o.next()){"Identity"===i.value.opType&&this.deleteNode(n),n++}}catch(e){t={error:e}}finally{try{i&&!i.done&&(e=o.return)&&e.call(o)}finally{if(t)throw t.error}}},t}()},function(t,e,n){"use strict";(function(t){var r=this&&this.__values||function(t){var e="function"==typeof Symbol&&Symbol.iterator,n=e&&t[e],r=0;if(n)return n.call(t);if(t&&"number"==typeof t.length)return{next:function(){return t&&r>=t.length&&(t=void 0),{value:t&&t[r++],done:!t}}};throw new TypeError(e?"Object is not iterable.":"Symbol.iterator is not defined.")},o=this&&this.__importDefault||function(t){return t&&t.__esModule?t:{default:t}};Object.defineProperty(e,"__esModule",{value:!0}),e.Attribute=void 0;var i=o(n(13)),a=n(9),u=n(1),s=n(0),l=function(){function e(t){var n,o;if(this._attributes=new Map,null!=t){try{for(var i=r(t),a=i.next();!a.done;a=i.next()){var u=a.value;this._attributes.set(u.name,[e.getValue(u),e.getType(u)])}}catch(t){n={error:t}}finally{try{a&&!a.done&&(o=i.return)&&o.call(i)}finally{if(n)throw n.error}}if(this._attributes.size 注:`r` 为LoRA 维数大小,`p` 为前缀词表大小,`l` 为微调层数,`ex/s` 为每秒训练的样本数。`gradient_accumulation_steps` 参数设置为 `1`。上述结果均来自于单个 Tesla V100 GPU,仅供参考。 - -## 微调 ChatGLM 的例子 - -### 训练结果 - -我们使用整个 `alpaca_gpt4_zh` 数据集微调 ChatGLM 模型,使用秩为 8 的 LoRA 方法,使用默认超参数进行单轮训练。下图为训练损失变化曲线。 - -![训练损失](assets/trainer_state.jpg) - -### 评估结果 - -我们选择 `alpaca_gpt4_zh` 数据集中的前一百条数据来评估微调后的 ChatGLM 模型,并计算 BLEU 和中文 ROUGE 分数。下表为评估结果。 - -| 分数 | 原版模型 | FZ (l=2) | PT (p=16) | LoRA (r=8) | -| ------- | -------- | ----- | ----- | ----------------- | -| BLEU-4 | 15.75 | 16.85 | 16.06 | 17.01 (**+1.26**) | -| Rouge-1 | 34.51 | 36.62 | 34.80 | 36.77 (**+2.26**) | -| Rouge-2 | 15.11 | 17.04 | 15.32 | 16.83 (**+1.72**) | -| Rouge-l | 26.18 | 28.17 | 26.35 | 28.86 (**+2.68**) | -| 训练参数 | / | 4.35% | 0.06% | 0.06% | - -> FZ:Freeze 微调,PT:P-Tuning V2 微调(为了与 LoRA 公平比较,我们使用了 `pre_seq_len=16`),训练参数:可训练参数占全部参数的百分比。 - -## 和现有类似项目的比较 - -- [THUDM/ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B/tree/main/ptuning) - - ChatGLM 基于 [P-Tuning v2](https://github.com/THUDM/P-tuning-v2) 微调的官方实现,使用了 [ADGEN](https://aclanthology.org/D19-1321.pdf) 数据集。 - - 本仓库的代码实现绝大部分参考该项目。我们进一步实现了 [LoRA](https://arxiv.org/abs/2106.09685) 微调方法。此外,我们**动态地**将每个批处理数据中的序列进行填充,而非将其填充到模型的最大长度,此改进可以加速模型训练。 -- [mymusise/ChatGLM-Tuning](https://github.com/mymusise/ChatGLM-Tuning) - - ChatGLM 基于 [LoRA](https://arxiv.org/abs/2106.09685) 微调的非官方实现,使用了 [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) 数据集。 - - 我们借鉴了该项目的一些想法。我们的训练脚本将数据预处理部分**集成**至训练脚本中,以避免事先生成预处理后的数据。 -- [ssbuild/chatglm_finetuning](https://github.com/ssbuild/chatglm_finetuning) - - ChatGLM 基于多种微调方法的非官方实现,使用了 [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) 数据集。 - - 我们的训练脚本**全部**基于 [Huggingface transformers](https://github.com/huggingface/transformers) 框架实现,不依赖于额外的 [deep_training](https://github.com/ssbuild/deep_training) 框架。 -- [lich99/ChatGLM-finetune-LoRA](https://github.com/lich99/ChatGLM-finetune-LoRA) - - ChatGLM 基于 [LoRA](https://arxiv.org/abs/2106.09685) 微调的非官方实现,使用了 [Stanford 
-
-## TODO
-
-- [ ] Use [LangChain](https://github.com/hwchase17/langchain) to make it easy to build applications that draw on external knowledge on top of fine-tuned ChatGLM models.
-- [ ] Implement alignment algorithms to align the model with human intent.
-  - [x] [RLHF](https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat)
-  - [ ] [RRHF](https://github.com/GanjinZero/RRHF)
-  - [ ] [RAFT](https://github.com/OptimalScale/LMFlow)
-- [ ] Add more [Chinese datasets](https://github.com/brightmart/nlp_chinese_corpus).
-  - [x] [BELLE](https://github.com/LianjiaTech/BELLE)
-  - [ ] [pCLUE](https://github.com/CLUEbenchmark/pCLUE)
-  - [ ] [CLUECorpus](https://github.com/CLUEbenchmark/CLUECorpus2020)
-  - [x] [GuanacoDataset](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)
-  - [x] [FireflyDataset](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)
-- [ ] Add datasets generated with [ChatGPT](https://openai.com/blog/chatgpt) and [GPT-4](https://openai.com/research/gpt-4).
-  - [ ] [Baize](https://github.com/project-baize/baize-chatbot)
-  - [x] [GPT-4-LLM](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
-- [x] Implement the parameter-freezing and P-Tuning fine-tuning methods.
-- [x] Support multi-GPU training. (LoRA is not supported yet.)
-- [x] Add a model evaluation script. (It can be slow! Increasing the batch size speeds it up significantly.)
-- [x] Resume training from checkpoints.
-- [x] Quantized fine-tuning.
-- [x] Write a hands-on guide for fine-tuning ChatGLM models with this framework.
-- [ ] Combine with model editing techniques (e.g., [MEND](https://arxiv.org/abs/2110.11309)).
-- [ ] Add the [OpenAssistant conversation dataset](https://huggingface.co/datasets/OpenAssistant/oasst1) for supervised fine-tuning and intent alignment.
-- [ ] Add the high-quality open-source Chinese instruction dataset [COIG](https://huggingface.co/datasets/BAAI/COIG).
-
-## License
-
-The code in this repository is released under the [Apache-2.0](LICENSE) license. Use of the ChatGLM-6B model must follow its [model license](https://github.com/THUDM/ChatGLM-6B/blob/main/MODEL_LICENSE).
-
-## Citation
-
-If you find this project helpful, please consider citing it as
-
-```bibtex
-@Misc{chatglm-efficient-tuning,
-  title = {ChatGLM Efficient Tuning},
-  author = {hiyouga},
-  howpublished = {\url{https://github.com/hiyouga/ChatGLM-Efficient-Tuning}},
-  year = {2023}
-}
-```
-
-## Acknowledgements
-
-This project benefits from [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B), [ChatGLM-Tuning](https://github.com/mymusise/ChatGLM-Tuning) and [yuanzhoulvpi2017/zero_nlp](https://github.com/yuanzhoulvpi2017/zero_nlp); many thanks to their authors for their work.
diff --git a/spaces/bigjoker/stable-diffusion-webui/scripts/poor_mans_outpainting.py b/spaces/bigjoker/stable-diffusion-webui/scripts/poor_mans_outpainting.py deleted file mode 100644 index d39f61c1073376eae210d955ac1e9eba836402da..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/scripts/poor_mans_outpainting.py +++ /dev/null @@ -1,146 +0,0 @@ -import math - -import modules.scripts as scripts -import gradio as gr -from PIL import Image, ImageDraw - -from modules import images, processing, devices -from modules.processing import Processed, process_images -from modules.shared import opts, cmd_opts, state - - -class Script(scripts.Script): - def title(self): - return "Poor man's outpainting" - - def show(self, is_img2img): - return is_img2img - - def ui(self, is_img2img): - if not is_img2img: - return None - - pixels = gr.Slider(label="Pixels to expand", minimum=8, maximum=256, step=8, value=128, elem_id=self.elem_id("pixels")) - mask_blur = gr.Slider(label='Mask blur', minimum=0, maximum=64, step=1, value=4, elem_id=self.elem_id("mask_blur")) - 
inpainting_fill = gr.Radio(label='Masked content', choices=['fill', 'original', 'latent noise', 'latent nothing'], value='fill', type="index", elem_id=self.elem_id("inpainting_fill")) - direction = gr.CheckboxGroup(label="Outpainting direction", choices=['left', 'right', 'up', 'down'], value=['left', 'right', 'up', 'down'], elem_id=self.elem_id("direction")) - - return [pixels, mask_blur, inpainting_fill, direction] - - def run(self, p, pixels, mask_blur, inpainting_fill, direction): - initial_seed = None - initial_info = None - - p.mask_blur = mask_blur * 2 - p.inpainting_fill = inpainting_fill - p.inpaint_full_res = False - - left = pixels if "left" in direction else 0 - right = pixels if "right" in direction else 0 - up = pixels if "up" in direction else 0 - down = pixels if "down" in direction else 0 - - init_img = p.init_images[0] - target_w = math.ceil((init_img.width + left + right) / 64) * 64 - target_h = math.ceil((init_img.height + up + down) / 64) * 64 - - if left > 0: - left = left * (target_w - init_img.width) // (left + right) - if right > 0: - right = target_w - init_img.width - left - - if up > 0: - up = up * (target_h - init_img.height) // (up + down) - - if down > 0: - down = target_h - init_img.height - up - - img = Image.new("RGB", (target_w, target_h)) - img.paste(init_img, (left, up)) - - mask = Image.new("L", (img.width, img.height), "white") - draw = ImageDraw.Draw(mask) - draw.rectangle(( - left + (mask_blur * 2 if left > 0 else 0), - up + (mask_blur * 2 if up > 0 else 0), - mask.width - right - (mask_blur * 2 if right > 0 else 0), - mask.height - down - (mask_blur * 2 if down > 0 else 0) - ), fill="black") - - latent_mask = Image.new("L", (img.width, img.height), "white") - latent_draw = ImageDraw.Draw(latent_mask) - latent_draw.rectangle(( - left + (mask_blur//2 if left > 0 else 0), - up + (mask_blur//2 if up > 0 else 0), - mask.width - right - (mask_blur//2 if right > 0 else 0), - mask.height - down - (mask_blur//2 if down > 0 else 0) - ), fill="black") - - devices.torch_gc() - - grid = images.split_grid(img, tile_w=p.width, tile_h=p.height, overlap=pixels) - grid_mask = images.split_grid(mask, tile_w=p.width, tile_h=p.height, overlap=pixels) - grid_latent_mask = images.split_grid(latent_mask, tile_w=p.width, tile_h=p.height, overlap=pixels) - - p.n_iter = 1 - p.batch_size = 1 - p.do_not_save_grid = True - p.do_not_save_samples = True - - work = [] - work_mask = [] - work_latent_mask = [] - work_results = [] - - for (y, h, row), (_, _, row_mask), (_, _, row_latent_mask) in zip(grid.tiles, grid_mask.tiles, grid_latent_mask.tiles): - for tiledata, tiledata_mask, tiledata_latent_mask in zip(row, row_mask, row_latent_mask): - x, w = tiledata[0:2] - - if x >= left and x+w <= img.width - right and y >= up and y+h <= img.height - down: - continue - - work.append(tiledata[2]) - work_mask.append(tiledata_mask[2]) - work_latent_mask.append(tiledata_latent_mask[2]) - - batch_count = len(work) - print(f"Poor man's outpainting will process a total of {len(work)} images tiled as {len(grid.tiles[0][2])}x{len(grid.tiles)}.") - - state.job_count = batch_count - - for i in range(batch_count): - p.init_images = [work[i]] - p.image_mask = work_mask[i] - p.latent_mask = work_latent_mask[i] - - state.job = f"Batch {i + 1} out of {batch_count}" - processed = process_images(p) - - if initial_seed is None: - initial_seed = processed.seed - initial_info = processed.info - - p.seed = processed.seed + 1 - work_results += processed.images - - - image_index = 0 - for y, h, row in grid.tiles: 
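-            # Reassembly pass (descriptive note): walk the same tile grid in the
-            # same order as the work loop above and paste each processed edge tile
-            # back into its slot (tiledata[2]); interior tiles satisfy the same
-            # bounds test used above, so they are skipped and keep their content.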
- for tiledata in row: - x, w = tiledata[0:2] - - if x >= left and x+w <= img.width - right and y >= up and y+h <= img.height - down: - continue - - tiledata[2] = work_results[image_index] if image_index < len(work_results) else Image.new("RGB", (p.width, p.height)) - image_index += 1 - - combined_image = images.combine_grid(grid) - - if opts.samples_save: - images.save_image(combined_image, p.outpath_samples, "", initial_seed, p.prompt, opts.grid_format, info=initial_info, p=p) - - processed = Processed(p, [combined_image], initial_seed, initial_info) - - return processed - diff --git a/spaces/bingbing520/ChatGPT/run_Windows.bat b/spaces/bingbing520/ChatGPT/run_Windows.bat deleted file mode 100644 index 4c18f9ccaeea0af972301ffdf48778641221f76d..0000000000000000000000000000000000000000 --- a/spaces/bingbing520/ChatGPT/run_Windows.bat +++ /dev/null @@ -1,5 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... - -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" diff --git a/spaces/bioriAsaeru/text-to-voice/Furios Si Iute 6 Download !EXCLUSIVE! 141.md b/spaces/bioriAsaeru/text-to-voice/Furios Si Iute 6 Download !EXCLUSIVE! 141.md deleted file mode 100644 index a103a6407f9e9b41ab0cb4dd5dc8231798404803..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Furios Si Iute 6 Download !EXCLUSIVE! 141.md +++ /dev/null @@ -1,17 +0,0 @@ -
-How to Download Furios Si Iute 6 (Fast & Furious 6) for Free
-
-Furios Si Iute 6 (Fast & Furious 6) is a 2013 action film starring Vin Diesel, Paul Walker, Dwayne Johnson, Michelle Rodriguez, and Jason Statham. It is the sixth installment in the Fast & Furious franchise, and follows the team of street racers as they take on a mercenary group led by a former ally.
-
-If you are a fan of the series and want to watch Furios Si Iute 6 online or offline, you might be wondering how to download it for free. In this article, we will show you some ways to do that legally and safely.
-
-Furios Si Iute 6 Download 141
-
-DOWNLOAD »»» https://urloso.com/2uyPXo
-
-Option 1: Stream Furios Si Iute 6 Online
-
-One of the easiest ways to watch Furios Si Iute 6 is to stream it online from a reputable platform. Many streaming services offer Furios Si Iute 6 in their catalog, such as Netflix, Amazon Prime Video, Hulu, HBO Max, Peacock, and more. Depending on your location and subscription plan, you might be able to access Furios Si Iute 6 for free or for a low fee.
-
-To stream Furios Si Iute 6 online, you will need a stable internet connection and a compatible device, such as a smart TV, laptop, tablet, or smartphone. You will also need to create an account on the streaming platform of your choice and log in with your credentials. Then, you can search for Furios Si Iute 6 in the search bar and click on the play button to start watching.
-
-Option 2: Download Furios Si Iute 6 Offline
-
-Another way to watch Furios Si Iute 6 is to download it and save it on your device. This way, you can watch it anytime and anywhere without worrying about internet speed or data usage. However, downloading Furios Si Iute 6 offline requires more caution and responsibility than streaming it online.
-
-First of all, you should avoid downloading Furios Si Iute 6 from illegal or pirated sources, such as torrent sites, file-sharing platforms, or unauthorized websites. These sources might contain malware, viruses, or spyware that can harm your device or compromise your personal information. They might also violate copyright laws and expose you to legal consequences.
-
-Instead, you should download Furios Si Iute 6 from legal and authorized sources, such as the official website of the film or the streaming platforms that offer it. Some of these sources might allow you to download Furios Si Iute 6 for free or for a small fee as part of your subscription plan. Others might require you to pay a one-time fee or rent the film for a limited period.
-
-To download Furios Si Iute 6 offline, you will need enough storage space and battery life on your device. You will also need to follow the instructions on the source website or app and select the download option. Then, you can transfer the downloaded file to your preferred device and watch it with a media player.
-
-Conclusion
-
-Furios Si Iute 6 (Fast & Furious 6) is a thrilling and entertaining action film that you can watch online or offline. However, you should always respect the rights of the creators and distributors of the film and avoid downloading it from illegal or pirated sources. Instead, use legal and authorized sources that offer Furios Si Iute 6 safely and in high quality.
-
      \ No newline at end of file diff --git a/spaces/birdortyedi/instagram-filter-removal/modeling/build.py b/spaces/birdortyedi/instagram-filter-removal/modeling/build.py deleted file mode 100644 index 2928af83b2b34b8cbcaa1e1be7146d9fb58e5e7c..0000000000000000000000000000000000000000 --- a/spaces/birdortyedi/instagram-filter-removal/modeling/build.py +++ /dev/null @@ -1,19 +0,0 @@ -from modeling.ifrnet import IFRNet, Discriminator, PatchDiscriminator, MLP -from modeling.benchmark import UNet - - -def build_model(args): - if args.MODEL.NAME.lower() == "ifrnet": - net = IFRNet(base_n_channels=args.MODEL.IFR.NUM_CHANNELS, destyler_n_channels=args.MODEL.IFR.DESTYLER_CHANNELS) - mlp = MLP(base_n_channels=args.MODEL.IFR.NUM_CHANNELS, num_class=args.MODEL.NUM_CLASS) - elif args.MODEL.NAME.lower() == "ifr-no-aux": - net = IFRNet(base_n_channels=args.MODEL.IFR.NUM_CHANNELS, destyler_n_channels=args.MODEL.IFR.DESTYLER_CHANNELS) - mlp = None - else: - raise NotImplementedError - return net, mlp - - -def build_discriminators(args): - return Discriminator(base_n_channels=args.MODEL.D.NUM_CHANNELS), PatchDiscriminator(base_n_channels=args.MODEL.D.NUM_CHANNELS) - diff --git a/spaces/bla/tranny/App/utils.py b/spaces/bla/tranny/App/utils.py deleted file mode 100644 index 5c183658e672e58568f587b26ee4d91df686573a..0000000000000000000000000000000000000000 --- a/spaces/bla/tranny/App/utils.py +++ /dev/null @@ -1,17 +0,0 @@ -from App.Users.Model import User -from App.Post.Model import Post -import asyncio -from fastapi import HTTPException - - -async def get_user_and_post(content): - try: - # user = None - # post = await Post.objects.get(id=content.postId) - # print(post.id) - user, post = await asyncio.gather( - *[User.objects.get(id=content.userId), Post.objects.get(id=content.postId)] - ) - except: - raise HTTPException(status_code=400, detail="Invalid data") - return user, post diff --git a/spaces/bnkkkkknn/bnkkkkknn/README.md b/spaces/bnkkkkknn/bnkkkkknn/README.md deleted file mode 100644 index 07e5d717ced7d72151d574fc3fcdc17e4c03b573..0000000000000000000000000000000000000000 --- a/spaces/bnkkkkknn/bnkkkkknn/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bnkkkkknn -emoji: ⚡ -colorFrom: purple -colorTo: gray -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bobu5/SD-webui-controlnet-docker/Dockerfile b/spaces/bobu5/SD-webui-controlnet-docker/Dockerfile deleted file mode 100644 index 95dd8620e127dfa3471d2fc93e86f0918c56ee24..0000000000000000000000000000000000000000 --- a/spaces/bobu5/SD-webui-controlnet-docker/Dockerfile +++ /dev/null @@ -1,124 +0,0 @@ -FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu22.04 - -ENV DEBIAN_FRONTEND noninteractive -ENV PYTHONUNBUFFERED=1 -ENV PIP_DISABLE_PIP_VERSION_CHECK=1 -ENV PIP_NO_CACHE_DIR=1 - -# OS setup -RUN apt-get update -y \ - && apt-get upgrade -y \ - && apt-get install -y \ - libgl1 \ - libglib2.0-0 \ - curl \ - vim \ - wget \ - git \ - git-lfs \ - tzdata \ - bash \ - ca-certificates \ - libreadline8 \ - bzip2 \ - psmisc \ - procps \ - netbase \ - openssh-client \ - libsqlite3-dev \ - python3-pip \ - python3-venv \ - python-is-python3 \ - build-essential \ - libssl-dev \ - libffi-dev \ - aria2 \ - \ - && pip3 install --upgrade pip \ - \ - && git lfs install \ - \ - && apt-get clean autoclean \ - && apt-get autoremove --yes \ - && rm -rf /var/lib/apt/lists/* - -# OS timezone setting (UTC) -RUN echo "UTC" > /etc/timezone -ENV TZ=UTC 
- -# Poetry for Python packages -RUN curl -sSL https://install.python-poetry.org | POETRY_HOME=/usr/local/poetry python3 - --yes \ - && ln -s /usr/local/poetry/bin/poetry /usr/bin/poetry \ - \ - && poetry config virtualenvs.create false \ - && poetry config virtualenvs.in-project false - -# Create non-root user -ENV ENV="/etc/profile" -RUN adduser --disabled-password --gecos '' user && \ - mkdir -p /app && \ - chown -R user:user /app && \ - printf "\n. /etc/profile\n" >> /home/user/.profile \ - printf "\n. /etc/profile\n" >> /home/user/.bashrc - -# Sets up virtualenv for dependencies -ENV VIRTUAL_ENV="/opt/venv" -ENV VIRTUAL_ENV_DISABLE_PROMPT=1 -ENV POETRY_ACTIVE=1 -ENV PATH="$VIRTUAL_ENV/bin:$PATH" -RUN echo "export PATH=$PATH" >> /home/user/.bashrc \ - && python3 -m venv $VIRTUAL_ENV \ - && /opt/venv/bin/pip install --upgrade --no-cache-dir pip \ - && chown -R user:user /opt/venv - -# Run as non-root user -USER user -WORKDIR /app - -# Installation of basic Python dependencies specified in pyproject.toml -COPY --chown=user:user pyproject.toml poetry.lock /app/ -RUN poetry install - -# AUTOMATIC1111' WebUI -RUN git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui /app/stable-diffusion-webui \ - && (cd /app/stable-diffusion-webui && git checkout a9fed7c364061ae6efb37f797b6b522cb3cf7aa2) - -# Deforum extension -RUN git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui \ - && (cd /app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui && git checkout 2366bfdb47c226df0d14e712445414e459febad3) - -# Images Browser WebUI extension -RUN git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser \ - && (cd /app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser && git checkout a42c7a30181636a05815e62426d5eff4d3340529) - -# CiviTAI Browser WebUI extension -RUN git clone https://github.com/Vetchems/sd-civitai-browser /app/stable-diffusion-webui/extensions/sd-civitai-browser \ - && (cd /app/stable-diffusion-webui/extensions/sd-civitai-browser && git checkout b25a5daf7df3f6340d3e243d533228d8ade5288d) - -# Additional Networks WebUI extension -RUN git clone https://github.com/kohya-ss/sd-webui-additional-networks /app/stable-diffusion-webui/extensions/sd-webui-additional-networks \ - && (cd /app/stable-diffusion-webui/extensions/sd-webui-additional-networks && git checkout d2758b6c8e2e8e956865a87b31fd74d3d7c010cb) \ - && mkdir -p /app/stable-diffusion-webui/extensions/sd-webui-additional-networks/models/LoRA - -# ControlNet WebUI extension -RUN git clone https://github.com/Mikubill/sd-webui-controlnet /app/stable-diffusion-webui/extensions/sd-webui-controlnet \ - && (cd /app/stable-diffusion-webui/extensions/sd-webui-controlnet && git checkout 274dd5df217a03e059e9cf052447aece81bbd1cf) \ - && mkdir -p /app/stable-diffusion-webui/models/ControlNet - -# Prepare WebUI environment -WORKDIR /app/stable-diffusion-webui -RUN /opt/venv/bin/python launch.py --exit --skip-torch-cuda-test --xformers - -# Patch WebUI -RUN sed -i -e 's/ show_progress=False,/ show_progress=True,/g' modules/ui.py -RUN sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' webui.py -RUN sed -i -e 's/ outputs=\[/queue=False, &/g' modules/ui.py -RUN sed -i -e 's/ queue=False, / /g' modules/ui.py - -# Copy startup scripts -COPY --chown=user:user run.py on_start.sh config.json ui-config.json 
shared-config.json shared-ui-config.json header_patch.py /app/stable-diffusion-webui/ -RUN chmod +x on_start.sh - -EXPOSE 7860 - -CMD ["/opt/venv/bin/python", "run.py", "--listen", "--ui-config-file", "ui-config.json", "--ui-settings-file", "config.json", "--disable-console-progressbars", "--cors-allow-origins", "huggingface.co,hf.space", "--no-progressbar-hiding", "--enable-console-prompts", "--no-download-sd-model", "--api", "--skip-version-check"] diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/__init__.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/__init__.py deleted file mode 100644 index e5737294ae16c0de52085b8dcf6825c348f617e4..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Diffusion grids.""" diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/musicgen/musicgen_base_32khz.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/musicgen/musicgen_base_32khz.py deleted file mode 100644 index 4e364614537e426f21c18a2c2a9d94b3babce051..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/musicgen/musicgen_base_32khz.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from ._explorers import LMExplorer -from ...environment import AudioCraftEnvironment - - -@LMExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=32, partition=partitions) - launcher.bind_(solver='musicgen/musicgen_base_32khz') - # replace this by the desired music dataset - launcher.bind_(dset='internal/music_400k_32khz') - - fsdp = {'autocast': False, 'fsdp.use': True} - medium = {'model/lm/model_scale': 'medium'} - large = {'model/lm/model_scale': 'large'} - - cfg_low = {'classifier_free_guidance.training_dropout': 0.2} - wd_low = {'conditioners.description.t5.word_dropout': 0.2} - - adam = {'optim.optimizer': 'adamw', 'optim.lr': 1e-4} - - launcher.bind_(fsdp) - - launcher.slurm_(gpus=32).bind_(label='32gpus') - with launcher.job_array(): - sub = launcher.bind() - sub() - - launcher.slurm_(gpus=64).bind_(label='64gpus') - with launcher.job_array(): - sub = launcher.bind() - sub(medium, adam) - - launcher.slurm_(gpus=96).bind_(label='96gpus') - with launcher.job_array(): - sub = launcher.bind() - sub(large, cfg_low, wd_low, adam, {'optim.max_norm': 3}) diff --git a/spaces/bright1/Sepsis-Prediction-API/src/utils.py b/spaces/bright1/Sepsis-Prediction-API/src/utils.py deleted file mode 100644 index 635abd3158fe7259d109961ec20df4ef6c8bfe45..0000000000000000000000000000000000000000 --- a/spaces/bright1/Sepsis-Prediction-API/src/utils.py +++ /dev/null @@ -1,104 +0,0 @@ -import pandas as pd -import numpy as np -import pickle -from io import StringIO -from functools import lru_cache - -@lru_cache(maxsize=100, ) -def load_pickle(filename): - with open(filename, 'rb') as file: # read file - contents = pickle.load(file) # load contents of file - return contents - - - -def feature_engineering(data): - data['Insurance'] = data['Insurance'].astype(int).astype(str) # run function to create new features - # create features - data['All-Product'] = data['Blood Work Result-4'] * data['Blood Work Result-1']* data['Blood Work Result-2']* data['Blood Work Result-3'] * data['Plasma Glucose']* data['Blood Pressure'] * data['Age']* data['Body Mass Index'] # Multiply all numerical features - - all_labels =['{0}-{1}'.format(i, i+500000000000) for i in range(0, round(2714705253292.0312),500000000000)] - data['All-Product_range'] = pd.cut(data['All-Product'], bins=(range(0, 3500000000000, 500000000000)), right=False, labels=all_labels) - - age_labels =['{0}-{1}'.format(i, i+20) for i in range(0, 83,20)] - data['Age Group'] = pd.cut(data['Age'], bins=(range(0, 120, 20)), right=False, labels=age_labels) # create categorical features for age - - labels =['{0}-{1}'.format(i, i+30) for i in range(0, round(67.1),30)] - data['BMI_range'] = pd.cut(data['Body Mass Index'], bins=(range(0, 120, 30)), right=False, labels=labels) # create categorical features for bodey mass index - - bp_labels =['{0}-{1}'.format(i, i+50) for i in range(0, round(122),50)] - data['BP_range'] = pd.cut(data['Blood Pressure'], bins=(range(0, 200, 50)), right=False, labels=bp_labels) # create categorical features for blood pressure - - labels =['{0}-{1}'.format(i, i+7) for i in range(0, round(17),7)] - data['PG_range'] = pd.cut(data['Plasma Glucose'], bins=(range(0, 28, 7)), right=False, labels=labels) # create categorical features for plasma glucose - - data.drop(columns=['Blood Pressure', 'Age', 'Body Mass Index','Plasma Glucose', 'All-Product', 'Blood Work Result-3', 'Blood Work Result-2'], inplace=True) # drop unused columns - - - - -def 
combine_cats_nums(transformed_data, full_pipeline): - cat_features = full_pipeline.named_transformers_['categorical']['cat_encoder'].get_feature_names() # get the feature from the categorical transformer - num_features = ['Blood Work Result-1', 'Blood Work Result-4'] - columns_ = np.concatenate([num_features, cat_features]) # concatenate numerical and categorical features - prepared_data = pd.DataFrame(transformed_data, columns=columns_) # create a dataframe from the transformed data - prepared_data = prepared_data.rename(columns={'x0_0':'Insurance_0', 'x0_1': 'Insurance_1'}) # rename columns - - -def make_prediction(data, transformer, model): - new_columns = return_columns() - dict_new_old_cols = dict(zip(data.columns, new_columns)) # create a dict of original columns and new columns - data = data.rename(columns=dict_new_old_cols) - feature_engineering(data) # create new features - transformed_data = transformer.transform(data) # transform the data using the transformer - combine_cats_nums(transformed_data, transformer)# create a dataframe from the transformed data - # make prediction - label = model.predict(transformed_data) # make a prediction - probs = model.predict_proba(transformed_data) # predit sepsis status for inputs - return label, probs.max() - - - -# function to create a new column 'Bmi' -def process_label(row): - if row['Predicted Label'] == 1: - return 'Sepsis status is Positive' - elif row['Predicted Label'] == 0: - return 'Sepsis status is Negative' - - -def return_columns(): - # create new columns - new_columns = ['Plasma Glucose','Blood Work Result-1', 'Blood Pressure', - 'Blood Work Result-2', 'Blood Work Result-3', 'Body Mass Index', - 'Blood Work Result-4', 'Age', 'Insurance'] - return new_columns - - -def process_json_csv(contents, file_type, valid_formats): - - # Read the file contents as a byte string - contents = contents.decode() # Decode the byte string to a regular string - new_columns = return_columns() # return new_columns - # Process the uploaded file - if file_type == valid_formats[0]: - data = pd.read_csv(StringIO(contents)) # read csv files - elif file_type == valid_formats[1]: - data = pd.read_json(contents) # read json file - data = data.drop(columns=['ID']) # drop ID column - dict_new_old_cols = dict(zip(data.columns, new_columns)) # get dict of new and old cols - data = data.rename(columns=dict_new_old_cols) # rename colums to appropriate columns - return data - - -def output_batch(data1, labels): - data_labels = pd.DataFrame(labels, columns=['Predicted Label']) # convert label into a dataframe - data_labels['Predicted Label'] = data_labels.apply(process_label, axis=1) # change label to understanding strings - results_list = [] # create an empty lits - x = data1.to_dict('index') # convert datafram into dictionary - y = data_labels.to_dict('index') # convert datafram into dictionary - for i in range(len(y)): - results_list.append({i:{'inputs': x[i], 'output':y[i]}}) # append input and labels - - final_dict = {'results': results_list} - return final_dict \ No newline at end of file diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/__init__.py deleted file mode 100644 index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/ViTDet/configs/LVIS/cascade_mask_rcnn_vitdet_h_100ep.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/ViTDet/configs/LVIS/cascade_mask_rcnn_vitdet_h_100ep.py deleted file mode 100644 index 68bec5734456c9bbc813becd5da83bc2a0f90932..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/ViTDet/configs/LVIS/cascade_mask_rcnn_vitdet_h_100ep.py +++ /dev/null @@ -1,51 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.data.detection_utils import get_fed_loss_cls_weights -from detectron2.layers import ShapeSpec -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.matcher import Matcher -from detectron2.modeling.roi_heads import FastRCNNOutputLayers, FastRCNNConvFCHead, CascadeROIHeads - -from .mask_rcnn_vitdet_h_100ep import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -# arguments that don't exist for Cascade R-CNN -[model.roi_heads.pop(k) for k in ["box_head", "box_predictor", "proposal_matcher"]] - -model.roi_heads.update( - _target_=CascadeROIHeads, - num_classes=1203, - box_heads=[ - L(FastRCNNConvFCHead)( - input_shape=ShapeSpec(channels=256, height=7, width=7), - conv_dims=[256, 256, 256, 256], - fc_dims=[1024], - conv_norm="LN", - ) - for _ in range(3) - ], - box_predictors=[ - L(FastRCNNOutputLayers)( - input_shape=ShapeSpec(channels=1024), - box2box_transform=L(Box2BoxTransform)(weights=(w1, w1, w2, w2)), - num_classes="${...num_classes}", - test_score_thresh=0.02, - test_topk_per_image=300, - cls_agnostic_bbox_reg=True, - use_sigmoid_ce=True, - use_fed_loss=True, - get_fed_loss_cls_weights=lambda: get_fed_loss_cls_weights( - dataloader.train.dataset.names, 0.5 - ), - ) - for (w1, w2) in [(10, 5), (20, 10), (30, 15)] - ], - proposal_matchers=[ - L(Matcher)(thresholds=[th], labels=[0, 1], allow_low_quality_matches=False) - for th in [0.5, 0.6, 0.7] - ], -) diff --git a/spaces/caffeinum/VToonify/vtoonify/train_vtoonify_d.py b/spaces/caffeinum/VToonify/vtoonify/train_vtoonify_d.py deleted file mode 100644 index 0c83e02d46097dad72b5e9f8ed239299d9da320a..0000000000000000000000000000000000000000 --- a/spaces/caffeinum/VToonify/vtoonify/train_vtoonify_d.py +++ /dev/null @@ -1,515 +0,0 @@ -import os -#os.environ['CUDA_VISIBLE_DEVICES'] = "0" -import argparse -import math -import random - -import numpy as np -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils import data -import torch.distributed as dist -from torchvision import transforms, utils -from tqdm import tqdm -from PIL import Image -from util import * - -from model.stylegan import lpips -from model.stylegan.model import Generator, Downsample -from model.vtoonify import VToonify, ConditionalDiscriminator -from model.bisenet.model import BiSeNet -from model.simple_augment import random_apply_affine -from model.stylegan.distributed import ( - get_rank, - synchronize, - reduce_loss_dict, - reduce_sum, - get_world_size, -) - -class TrainOptions(): - def __init__(self): - - self.parser = argparse.ArgumentParser(description="Train VToonify-D") - self.parser.add_argument("--iter", type=int, default=2000, help="total training iterations") - self.parser.add_argument("--batch", type=int, default=8, help="batch sizes for each gpus") - self.parser.add_argument("--lr", type=float, default=0.0001, help="learning rate") - self.parser.add_argument("--local_rank", type=int, default=0, help="local rank for distributed 
training") - self.parser.add_argument("--start_iter", type=int, default=0, help="start iteration") - self.parser.add_argument("--save_every", type=int, default=30000, help="interval of saving a checkpoint") - self.parser.add_argument("--save_begin", type=int, default=30000, help="when to start saving a checkpoint") - self.parser.add_argument("--log_every", type=int, default=200, help="interval of saving a checkpoint") - - self.parser.add_argument("--adv_loss", type=float, default=0.01, help="the weight of adv loss") - self.parser.add_argument("--grec_loss", type=float, default=0.1, help="the weight of mse recontruction loss") - self.parser.add_argument("--perc_loss", type=float, default=0.01, help="the weight of perceptual loss") - self.parser.add_argument("--tmp_loss", type=float, default=1.0, help="the weight of temporal consistency loss") - self.parser.add_argument("--msk_loss", type=float, default=0.0005, help="the weight of attention mask loss") - - self.parser.add_argument("--fix_degree", action="store_true", help="use a fixed style degree") - self.parser.add_argument("--fix_style", action="store_true", help="use a fixed style image") - self.parser.add_argument("--fix_color", action="store_true", help="use the original color (no color transfer)") - self.parser.add_argument("--exstyle_path", type=str, default='./checkpoint/cartoon/refined_exstyle_code.npy', help="path of the extrinsic style code") - self.parser.add_argument("--style_id", type=int, default=26, help="the id of the style image") - self.parser.add_argument("--style_degree", type=float, default=0.5, help="style degree for VToonify-D") - - self.parser.add_argument("--encoder_path", type=str, default=None, help="path to the pretrained encoder model") - self.parser.add_argument("--direction_path", type=str, default='./checkpoint/directions.npy', help="path to the editing direction latents") - self.parser.add_argument("--stylegan_path", type=str, default='./checkpoint/cartoon/generator.pt', help="path to the stylegan model") - self.parser.add_argument("--faceparsing_path", type=str, default='./checkpoint/faceparsing.pth', help="path of the face parsing model") - self.parser.add_argument("--style_encoder_path", type=str, default='./checkpoint/encoder.pt', help="path of the style encoder") - - self.parser.add_argument("--name", type=str, default='vtoonify_d_cartoon', help="saved model name") - self.parser.add_argument("--pretrain", action="store_true", help="if true, only pretrain the encoder") - - def parse(self): - self.opt = self.parser.parse_args() - if self.opt.encoder_path is None: - self.opt.encoder_path = os.path.join('./checkpoint/', self.opt.name, 'pretrain.pt') - args = vars(self.opt) - if self.opt.local_rank == 0: - print('Load options') - for name, value in sorted(args.items()): - print('%s: %s' % (str(name), str(value))) - return self.opt - - -# pretrain E of vtoonify. -# We train E so that its the last-layer feature matches the original 8-th-layer input feature of G1 -# See Model initialization in Sec. 
4.2.2 for the detail -def pretrain(args, generator, g_optim, g_ema, parsingpredictor, down, directions, styles, device): - pbar = range(args.iter) - - if get_rank() == 0: - pbar = tqdm(pbar, initial=args.start_iter, dynamic_ncols=True, smoothing=0.01) - - recon_loss = torch.tensor(0.0, device=device) - loss_dict = {} - - if args.distributed: - g_module = generator.module - else: - g_module = generator - - accum = 0.5 ** (32 / (10 * 1000)) - - requires_grad(g_module.encoder, True) - - for idx in pbar: - i = idx + args.start_iter - - if i > args.iter: - print("Done!") - break - - # during pretraining, the last 11 layers of DualStyleGAN (for color transfer) is not used. - # so args.fix_color is not used. the last 11 elements in weight are not used. - if args.fix_degree: - d_s = args.style_degree - else: - d_s = 0 if i <= args.iter / 4.0 else np.random.rand(1)[0] - weight = [d_s] * 18 - - # sample pre-saved w''=E_s(s) - if args.fix_style: - style = styles[args.style_id:args.style_id+1].repeat(args.batch,1,1) - else: - style = styles[torch.randint(0, styles.size(0), (args.batch,))] - - with torch.no_grad(): - # during pretraining, no geometric transformations are applied. - noise_sample = torch.randn(args.batch, 512).cuda() - ws_ = g_ema.stylegan().style(noise_sample).unsqueeze(1).repeat(1,18,1) # random w - ws_[:, 3:7] += directions[torch.randint(0, directions.shape[0], (args.batch,)), 3:7] # w'=w+n - img_gen, _ = g_ema.stylegan()([ws_], input_is_latent=True, truncation=0.5, truncation_latent=0) - img_gen = torch.clamp(img_gen, -1, 1).detach() # x'' - img_gen512 = down(img_gen.detach()) - img_gen256 = down(img_gen512.detach()) # image part of x''_down - mask512 = parsingpredictor(2*torch.clamp(img_gen512, -1, 1))[0] - real_input = torch.cat((img_gen256, down(mask512)/16.0), dim=1) # x''_down - # f_G1^(8)(w', w'', d_s) - real_feat, real_skip = g_ema.generator([ws_], style, input_is_latent=True, return_feat=True, - truncation=0.5, truncation_latent=0, use_res=True, interp_weights=weight) - - real_input = real_input.detach() - real_feat = real_feat.detach() - real_skip = real_skip.detach() - - # f_E^(last)(x''_down, w'', d_s) - fake_feat, fake_skip = generator(real_input, style, d_s, return_feat=True) - - # L_E in Eq.(8) - recon_loss = F.mse_loss(fake_feat, real_feat) + F.mse_loss(fake_skip, real_skip) - - loss_dict["emse"] = recon_loss - - generator.zero_grad() - recon_loss.backward() - g_optim.step() - - accumulate(g_ema.encoder, g_module.encoder, accum) - - loss_reduced = reduce_loss_dict(loss_dict) - - emse_loss_val = loss_reduced["emse"].mean().item() - - if get_rank() == 0: - pbar.set_description( - ( - f"iter: {i:d}; emse: {emse_loss_val:.3f}" - ) - ) - - if ((i+1) >= args.save_begin and (i+1) % args.save_every == 0) or (i+1) == args.iter: - if (i+1) == args.iter: - savename = f"checkpoint/%s/pretrain.pt"%(args.name) - else: - savename = f"checkpoint/%s/pretrain-%05d.pt"%(args.name, i+1) - torch.save( - { - #"g": g_module.encoder.state_dict(), - "g_ema": g_ema.encoder.state_dict(), - }, - savename, - ) - - -# generate paired data and train vtoonify, see Sec. 
4.2.2 for the detail -def train(args, generator, discriminator, g_optim, d_optim, g_ema, percept, parsingpredictor, down, pspencoder, directions, styles, device): - pbar = range(args.iter) - - if get_rank() == 0: - pbar = tqdm(pbar, initial=args.start_iter, smoothing=0.01, ncols=130, dynamic_ncols=False) - - d_loss = torch.tensor(0.0, device=device) - g_loss = torch.tensor(0.0, device=device) - grec_loss = torch.tensor(0.0, device=device) - gfeat_loss = torch.tensor(0.0, device=device) - temporal_loss = torch.tensor(0.0, device=device) - gmask_loss = torch.tensor(0.0, device=device) - loss_dict = {} - - surffix = '_s' - if args.fix_style: - surffix += '%03d'%(args.style_id) - surffix += '_d' - if args.fix_degree: - surffix += '%1.1f'%(args.style_degree) - if not args.fix_color: - surffix += '_c' - - if args.distributed: - g_module = generator.module - d_module = discriminator.module - - else: - g_module = generator - d_module = discriminator - - accum = 0.5 ** (32 / (10 * 1000)) - - for idx in pbar: - i = idx + args.start_iter - - if i > args.iter: - print("Done!") - break - - # sample style degree - if args.fix_degree or idx == 0 or i == 0: - d_s = args.style_degree - else: - d_s = np.random.randint(0,6) / 5.0 - if args.fix_color: - weight = [d_s] * 7 + [0] * 11 - else: - weight = [d_s] * 7 + [1] * 11 - # style degree condition for discriminator - degree_label = torch.zeros(args.batch, 1).to(device) + d_s - - # style index condition for discriminator - style_ind = torch.randint(0, styles.size(0), (args.batch,)) - if args.fix_style or idx == 0 or i == 0: - style_ind = style_ind * 0 + args.style_id - # sample pre-saved E_s(s) - style = styles[style_ind] - - with torch.no_grad(): - noise_sample = torch.randn(args.batch, 512).cuda() - wc = g_ema.stylegan().style(noise_sample).unsqueeze(1).repeat(1,18,1) # random w - wc[:, 3:7] += directions[torch.randint(0, directions.shape[0], (args.batch,)), 3:7] # w'=w+n - wc = wc.detach() - xc, _ = g_ema.stylegan()([wc], input_is_latent=True, truncation=0.5, truncation_latent=0) - xc = torch.clamp(xc, -1, 1).detach() # x'' - if not args.fix_color and args.fix_style: # only transfer this fixed style's color - xl = style.clone() - else: - xl = pspencoder(F.adaptive_avg_pool2d(xc, 256)) - xl = g_ema.zplus2wplus(xl) # E_s(x''_down) - xl = torch.cat((style[:,0:7], xl[:,7:18]), dim=1).detach() # w'' = concatenate E_s(s) and E_s(x''_down) - xs, _ = g_ema.generator([wc], xl, input_is_latent=True, - truncation=0.5, truncation_latent=0, use_res=True, interp_weights=weight) - xs = torch.clamp(xs, -1, 1).detach() # y'=G1(w', w'', d_s, d_c) - # apply color jitter to w'. we fuse w' of the current iteration with w' of the last iteration - if idx > 0 and i >= (args.iter/2.0) and (not args.fix_color and not args.fix_style): - wcfuse = wc.clone() - wcfuse[:,7:] = wc_[:,7:] * (i/(args.iter/2.0)-1) + wcfuse[:,7:] * (2-i/(args.iter/2.0)) - xc, _ = g_ema.stylegan()([wcfuse], input_is_latent=True, truncation=0.5, truncation_latent=0) - xc = torch.clamp(xc, -1, 1).detach() # x' - wc_ = wc.clone() # wc_ is the w' in the last iteration - # during training, random geometric transformations are applied. 
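-        # Descriptive note: at this point each training pair has been synthesized
-        # on the fly. xc is the photo-domain face rendered from the edited latent
-        # w' = w + n, and xs is its toonified counterpart y' = G1(w', w'', d_s)
-        # from DualStyleGAN. The same random affine warp is applied to both below,
-        # so the input/output pair stays geometrically aligned even though the
-        # geometry itself is randomized.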
- imgs, _ = random_apply_affine(torch.cat((xc.detach(),xs), dim=1), 0.2, None) - real_input1024 = imgs[:,0:3].detach() # image part of x - real_input512 = down(real_input1024).detach() - real_input256 = down(real_input512).detach() - mask512 = parsingpredictor(2*real_input512)[0] - mask256 = down(mask512).detach() - mask = F.adaptive_avg_pool2d(mask512, 1024).detach() # parsing part of x - real_output = imgs[:,3:].detach() # y - real_input = torch.cat((real_input256, mask256/16.0), dim=1) # x_down - # for log, sample a fixed input-output pair (x_down, y, w'', d_s) - if idx == 0 or i == 0: - samplein = real_input.clone().detach() - sampleout = real_output.clone().detach() - samplexl = xl.clone().detach() - sampleds = d_s - - ###### This part is for training discriminator - - requires_grad(g_module.encoder, False) - requires_grad(g_module.fusion_out, False) - requires_grad(g_module.fusion_skip, False) - requires_grad(discriminator, True) - - fake_output = generator(real_input, xl, d_s) - fake_pred = discriminator(F.adaptive_avg_pool2d(fake_output, 256), degree_label, style_ind) - real_pred = discriminator(F.adaptive_avg_pool2d(real_output, 256), degree_label, style_ind) - - # L_adv in Eq.(3) - d_loss = d_logistic_loss(real_pred, fake_pred) * args.adv_loss - loss_dict["d"] = d_loss - - discriminator.zero_grad() - d_loss.backward() - d_optim.step() - - ###### This part is for training generator (encoder and fusion modules) - - requires_grad(g_module.encoder, True) - requires_grad(g_module.fusion_out, True) - requires_grad(g_module.fusion_skip, True) - requires_grad(discriminator, False) - - fake_output, m_Es = generator(real_input, xl, d_s, return_mask=True) - fake_pred = discriminator(F.adaptive_avg_pool2d(fake_output, 256), degree_label, style_ind) - - # L_adv in Eq.(3) - g_loss = g_nonsaturating_loss(fake_pred) * args.adv_loss - # L_rec in Eq.(2) - grec_loss = F.mse_loss(fake_output, real_output) * args.grec_loss - gfeat_loss = percept(F.adaptive_avg_pool2d(fake_output, 512), # 1024 will out of memory - F.adaptive_avg_pool2d(real_output, 512)).sum() * args.perc_loss # 256 will get blurry output - - # L_msk in Eq.(9) - gmask_loss = torch.tensor(0.0, device=device) - if not args.fix_degree or args.msk_loss > 0: - for jj, m_E in enumerate(m_Es): - gd_s = (1 - d_s) ** 2 * 0.9 + 0.1 - gmask_loss += F.relu(torch.mean(m_E)-gd_s) * args.msk_loss - - loss_dict["g"] = g_loss - loss_dict["gr"] = grec_loss - loss_dict["gf"] = gfeat_loss - loss_dict["msk"] = gmask_loss - - w = random.randint(0,1024-896) - h = random.randint(0,1024-896) - crop_input = torch.cat((real_input1024[:,:,w:w+896,h:h+896], mask[:,:,w:w+896,h:h+896]/16.0), dim=1).detach() - crop_input = down(down(crop_input)) - crop_fake_output = fake_output[:,:,w:w+896,h:h+896] - fake_crop_output = generator(crop_input, xl, d_s) - # L_tmp in Eq.(4), gradually increase the weight of L_tmp - temporal_loss = ((fake_crop_output-crop_fake_output)**2).mean() * max(idx/(args.iter/2.0)-1, 0) * args.tmp_loss - loss_dict["tp"] = temporal_loss - - generator.zero_grad() - (g_loss + grec_loss + gfeat_loss + temporal_loss + gmask_loss).backward() - g_optim.step() - - accumulate(g_ema.encoder, g_module.encoder, accum) - accumulate(g_ema.fusion_out, g_module.fusion_out, accum) - accumulate(g_ema.fusion_skip, g_module.fusion_skip, accum) - - loss_reduced = reduce_loss_dict(loss_dict) - - d_loss_val = loss_reduced["d"].mean().item() - g_loss_val = loss_reduced["g"].mean().item() - gr_loss_val = loss_reduced["gr"].mean().item() - gf_loss_val = 
loss_reduced["gf"].mean().item() - tmp_loss_val = loss_reduced["tp"].mean().item() - msk_loss_val = loss_reduced["msk"].mean().item() - - if get_rank() == 0: - pbar.set_description( - ( - f"iter: {i:d}; advd: {d_loss_val:.3f}; advg: {g_loss_val:.3f}; mse: {gr_loss_val:.3f}; " - f"perc: {gf_loss_val:.3f}; tmp: {tmp_loss_val:.3f}; msk: {msk_loss_val:.3f}" - ) - ) - - if i == 0 or (i+1) % args.log_every == 0 or (i+1) == args.iter: - with torch.no_grad(): - g_ema.eval() - sample1 = g_ema(samplein, samplexl, sampleds) - if args.fix_degree: - sample = F.interpolate(torch.cat((sampleout, sample1), dim=0), 256) - else: - sample2 = g_ema(samplein, samplexl, d_s) - sample = F.interpolate(torch.cat((sampleout, sample1, sample2), dim=0), 256) - utils.save_image( - sample, - f"log/%s/%05d.jpg"%(args.name, (i+1)), - nrow=int(args.batch), - normalize=True, - range=(-1, 1), - ) - - if ((i+1) >= args.save_begin and (i+1) % args.save_every == 0) or (i+1) == args.iter: - if (i+1) == args.iter: - savename = f"checkpoint/%s/vtoonify%s.pt"%(args.name, surffix) - else: - savename = f"checkpoint/%s/vtoonify%s_%05d.pt"%(args.name, surffix, i+1) - torch.save( - { - #"g": g_module.state_dict(), - #"d": d_module.state_dict(), - "g_ema": g_ema.state_dict(), - }, - savename, - ) - - - -if __name__ == "__main__": - - device = "cuda" - parser = TrainOptions() - args = parser.parse() - if args.local_rank == 0: - print('*'*98) - if not os.path.exists("log/%s/"%(args.name)): - os.makedirs("log/%s/"%(args.name)) - if not os.path.exists("checkpoint/%s/"%(args.name)): - os.makedirs("checkpoint/%s/"%(args.name)) - - n_gpu = int(os.environ["WORLD_SIZE"]) if "WORLD_SIZE" in os.environ else 1 - args.distributed = n_gpu > 1 - - if args.distributed: - torch.cuda.set_device(args.local_rank) - torch.distributed.init_process_group(backend="nccl", init_method="env://") - synchronize() - - generator = VToonify(backbone = 'dualstylegan').to(device) - generator.apply(weights_init) - g_ema = VToonify(backbone = 'dualstylegan').to(device) - g_ema.eval() - - ckpt = torch.load(args.stylegan_path, map_location=lambda storage, loc: storage) - generator.generator.load_state_dict(ckpt["g_ema"], strict=False) - # load ModRes blocks of DualStyleGAN into the modified ModRes blocks (with dilation) - generator.res.load_state_dict(generator.generator.res.state_dict(), strict=False) - g_ema.generator.load_state_dict(ckpt["g_ema"], strict=False) - g_ema.res.load_state_dict(g_ema.generator.res.state_dict(), strict=False) - requires_grad(generator.generator, False) - requires_grad(generator.res, False) - requires_grad(g_ema.generator, False) - requires_grad(g_ema.res, False) - - if not args.pretrain: - generator.encoder.load_state_dict(torch.load(args.encoder_path, map_location=lambda storage, loc: storage)["g_ema"]) - # we initialize the fusion modules to map f_G \otimes f_E to f_G. 
- for k in generator.fusion_out: - k.conv.weight.data *= 0.01 - k.conv.weight[:,0:k.conv.weight.shape[0],1,1].data += torch.eye(k.conv.weight.shape[0]).cuda() - for k in generator.fusion_skip: - k.weight.data *= 0.01 - k.weight[:,0:k.weight.shape[0],1,1].data += torch.eye(k.weight.shape[0]).cuda() - - accumulate(g_ema.encoder, generator.encoder, 0) - accumulate(g_ema.fusion_out, generator.fusion_out, 0) - accumulate(g_ema.fusion_skip, generator.fusion_skip, 0) - - g_parameters = list(generator.encoder.parameters()) - if not args.pretrain: - g_parameters = g_parameters + list(generator.fusion_out.parameters()) + list(generator.fusion_skip.parameters()) - - g_optim = optim.Adam( - g_parameters, - lr=args.lr, - betas=(0.9, 0.99), - ) - - if args.distributed: - generator = nn.parallel.DistributedDataParallel( - generator, - device_ids=[args.local_rank], - output_device=args.local_rank, - broadcast_buffers=False, - find_unused_parameters=True, - ) - - parsingpredictor = BiSeNet(n_classes=19) - parsingpredictor.load_state_dict(torch.load(args.faceparsing_path, map_location=lambda storage, loc: storage)) - parsingpredictor.to(device).eval() - requires_grad(parsingpredictor, False) - - # we apply gaussian blur to the images to avoid flickers caused during downsampling - down = Downsample(kernel=[1, 3, 3, 1], factor=2).to(device) - requires_grad(down, False) - - directions = torch.tensor(np.load(args.direction_path)).to(device) - - # load style codes of DualStyleGAN - exstyles = np.load(args.exstyle_path, allow_pickle='TRUE').item() - if args.local_rank == 0 and not os.path.exists('checkpoint/%s/exstyle_code.npy'%(args.name)): - np.save('checkpoint/%s/exstyle_code.npy'%(args.name), exstyles, allow_pickle=True) - styles = [] - with torch.no_grad(): - for stylename in exstyles.keys(): - exstyle = torch.tensor(exstyles[stylename]).to(device) - exstyle = g_ema.zplus2wplus(exstyle) - styles += [exstyle] - styles = torch.cat(styles, dim=0) - - if not args.pretrain: - discriminator = ConditionalDiscriminator(256, use_condition=True, style_num = styles.size(0)).to(device) - - d_optim = optim.Adam( - discriminator.parameters(), - lr=args.lr, - betas=(0.9, 0.99), - ) - - if args.distributed: - discriminator = nn.parallel.DistributedDataParallel( - discriminator, - device_ids=[args.local_rank], - output_device=args.local_rank, - broadcast_buffers=False, - find_unused_parameters=True, - ) - - percept = lpips.PerceptualLoss(model="net-lin", net="vgg", use_gpu=device.startswith("cuda"), gpu_ids=[args.local_rank]) - requires_grad(percept.model.net, False) - - pspencoder = load_psp_standalone(args.style_encoder_path, device) - - if args.local_rank == 0: - print('Load models and data successfully loaded!') - - if args.pretrain: - pretrain(args, generator, g_optim, g_ema, parsingpredictor, down, directions, styles, device) - else: - train(args, generator, discriminator, g_optim, d_optim, g_ema, percept, parsingpredictor, down, pspencoder, directions, styles, device) diff --git a/spaces/cahya/indochat/app.py b/spaces/cahya/indochat/app.py deleted file mode 100644 index e83906e878ec6ac30dab9ca99815ea120fd6396c..0000000000000000000000000000000000000000 --- a/spaces/cahya/indochat/app.py +++ /dev/null @@ -1,93 +0,0 @@ -import gradio as gr -import os -from mtranslate import translate -import requests - -HF_AUTH_TOKEN = os.environ.get("HF_AUTH_TOKEN") -indochat_api = 'https://cahya-indonesian-whisperer.hf.space/api/text-generator/v1' -indochat_api_auth_token = os.getenv("INDOCHAT_API_AUTH_TOKEN", "") - -def 
get_answer(user_input, decoding_method, num_beams, top_k, top_p, temperature, repetition_penalty, penalty_alpha): - print(user_input, decoding_method, top_k, top_p, temperature, repetition_penalty, penalty_alpha) - headers = {'Authorization': 'Bearer ' + indochat_api_auth_token} - data = { - "model_name": "indochat-tiny", - "text": user_input, - "min_length": len(user_input) + 20, - "max_length": 200, - "decoding_method": decoding_method, - "num_beams": num_beams, - "top_k": top_k, - "top_p": top_p, - "temperature": temperature, - "seed": -1, - "repetition_penalty": repetition_penalty, - "penalty_alpha": penalty_alpha - } - r = requests.post(indochat_api, headers=headers, data=data) - if r.status_code == 200: - result = r.json() - answer = result["generated_text"] - user_input_en = translate(user_input, "en", "id") - answer_en = translate(answer, "en", "id") - return [(f"{user_input}\n", None), (answer, "")], \ - [(f"{user_input_en}\n", None), (answer_en, "")] - else: - return "Error: " + r.text - - -css = """ -#answer_id span {white-space: pre-line} -#answer_id span.label {display: none} -#answer_en span {white-space: pre-line} -#answer_en span.label {display: none} -""" - -with gr.Blocks(css=css) as demo: - with gr.Row(): - gr.Markdown("""## IndoChat - - A Prove of Concept of a multilingual Chatbot (in this case a bilingual, English and Indonesian), fine-tuned with - multilingual instructions dataset. The base model is a GPT2-Medium (340M params) which was pretrained with 75GB - of Indonesian and English dataset, where English part is only less than 1% of the whole dataset. - """) - with gr.Row(): - with gr.Column(): - user_input = gr.inputs.Textbox(placeholder="", - label="Ask me something in Indonesian or English", - default="Bagaimana cara mendidik anak supaya tidak berbohong?") - decoding_method = gr.inputs.Dropdown(["Beam Search", "Sampling", "Contrastive Search"], - default="Sampling", label="Decoding Method") - num_beams = gr.inputs.Slider(label="Number of beams for beam search", - default=1, minimum=1, maximum=10, step=1) - top_k = gr.inputs.Slider(label="Top K", - default=30, maximum=50, minimum=1, step=1) - top_p = gr.inputs.Slider(label="Top P", default=0.9, step=0.05, minimum=0.1, maximum=1.0) - temperature = gr.inputs.Slider(label="Temperature", default=0.5, step=0.05, minimum=0.1, maximum=1.0) - repetition_penalty = gr.inputs.Slider(label="Repetition Penalty", default=1.1, step=0.05, minimum=1.0, maximum=2.0) - penalty_alpha = gr.inputs.Slider(label="The penalty alpha for contrastive search", - default=0.5, step=0.05, minimum=0.05, maximum=1.0) - with gr.Row(): - button_generate_story = gr.Button("Submit") - with gr.Column(): - # generated_answer = gr.Textbox() - generated_answer = gr.HighlightedText( - elem_id="answer_id", - label="Generated Text", - combine_adjacent=True, - css="#htext span {white-space: pre-line}", - ).style(color_map={"": "blue", "-": "green"}) - generated_answer_en = gr.HighlightedText( - elem_id="answer_en", - label="Translation", - combine_adjacent=True, - ).style(color_map={"": "blue", "-": "green"}) - with gr.Row(): - gr.Markdown("![visitor badge](https://visitor-badge.glitch.me/badge?page_id=cahya_indochat)") - - button_generate_story.click(get_answer, - inputs=[user_input, decoding_method, num_beams, top_k, top_p, temperature, - repetition_penalty, penalty_alpha], - outputs=[generated_answer, generated_answer_en]) - -demo.launch(enable_queue=False) \ No newline at end of file diff --git a/spaces/caoyiming/vits-uma-genshin-honkai/text/__init__.py 
b/spaces/caoyiming/vits-uma-genshin-honkai/text/__init__.py deleted file mode 100644 index 663c4b6416affb53c9dc56dddbc8b2b65d4bf518..0000000000000000000000000000000000000000 --- a/spaces/caoyiming/vits-uma-genshin-honkai/text/__init__.py +++ /dev/null @@ -1,57 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence, clean_text - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/datasets/prepare_cocofied_lvis.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/datasets/prepare_cocofied_lvis.py deleted file mode 100644 index 245c88482a9e2405e5a912b5c560aed78a614a13..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/datasets/prepare_cocofied_lvis.py +++ /dev/null @@ -1,176 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. 
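The VITS text front-end above converts cleaned text to integer IDs by straight table lookup, silently dropping any symbol that is not in the table. A minimal sketch of that behavior, using a hypothetical four-symbol alphabet:

```python
symbols = ["_", " ", "a", "b"]  # hypothetical symbol set
_symbol_to_id = {s: i for i, s in enumerate(symbols)}

def to_ids(clean_text):
    # unknown symbols are skipped, exactly as in text_to_sequence above
    return [_symbol_to_id[s] for s in clean_text if s in _symbol_to_id]

print(to_ids("ab ba!"))  # [2, 3, 1, 3, 2] -- '!' is silently dropped
```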
- -import copy -import json -import os -from collections import defaultdict - -# This mapping is extracted from the official LVIS mapping: -# https://github.com/lvis-dataset/lvis-api/blob/master/data/coco_to_synset.json -COCO_SYNSET_CATEGORIES = [ - {"synset": "person.n.01", "coco_cat_id": 1}, - {"synset": "bicycle.n.01", "coco_cat_id": 2}, - {"synset": "car.n.01", "coco_cat_id": 3}, - {"synset": "motorcycle.n.01", "coco_cat_id": 4}, - {"synset": "airplane.n.01", "coco_cat_id": 5}, - {"synset": "bus.n.01", "coco_cat_id": 6}, - {"synset": "train.n.01", "coco_cat_id": 7}, - {"synset": "truck.n.01", "coco_cat_id": 8}, - {"synset": "boat.n.01", "coco_cat_id": 9}, - {"synset": "traffic_light.n.01", "coco_cat_id": 10}, - {"synset": "fireplug.n.01", "coco_cat_id": 11}, - {"synset": "stop_sign.n.01", "coco_cat_id": 13}, - {"synset": "parking_meter.n.01", "coco_cat_id": 14}, - {"synset": "bench.n.01", "coco_cat_id": 15}, - {"synset": "bird.n.01", "coco_cat_id": 16}, - {"synset": "cat.n.01", "coco_cat_id": 17}, - {"synset": "dog.n.01", "coco_cat_id": 18}, - {"synset": "horse.n.01", "coco_cat_id": 19}, - {"synset": "sheep.n.01", "coco_cat_id": 20}, - {"synset": "beef.n.01", "coco_cat_id": 21}, - {"synset": "elephant.n.01", "coco_cat_id": 22}, - {"synset": "bear.n.01", "coco_cat_id": 23}, - {"synset": "zebra.n.01", "coco_cat_id": 24}, - {"synset": "giraffe.n.01", "coco_cat_id": 25}, - {"synset": "backpack.n.01", "coco_cat_id": 27}, - {"synset": "umbrella.n.01", "coco_cat_id": 28}, - {"synset": "bag.n.04", "coco_cat_id": 31}, - {"synset": "necktie.n.01", "coco_cat_id": 32}, - {"synset": "bag.n.06", "coco_cat_id": 33}, - {"synset": "frisbee.n.01", "coco_cat_id": 34}, - {"synset": "ski.n.01", "coco_cat_id": 35}, - {"synset": "snowboard.n.01", "coco_cat_id": 36}, - {"synset": "ball.n.06", "coco_cat_id": 37}, - {"synset": "kite.n.03", "coco_cat_id": 38}, - {"synset": "baseball_bat.n.01", "coco_cat_id": 39}, - {"synset": "baseball_glove.n.01", "coco_cat_id": 40}, - {"synset": "skateboard.n.01", "coco_cat_id": 41}, - {"synset": "surfboard.n.01", "coco_cat_id": 42}, - {"synset": "tennis_racket.n.01", "coco_cat_id": 43}, - {"synset": "bottle.n.01", "coco_cat_id": 44}, - {"synset": "wineglass.n.01", "coco_cat_id": 46}, - {"synset": "cup.n.01", "coco_cat_id": 47}, - {"synset": "fork.n.01", "coco_cat_id": 48}, - {"synset": "knife.n.01", "coco_cat_id": 49}, - {"synset": "spoon.n.01", "coco_cat_id": 50}, - {"synset": "bowl.n.03", "coco_cat_id": 51}, - {"synset": "banana.n.02", "coco_cat_id": 52}, - {"synset": "apple.n.01", "coco_cat_id": 53}, - {"synset": "sandwich.n.01", "coco_cat_id": 54}, - {"synset": "orange.n.01", "coco_cat_id": 55}, - {"synset": "broccoli.n.01", "coco_cat_id": 56}, - {"synset": "carrot.n.01", "coco_cat_id": 57}, - {"synset": "frank.n.02", "coco_cat_id": 58}, - {"synset": "pizza.n.01", "coco_cat_id": 59}, - {"synset": "doughnut.n.02", "coco_cat_id": 60}, - {"synset": "cake.n.03", "coco_cat_id": 61}, - {"synset": "chair.n.01", "coco_cat_id": 62}, - {"synset": "sofa.n.01", "coco_cat_id": 63}, - {"synset": "pot.n.04", "coco_cat_id": 64}, - {"synset": "bed.n.01", "coco_cat_id": 65}, - {"synset": "dining_table.n.01", "coco_cat_id": 67}, - {"synset": "toilet.n.02", "coco_cat_id": 70}, - {"synset": "television_receiver.n.01", "coco_cat_id": 72}, - {"synset": "laptop.n.01", "coco_cat_id": 73}, - {"synset": "mouse.n.04", "coco_cat_id": 74}, - {"synset": "remote_control.n.01", "coco_cat_id": 75}, - {"synset": "computer_keyboard.n.01", "coco_cat_id": 76}, - {"synset": "cellular_telephone.n.01", 
"coco_cat_id": 77}, - {"synset": "microwave.n.02", "coco_cat_id": 78}, - {"synset": "oven.n.01", "coco_cat_id": 79}, - {"synset": "toaster.n.02", "coco_cat_id": 80}, - {"synset": "sink.n.01", "coco_cat_id": 81}, - {"synset": "electric_refrigerator.n.01", "coco_cat_id": 82}, - {"synset": "book.n.01", "coco_cat_id": 84}, - {"synset": "clock.n.01", "coco_cat_id": 85}, - {"synset": "vase.n.01", "coco_cat_id": 86}, - {"synset": "scissors.n.01", "coco_cat_id": 87}, - {"synset": "teddy.n.01", "coco_cat_id": 88}, - {"synset": "hand_blower.n.01", "coco_cat_id": 89}, - {"synset": "toothbrush.n.01", "coco_cat_id": 90}, -] - - -def cocofy_lvis(input_filename, output_filename): - """ - Filter LVIS instance segmentation annotations to remove all categories that are not included in - COCO. The new json files can be used to evaluate COCO AP using `lvis-api`. The category ids in - the output json are the incontiguous COCO dataset ids. - - Args: - input_filename (str): path to the LVIS json file. - output_filename (str): path to the COCOfied json file. - """ - - with open(input_filename, "r") as f: - lvis_json = json.load(f) - - lvis_annos = lvis_json.pop("annotations") - cocofied_lvis = copy.deepcopy(lvis_json) - lvis_json["annotations"] = lvis_annos - - # Mapping from lvis cat id to coco cat id via synset - lvis_cat_id_to_synset = {cat["id"]: cat["synset"] for cat in lvis_json["categories"]} - synset_to_coco_cat_id = {x["synset"]: x["coco_cat_id"] for x in COCO_SYNSET_CATEGORIES} - # Synsets that we will keep in the dataset - synsets_to_keep = set(synset_to_coco_cat_id.keys()) - coco_cat_id_with_instances = defaultdict(int) - - new_annos = [] - ann_id = 1 - for ann in lvis_annos: - lvis_cat_id = ann["category_id"] - synset = lvis_cat_id_to_synset[lvis_cat_id] - if synset not in synsets_to_keep: - continue - coco_cat_id = synset_to_coco_cat_id[synset] - new_ann = copy.deepcopy(ann) - new_ann["category_id"] = coco_cat_id - new_ann["id"] = ann_id - ann_id += 1 - new_annos.append(new_ann) - coco_cat_id_with_instances[coco_cat_id] += 1 - cocofied_lvis["annotations"] = new_annos - - for image in cocofied_lvis["images"]: - for key in ["not_exhaustive_category_ids", "neg_category_ids"]: - new_category_list = [] - for lvis_cat_id in image[key]: - synset = lvis_cat_id_to_synset[lvis_cat_id] - if synset not in synsets_to_keep: - continue - coco_cat_id = synset_to_coco_cat_id[synset] - new_category_list.append(coco_cat_id) - coco_cat_id_with_instances[coco_cat_id] += 1 - image[key] = new_category_list - - coco_cat_id_with_instances = set(coco_cat_id_with_instances.keys()) - - new_categories = [] - for cat in lvis_json["categories"]: - synset = cat["synset"] - if synset not in synsets_to_keep: - continue - coco_cat_id = synset_to_coco_cat_id[synset] - if coco_cat_id not in coco_cat_id_with_instances: - continue - new_cat = copy.deepcopy(cat) - new_cat["id"] = coco_cat_id - new_categories.append(new_cat) - cocofied_lvis["categories"] = new_categories - - with open(output_filename, "w") as f: - json.dump(cocofied_lvis, f) - print("{} is COCOfied and stored in {}.".format(input_filename, output_filename)) - - -if __name__ == "__main__": - dataset_dir = os.path.join(os.getenv("DETECTRON2_DATASETS", "datasets"), "lvis") - for s in ["lvis_v0.5_train", "lvis_v0.5_val"]: - print("Start COCOfing {}.".format(s)) - cocofy_lvis( - os.path.join(dataset_dir, "{}.json".format(s)), - os.path.join(dataset_dir, "{}_cocofied.json".format(s)), - ) diff --git a/spaces/chasemcdo/hf_localai/assets.go b/spaces/chasemcdo/hf_localai/assets.go 
deleted file mode 100644 index 1acff154b053245c7a38ac7dffcb9165f3570cfc..0000000000000000000000000000000000000000 --- a/spaces/chasemcdo/hf_localai/assets.go +++ /dev/null @@ -1,6 +0,0 @@ -package main - -import "embed" - -//go:embed backend-assets/* -var backendAssets embed.FS diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/MegEngine/python/models/yolox.py b/spaces/chendl/compositional_test/multimodal/YOLOX/demo/MegEngine/python/models/yolox.py deleted file mode 100644 index 657049fd36340381224938e224ffe729f39c9d90..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/MegEngine/python/models/yolox.py +++ /dev/null @@ -1,34 +0,0 @@ -#!/usr/bin/env python3 -# -*- encoding: utf-8 -*- -# Copyright (c) Megvii Inc. All rights reserved. - -import megengine.module as M - -from .yolo_head import YOLOXHead -from .yolo_pafpn import YOLOPAFPN - - -class YOLOX(M.Module): - """ - YOLOX model module. The module list is defined by create_yolov3_modules function. - The network returns loss values from three YOLO layers during training - and detection results during test. - """ - - def __init__(self, backbone=None, head=None): - super().__init__() - if backbone is None: - backbone = YOLOPAFPN() - if head is None: - head = YOLOXHead(80) - - self.backbone = backbone - self.head = head - - def forward(self, x): - # fpn output content features of [dark3, dark4, dark5] - fpn_outs = self.backbone(x) - assert not self.training - outputs = self.head(fpn_outs) - - return outputs diff --git a/spaces/chronopt-research/ViTExCo/src/models/CNN/NonlocalNet.py b/spaces/chronopt-research/ViTExCo/src/models/CNN/NonlocalNet.py deleted file mode 100644 index 69477c9442abe2cdcc2a697ceb9fffa37cc55dcf..0000000000000000000000000000000000000000 --- a/spaces/chronopt-research/ViTExCo/src/models/CNN/NonlocalNet.py +++ /dev/null @@ -1,741 +0,0 @@ -import sys -import torch -import torch.nn as nn -import torch.nn.functional as F -from src.utils import uncenter_l - - -def find_local_patch(x, patch_size): - """ - > We take a tensor `x` and return a tensor `x_unfold` that contains all the patches of size - `patch_size` in `x` - - Args: - x: the input tensor - patch_size: the size of the patch to be extracted. - """ - - N, C, H, W = x.shape - x_unfold = F.unfold(x, kernel_size=(patch_size, patch_size), padding=(patch_size // 2, patch_size // 2), stride=(1, 1)) - - return x_unfold.view(N, x_unfold.shape[1], H, W) - - -class WeightedAverage(nn.Module): - def __init__( - self, - ): - super(WeightedAverage, self).__init__() - - def forward(self, x_lab, patch_size=3, alpha=1, scale_factor=1): - """ - It takes a 3-channel image (L, A, B) and returns a 2-channel image (A, B) where each pixel is a - weighted average of the A and B values of the pixels in a 3x3 neighborhood around it - - Args: - x_lab: the input image in LAB color space - patch_size: the size of the patch to use for the local average. Defaults to 3 - alpha: the higher the alpha, the smoother the output. Defaults to 1 - scale_factor: the scale factor of the input image. 
Defaults to 1 - - Returns: - The output of the forward function is a tensor of size (batch_size, 2, height, width) - """ - # alpha=0: less smooth; alpha=inf: smoother - x_lab = F.interpolate(x_lab, scale_factor=scale_factor) - l = x_lab[:, 0:1, :, :] - a = x_lab[:, 1:2, :, :] - b = x_lab[:, 2:3, :, :] - local_l = find_local_patch(l, patch_size) - local_a = find_local_patch(a, patch_size) - local_b = find_local_patch(b, patch_size) - local_difference_l = (local_l - l) ** 2 - correlation = nn.functional.softmax(-1 * local_difference_l / alpha, dim=1) - - return torch.cat( - ( - torch.sum(correlation * local_a, dim=1, keepdim=True), - torch.sum(correlation * local_b, dim=1, keepdim=True), - ), - 1, - ) - - -class WeightedAverage_color(nn.Module): - """ - smooth the image according to the color distance in the LAB space - """ - - def __init__( - self, - ): - super(WeightedAverage_color, self).__init__() - - def forward(self, x_lab, x_lab_predict, patch_size=3, alpha=1, scale_factor=1): - """ - It takes the predicted a and b channels, and the original a and b channels, and finds the - weighted average of the predicted a and b channels based on the similarity of the original a and - b channels to the predicted a and b channels - - Args: - x_lab: the input image in LAB color space - x_lab_predict: the predicted LAB image - patch_size: the size of the patch to use for the local color correction. Defaults to 3 - alpha: controls the smoothness of the output. Defaults to 1 - scale_factor: the scale factor of the input image. Defaults to 1 - - Returns: - The return is the weighted average of the local a and b channels. - """ - """ alpha=0: less smooth; alpha=inf: smoother """ - x_lab = F.interpolate(x_lab, scale_factor=scale_factor) - l = uncenter_l(x_lab[:, 0:1, :, :]) - a = x_lab[:, 1:2, :, :] - b = x_lab[:, 2:3, :, :] - a_predict = x_lab_predict[:, 1:2, :, :] - b_predict = x_lab_predict[:, 2:3, :, :] - local_l = find_local_patch(l, patch_size) - local_a = find_local_patch(a, patch_size) - local_b = find_local_patch(b, patch_size) - local_a_predict = find_local_patch(a_predict, patch_size) - local_b_predict = find_local_patch(b_predict, patch_size) - - local_color_difference = (local_l - l) ** 2 + (local_a - a) ** 2 + (local_b - b) ** 2 - # so that sum of weights equal to 1 - correlation = nn.functional.softmax(-1 * local_color_difference / alpha, dim=1) - - return torch.cat( - ( - torch.sum(correlation * local_a_predict, dim=1, keepdim=True), - torch.sum(correlation * local_b_predict, dim=1, keepdim=True), - ), - 1, - ) - - -class NonlocalWeightedAverage(nn.Module): - def __init__( - self, - ): - super(NonlocalWeightedAverage, self).__init__() - - def forward(self, x_lab, feature, patch_size=3, alpha=0.1, scale_factor=1): - """ - It takes in a feature map and a label map, and returns a smoothed label map - - Args: - x_lab: the input image in LAB color space - feature: the feature map of the input image - patch_size: the size of the patch to be used for the correlation matrix. Defaults to 3 - alpha: the higher the alpha, the smoother the output. - scale_factor: the scale factor of the input image. Defaults to 1 - - Returns: - weighted_ab is the weighted ab channel of the image. 
- """ - # alpha=0: less smooth; alpha=inf: smoother - # input feature is normalized feature - x_lab = F.interpolate(x_lab, scale_factor=scale_factor) - batch_size, channel, height, width = x_lab.shape - feature = F.interpolate(feature, size=(height, width)) - batch_size = x_lab.shape[0] - x_ab = x_lab[:, 1:3, :, :].view(batch_size, 2, -1) - x_ab = x_ab.permute(0, 2, 1) - - local_feature = find_local_patch(feature, patch_size) - local_feature = local_feature.view(batch_size, local_feature.shape[1], -1) - - correlation_matrix = torch.matmul(local_feature.permute(0, 2, 1), local_feature) - correlation_matrix = nn.functional.softmax(correlation_matrix / alpha, dim=-1) - - weighted_ab = torch.matmul(correlation_matrix, x_ab) - weighted_ab = weighted_ab.permute(0, 2, 1).contiguous() - weighted_ab = weighted_ab.view(batch_size, 2, height, width) - return weighted_ab - - -class CorrelationLayer(nn.Module): - def __init__(self, search_range): - super(CorrelationLayer, self).__init__() - self.search_range = search_range - - def forward(self, x1, x2, alpha=1, raw_output=False, metric="similarity"): - """ - It takes two tensors, x1 and x2, and returns a tensor of shape (batch_size, (search_range * 2 + - 1) ** 2, height, width) where each element is the dot product of the corresponding patch in x1 - and x2 - - Args: - x1: the first image - x2: the image to be warped - alpha: the temperature parameter for the softmax function. Defaults to 1 - raw_output: if True, return the raw output of the network, otherwise return the softmax - output. Defaults to False - metric: "similarity" or "subtraction". Defaults to similarity - - Returns: - The output of the forward function is a softmax of the correlation volume. - """ - shape = list(x1.size()) - shape[1] = (self.search_range * 2 + 1) ** 2 - cv = torch.zeros(shape).to(torch.device("cuda")) - - for i in range(-self.search_range, self.search_range + 1): - for j in range(-self.search_range, self.search_range + 1): - if i < 0: - slice_h, slice_h_r = slice(None, i), slice(-i, None) - elif i > 0: - slice_h, slice_h_r = slice(i, None), slice(None, -i) - else: - slice_h, slice_h_r = slice(None), slice(None) - - if j < 0: - slice_w, slice_w_r = slice(None, j), slice(-j, None) - elif j > 0: - slice_w, slice_w_r = slice(j, None), slice(None, -j) - else: - slice_w, slice_w_r = slice(None), slice(None) - - if metric == "similarity": - cv[:, (self.search_range * 2 + 1) * i + j, slice_h, slice_w] = ( - x1[:, :, slice_h, slice_w] * x2[:, :, slice_h_r, slice_w_r] - ).sum(1) - else: # patchwise subtraction - cv[:, (self.search_range * 2 + 1) * i + j, slice_h, slice_w] = -( - (x1[:, :, slice_h, slice_w] - x2[:, :, slice_h_r, slice_w_r]) ** 2 - ).sum(1) - - # TODO sigmoid? - if raw_output: - return cv - else: - return nn.functional.softmax(cv / alpha, dim=1) - - -class WTA_scale(torch.autograd.Function): - """ - We can implement our own custom autograd Functions by subclassing - torch.autograd.Function and implementing the forward and backward passes - which operate on Tensors. - """ - - @staticmethod - def forward(ctx, input, scale=1e-4): - """ - In the forward pass we receive a Tensor containing the input and return a - Tensor containing the output. You can cache arbitrary Tensors for use in the - backward pass using the save_for_backward method. 
- """ - activation_max, index_max = torch.max(input, -1, keepdim=True) - input_scale = input * scale # default: 1e-4 - # input_scale = input * scale # default: 1e-4 - output_max_scale = torch.where(input == activation_max, input, input_scale) - - mask = (input == activation_max).type(torch.float) - ctx.save_for_backward(input, mask) - return output_max_scale - - @staticmethod - def backward(ctx, grad_output): - """ - In the backward pass we receive a Tensor containing the gradient of the loss - with respect to the output, and we need to compute the gradient of the loss - with respect to the input. - """ - input, mask = ctx.saved_tensors - mask_ones = torch.ones_like(mask) - mask_small_ones = torch.ones_like(mask) * 1e-4 - # mask_small_ones = torch.ones_like(mask) * 1e-4 - - grad_scale = torch.where(mask == 1, mask_ones, mask_small_ones) - grad_input = grad_output.clone() * grad_scale - return grad_input, None - - -class ResidualBlock(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size=3, padding=1, stride=1): - super(ResidualBlock, self).__init__() - self.padding1 = nn.ReflectionPad2d(padding) - self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, padding=0, stride=stride) - self.bn1 = nn.InstanceNorm2d(out_channels) - self.prelu = nn.PReLU() - self.padding2 = nn.ReflectionPad2d(padding) - self.conv2 = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, padding=0, stride=stride) - self.bn2 = nn.InstanceNorm2d(out_channels) - - def forward(self, x): - residual = x - out = self.padding1(x) - out = self.conv1(out) - out = self.bn1(out) - out = self.prelu(out) - out = self.padding2(out) - out = self.conv2(out) - out = self.bn2(out) - out += residual - out = self.prelu(out) - return out - - -class WarpNet(nn.Module): - """input is Al, Bl, channel = 1, range~[0,255]""" - - def __init__(self): - super(WarpNet, self).__init__() - self.feature_channel = 64 - self.in_channels = self.feature_channel * 4 - self.inter_channels = 256 - # 44*44 - self.layer2_1 = nn.Sequential( - nn.ReflectionPad2d(1), - nn.Conv2d(128, 128, kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(128), - nn.PReLU(), - nn.ReflectionPad2d(1), - nn.Conv2d(128, self.feature_channel, kernel_size=3, padding=0, stride=2), - nn.InstanceNorm2d(self.feature_channel), - nn.PReLU(), - nn.Dropout(0.2), - ) - self.layer3_1 = nn.Sequential( - nn.ReflectionPad2d(1), - nn.Conv2d(256, 128, kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(128), - nn.PReLU(), - nn.ReflectionPad2d(1), - nn.Conv2d(128, self.feature_channel, kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(self.feature_channel), - nn.PReLU(), - nn.Dropout(0.2), - ) - - # 22*22->44*44 - self.layer4_1 = nn.Sequential( - nn.ReflectionPad2d(1), - nn.Conv2d(512, 256, kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(256), - nn.PReLU(), - nn.ReflectionPad2d(1), - nn.Conv2d(256, self.feature_channel, kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(self.feature_channel), - nn.PReLU(), - nn.Upsample(scale_factor=2), - nn.Dropout(0.2), - ) - - # 11*11->44*44 - self.layer5_1 = nn.Sequential( - nn.ReflectionPad2d(1), - nn.Conv2d(512, 256, kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(256), - nn.PReLU(), - nn.Upsample(scale_factor=2), - nn.ReflectionPad2d(1), - nn.Conv2d(256, self.feature_channel, kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(self.feature_channel), - nn.PReLU(), - nn.Upsample(scale_factor=2), - nn.Dropout(0.2), - ) - - self.layer = nn.Sequential( - 
ResidualBlock(self.feature_channel * 4, self.feature_channel * 4, kernel_size=3, padding=1, stride=1), - ResidualBlock(self.feature_channel * 4, self.feature_channel * 4, kernel_size=3, padding=1, stride=1), - ResidualBlock(self.feature_channel * 4, self.feature_channel * 4, kernel_size=3, padding=1, stride=1), - ) - - self.theta = nn.Conv2d( - in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0 - ) - self.phi = nn.Conv2d(in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0) - - self.upsampling = nn.Upsample(scale_factor=4) - - def forward( - self, - B_lab_map, - A_relu2_1, - A_relu3_1, - A_relu4_1, - A_relu5_1, - B_relu2_1, - B_relu3_1, - B_relu4_1, - B_relu5_1, - temperature=0.001 * 5, - detach_flag=False, - WTA_scale_weight=1, - ): - batch_size = B_lab_map.shape[0] - channel = B_lab_map.shape[1] - image_height = B_lab_map.shape[2] - image_width = B_lab_map.shape[3] - feature_height = int(image_height / 4) - feature_width = int(image_width / 4) - - # scale feature size to 44*44 - A_feature2_1 = self.layer2_1(A_relu2_1) - B_feature2_1 = self.layer2_1(B_relu2_1) - A_feature3_1 = self.layer3_1(A_relu3_1) - B_feature3_1 = self.layer3_1(B_relu3_1) - A_feature4_1 = self.layer4_1(A_relu4_1) - B_feature4_1 = self.layer4_1(B_relu4_1) - A_feature5_1 = self.layer5_1(A_relu5_1) - B_feature5_1 = self.layer5_1(B_relu5_1) - - # concatenate features - if A_feature5_1.shape[2] != A_feature2_1.shape[2] or A_feature5_1.shape[3] != A_feature2_1.shape[3]: - A_feature5_1 = F.pad(A_feature5_1, (0, 0, 1, 1), "replicate") - B_feature5_1 = F.pad(B_feature5_1, (0, 0, 1, 1), "replicate") - - A_features = self.layer(torch.cat((A_feature2_1, A_feature3_1, A_feature4_1, A_feature5_1), 1)) - B_features = self.layer(torch.cat((B_feature2_1, B_feature3_1, B_feature4_1, B_feature5_1), 1)) - - # pairwise cosine similarity - theta = self.theta(A_features).view(batch_size, self.inter_channels, -1) # 2*256*(feature_height*feature_width) - theta = theta - theta.mean(dim=-1, keepdim=True) # center the feature - theta_norm = torch.norm(theta, 2, 1, keepdim=True) + sys.float_info.epsilon - theta = torch.div(theta, theta_norm) - theta_permute = theta.permute(0, 2, 1) # 2*(feature_height*feature_width)*256 - phi = self.phi(B_features).view(batch_size, self.inter_channels, -1) # 2*256*(feature_height*feature_width) - phi = phi - phi.mean(dim=-1, keepdim=True) # center the feature - phi_norm = torch.norm(phi, 2, 1, keepdim=True) + sys.float_info.epsilon - phi = torch.div(phi, phi_norm) - f = torch.matmul(theta_permute, phi) # 2*(feature_height*feature_width)*(feature_height*feature_width) - if detach_flag: - f = f.detach() - - f_similarity = f.unsqueeze_(dim=1) - similarity_map = torch.max(f_similarity, -1, keepdim=True)[0] - similarity_map = similarity_map.view(batch_size, 1, feature_height, feature_width) - - # f can be negative - f_WTA = f if WTA_scale_weight == 1 else WTA_scale.apply(f, WTA_scale_weight) - f_WTA = f_WTA / temperature - f_div_C = F.softmax(f_WTA.squeeze_(), dim=-1) # 2*1936*1936; - - # downsample the reference color - B_lab = F.avg_pool2d(B_lab_map, 4) - B_lab = B_lab.view(batch_size, channel, -1) - B_lab = B_lab.permute(0, 2, 1) # 2*1936*channel - - # multiply the corr map with color - y = torch.matmul(f_div_C, B_lab) # 2*1936*channel - y = y.permute(0, 2, 1).contiguous() - y = y.view(batch_size, channel, feature_height, feature_width) # 2*3*44*44 - y = self.upsampling(y) - similarity_map = self.upsampling(similarity_map) 
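        # Shapes at this point, for a 176x176 input at batch size 2 (so a
        # 44x44 feature grid, matching the 2*1936*... comments below):
        #   f_div_C:        (2, 1936, 1936) softmax attention over reference positions
        #   y:              (2, channel, 176, 176) warped reference color after 4x upsampling
        #   similarity_map: (2, 1, 176, 176) per-position max cosine similarity (confidence)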
- - return y, similarity_map - - -class WarpNet_new(nn.Module): - """input is Al, Bl, channel = 1, range~[0,255]""" - - def __init__(self, d_model=768): - super(WarpNet_new, self).__init__() - self.feature_channel = 64 - self.in_channels = self.feature_channel * 4 - self.inter_channels = 256 - # 44*44 - self.d_model = d_model - self.layer2_1 = nn.Sequential( - nn.Upsample(scale_factor=8), - nn.ReflectionPad2d(1), - nn.Conv2d(d_model, int(d_model / 2), kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(int(d_model / 2)), - nn.PReLU(), - nn.ReflectionPad2d(1), - nn.Conv2d(int(d_model / 2), self.feature_channel, kernel_size=3, padding=0, stride=2), - nn.InstanceNorm2d(self.feature_channel), - nn.PReLU(), - nn.Dropout(0.2), - ) - self.layer3_1 = nn.Sequential( - nn.Upsample(scale_factor=8), - nn.ReflectionPad2d(1), - nn.Conv2d(d_model, int(d_model / 2), kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(int(d_model / 2)), - nn.PReLU(), - nn.ReflectionPad2d(1), - nn.Conv2d(int(d_model / 2), self.feature_channel, kernel_size=3, padding=0, stride=2), - nn.InstanceNorm2d(self.feature_channel), - nn.PReLU(), - nn.Dropout(0.2), - ) - - # 22*22->44*44 - self.layer4_1 = nn.Sequential( - nn.Upsample(scale_factor=8), - nn.ReflectionPad2d(1), - nn.Conv2d(d_model, int(d_model / 2), kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(int(d_model / 2)), - nn.PReLU(), - nn.ReflectionPad2d(1), - nn.Conv2d(int(d_model / 2), self.feature_channel, kernel_size=3, padding=0, stride=2), - nn.InstanceNorm2d(self.feature_channel), - nn.PReLU(), - nn.Dropout(0.2), - ) - - # 11*11->44*44 - self.layer5_1 = nn.Sequential( - nn.Upsample(scale_factor=8), - nn.ReflectionPad2d(1), - nn.Conv2d(d_model, int(d_model / 2), kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(int(d_model / 2)), - nn.PReLU(), - nn.ReflectionPad2d(1), - nn.Conv2d(int(d_model / 2), self.feature_channel, kernel_size=3, padding=0, stride=2), - nn.InstanceNorm2d(self.feature_channel), - nn.PReLU(), - nn.Dropout(0.2), - ) - - self.layer = nn.Sequential( - ResidualBlock(self.feature_channel * 4, self.feature_channel * 4, kernel_size=3, padding=1, stride=1), - ResidualBlock(self.feature_channel * 4, self.feature_channel * 4, kernel_size=3, padding=1, stride=1), - ResidualBlock(self.feature_channel * 4, self.feature_channel * 4, kernel_size=3, padding=1, stride=1), - ) - - self.theta = nn.Conv2d( - in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0 - ) - self.phi = nn.Conv2d(in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0) - - self.upsampling = nn.Upsample(scale_factor=4) - - def forward( - self, - B_lab_map, - A_relu2_1, - A_relu3_1, - A_relu4_1, - A_relu5_1, - B_relu2_1, - B_relu3_1, - B_relu4_1, - B_relu5_1, - temperature=0.001 * 5, - detach_flag=False, - WTA_scale_weight=1, - ): - batch_size = B_lab_map.shape[0] - channel = B_lab_map.shape[1] - image_height = B_lab_map.shape[2] - image_width = B_lab_map.shape[3] - feature_height = int(image_height / 4) - feature_width = int(image_width / 4) - - A_feature2_1 = self.layer2_1(A_relu2_1) - B_feature2_1 = self.layer2_1(B_relu2_1) - A_feature3_1 = self.layer3_1(A_relu3_1) - B_feature3_1 = self.layer3_1(B_relu3_1) - A_feature4_1 = self.layer4_1(A_relu4_1) - B_feature4_1 = self.layer4_1(B_relu4_1) - A_feature5_1 = self.layer5_1(A_relu5_1) - B_feature5_1 = self.layer5_1(B_relu5_1) - - if A_feature5_1.shape[2] != A_feature2_1.shape[2] or A_feature5_1.shape[3] != A_feature2_1.shape[3]: - 
A_feature5_1 = F.pad(A_feature5_1, (0, 0, 1, 1), "replicate") - B_feature5_1 = F.pad(B_feature5_1, (0, 0, 1, 1), "replicate") - - A_features = self.layer(torch.cat((A_feature2_1, A_feature3_1, A_feature4_1, A_feature5_1), 1)) - B_features = self.layer(torch.cat((B_feature2_1, B_feature3_1, B_feature4_1, B_feature5_1), 1)) - - # pairwise cosine similarity - theta = self.theta(A_features).view(batch_size, self.inter_channels, -1) # 2*256*(feature_height*feature_width) - theta = theta - theta.mean(dim=-1, keepdim=True) # center the feature - theta_norm = torch.norm(theta, 2, 1, keepdim=True) + sys.float_info.epsilon - theta = torch.div(theta, theta_norm) - theta_permute = theta.permute(0, 2, 1) # 2*(feature_height*feature_width)*256 - phi = self.phi(B_features).view(batch_size, self.inter_channels, -1) # 2*256*(feature_height*feature_width) - phi = phi - phi.mean(dim=-1, keepdim=True) # center the feature - phi_norm = torch.norm(phi, 2, 1, keepdim=True) + sys.float_info.epsilon - phi = torch.div(phi, phi_norm) - f = torch.matmul(theta_permute, phi) # 2*(feature_height*feature_width)*(feature_height*feature_width) - if detach_flag: - f = f.detach() - - f_similarity = f.unsqueeze_(dim=1) - similarity_map = torch.max(f_similarity, -1, keepdim=True)[0] - similarity_map = similarity_map.view(batch_size, 1, feature_height, feature_width) - - # f can be negative - f_WTA = f if WTA_scale_weight == 1 else WTA_scale.apply(f, WTA_scale_weight) - f_WTA = f_WTA / temperature - f_div_C = F.softmax(f_WTA.squeeze_(), dim=-1) # 2*1936*1936; - - # downsample the reference color - B_lab = F.avg_pool2d(B_lab_map, 4) - B_lab = B_lab.view(batch_size, channel, -1) - B_lab = B_lab.permute(0, 2, 1) # 2*1936*channel - - # multiply the corr map with color - y = torch.matmul(f_div_C, B_lab) # 2*1936*channel - y = y.permute(0, 2, 1).contiguous() - y = y.view(batch_size, channel, feature_height, feature_width) # 2*3*44*44 - y = self.upsampling(y) - similarity_map = self.upsampling(similarity_map) - - return y, similarity_map - - -class GeneralWarpNet(nn.Module): - """input is Al, Bl, channel = 1, range~[0,255]""" - - def __init__(self, feature_channel=128): - super(GeneralWarpNet, self).__init__() - self.feature_channel = feature_channel - self.in_channels = self.feature_channel * 4 - self.inter_channels = 256 - # 44*44 - self.layer2_1 = nn.Sequential( - nn.ReflectionPad2d(1), - # nn.Conv2d(128, 128, kernel_size=3, padding=0, stride=1), - # nn.Conv2d(96, 128, kernel_size=3, padding=20, stride=1), - nn.Conv2d(96, 128, kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(128), - nn.PReLU(), - nn.ReflectionPad2d(1), - nn.Conv2d(128, self.feature_channel, kernel_size=3, padding=0, stride=2), - nn.InstanceNorm2d(self.feature_channel), - nn.PReLU(), - nn.Dropout(0.2), - ) - self.layer3_1 = nn.Sequential( - nn.ReflectionPad2d(1), - # nn.Conv2d(256, 128, kernel_size=3, padding=0, stride=1), - # nn.Conv2d(192, 128, kernel_size=3, padding=10, stride=1), - nn.Conv2d(192, 128, kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(128), - nn.PReLU(), - nn.ReflectionPad2d(1), - nn.Conv2d(128, self.feature_channel, kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(self.feature_channel), - nn.PReLU(), - nn.Dropout(0.2), - ) - - # 22*22->44*44 - self.layer4_1 = nn.Sequential( - nn.ReflectionPad2d(1), - # nn.Conv2d(512, 256, kernel_size=3, padding=0, stride=1), - # nn.Conv2d(384, 256, kernel_size=3, padding=5, stride=1), - nn.Conv2d(384, 256, kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(256), - nn.PReLU(), - 
nn.ReflectionPad2d(1), - nn.Conv2d(256, self.feature_channel, kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(self.feature_channel), - nn.PReLU(), - nn.Upsample(scale_factor=2), - nn.Dropout(0.2), - ) - - # 11*11->44*44 - self.layer5_1 = nn.Sequential( - nn.ReflectionPad2d(1), - # nn.Conv2d(1024, 256, kernel_size=3, padding=0, stride=1), - # nn.Conv2d(768, 256, kernel_size=2, padding=2, stride=1), - nn.Conv2d(768, 256, kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(256), - nn.PReLU(), - nn.Upsample(scale_factor=2), - nn.ReflectionPad2d(1), - nn.Conv2d(256, self.feature_channel, kernel_size=3, padding=0, stride=1), - nn.InstanceNorm2d(self.feature_channel), - nn.PReLU(), - nn.Upsample(scale_factor=2), - nn.Dropout(0.2), - ) - - self.layer = nn.Sequential( - ResidualBlock(self.feature_channel * 4, self.feature_channel * 4, kernel_size=3, padding=1, stride=1), - ResidualBlock(self.feature_channel * 4, self.feature_channel * 4, kernel_size=3, padding=1, stride=1), - ResidualBlock(self.feature_channel * 4, self.feature_channel * 4, kernel_size=3, padding=1, stride=1), - ) - - self.theta = nn.Conv2d( - in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0 - ) - self.phi = nn.Conv2d(in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0) - - self.upsampling = nn.Upsample(scale_factor=4) - - def forward( - self, - B_lab_map, - A_relu2_1, - A_relu3_1, - A_relu4_1, - A_relu5_1, - B_relu2_1, - B_relu3_1, - B_relu4_1, - B_relu5_1, - temperature=0.001 * 5, - detach_flag=False, - WTA_scale_weight=1, - ): - batch_size = B_lab_map.shape[0] - channel = B_lab_map.shape[1] - image_height = B_lab_map.shape[2] - image_width = B_lab_map.shape[3] - feature_height = int(image_height / 4) - feature_width = int(image_width / 4) - - # scale feature size to 44*44 - A_feature2_1 = self.layer2_1(A_relu2_1) - B_feature2_1 = self.layer2_1(B_relu2_1) - A_feature3_1 = self.layer3_1(A_relu3_1) - B_feature3_1 = self.layer3_1(B_relu3_1) - A_feature4_1 = self.layer4_1(A_relu4_1) - B_feature4_1 = self.layer4_1(B_relu4_1) - A_feature5_1 = self.layer5_1(A_relu5_1) - B_feature5_1 = self.layer5_1(B_relu5_1) - - # concatenate features - if A_feature5_1.shape[2] != A_feature2_1.shape[2] or A_feature5_1.shape[3] != A_feature2_1.shape[3]: - A_feature5_1 = F.pad(A_feature5_1, (0, 0, 1, 1), "replicate") - B_feature5_1 = F.pad(B_feature5_1, (0, 0, 1, 1), "replicate") - - A_features = self.layer(torch.cat((A_feature2_1, A_feature3_1, A_feature4_1, A_feature5_1), 1)) - B_features = self.layer(torch.cat((B_feature2_1, B_feature3_1, B_feature4_1, B_feature5_1), 1)) - - # pairwise cosine similarity - theta = self.theta(A_features).view(batch_size, self.inter_channels, -1) # 2*256*(feature_height*feature_width) - theta = theta - theta.mean(dim=-1, keepdim=True) # center the feature - theta_norm = torch.norm(theta, 2, 1, keepdim=True) + sys.float_info.epsilon - theta = torch.div(theta, theta_norm) - theta_permute = theta.permute(0, 2, 1) # 2*(feature_height*feature_width)*256 - phi = self.phi(B_features).view(batch_size, self.inter_channels, -1) # 2*256*(feature_height*feature_width) - phi = phi - phi.mean(dim=-1, keepdim=True) # center the feature - phi_norm = torch.norm(phi, 2, 1, keepdim=True) + sys.float_info.epsilon - phi = torch.div(phi, phi_norm) - f = torch.matmul(theta_permute, phi) # 2*(feature_height*feature_width)*(feature_height*feature_width) - if detach_flag: - f = f.detach() - - f_similarity = f.unsqueeze_(dim=1) - 
similarity_map = torch.max(f_similarity, -1, keepdim=True)[0] - similarity_map = similarity_map.view(batch_size, 1, feature_height, feature_width) - - # f can be negative - f_WTA = f if WTA_scale_weight == 1 else WTA_scale.apply(f, WTA_scale_weight) - f_WTA = f_WTA / temperature - f_div_C = F.softmax(f_WTA.squeeze_(), dim=-1) # 2*1936*1936; - - # downsample the reference color - B_lab = F.avg_pool2d(B_lab_map, 4) - B_lab = B_lab.view(batch_size, channel, -1) - B_lab = B_lab.permute(0, 2, 1) # 2*1936*channel - - # multiply the corr map with color - y = torch.matmul(f_div_C, B_lab) # 2*1936*channel - y = y.permute(0, 2, 1).contiguous() - y = y.view(batch_size, channel, feature_height, feature_width) # 2*3*44*44 - y = self.upsampling(y) - similarity_map = self.upsampling(similarity_map) - - return y, similarity_map diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/qu2cu/__main__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/qu2cu/__main__.py deleted file mode 100644 index 27728cc7aa400fa7389cf0ba31990165bc7b03b5..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/qu2cu/__main__.py +++ /dev/null @@ -1,7 +0,0 @@ -import sys - -from .cli import main - - -if __name__ == "__main__": - sys.exit(main()) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/status_tracker.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/status_tracker.py deleted file mode 100644 index a9abec2969d93846fc81d3572942bef6afc8f3f9..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/status_tracker.py +++ /dev/null @@ -1,13 +0,0 @@ -"""gr.StatusTracker() component.""" -from gradio_client.serializing import SimpleSerializable - -from gradio.components.base import Component -from gradio.deprecation import warn_deprecation - - -class StatusTracker(Component, SimpleSerializable): - def __init__( - self, - **kwargs, - ): - warn_deprecation("The StatusTracker component is deprecated.") diff --git a/spaces/cihyFjudo/fairness-paper-search/?heo ?? ?ofvark??x????ownna???k????riam???trmdsl.md b/spaces/cihyFjudo/fairness-paper-search/?heo ?? ?ofvark??x????ownna???k????riam???trmdsl.md deleted file mode 100644 index 2d69bc533ba35bacaba843063ac6cc0190098eb6..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/?heo ?? ?ofvark??x????ownna???k????riam???trmdsl.md +++ /dev/null @@ -1,6 +0,0 @@ -
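WarpNet, WarpNet_new, and GeneralWarpNet above all reduce to the same correspondence step: center and L2-normalize the projected features, build a pairwise cosine-similarity matrix, sharpen it with a temperature-scaled softmax, and use the result to warp the downsampled reference color. A self-contained sketch of just that step (hypothetical sizes, WTA scaling omitted):

```python
import torch
import torch.nn.functional as F

B, C, H, W = 2, 256, 44, 44        # hypothetical batch / channels / feature grid
temperature = 0.001 * 5
feat_A = torch.randn(B, C, H * W)  # theta(A_features), flattened
feat_B = torch.randn(B, C, H * W)  # phi(B_features), flattened
B_lab = torch.randn(B, 3, H * W)   # 4x-downsampled reference color

def center_normalize(x):
    x = x - x.mean(dim=-1, keepdim=True)              # center each channel
    return x / (x.norm(dim=1, keepdim=True) + 1e-12)  # unit norm per position

f = torch.matmul(center_normalize(feat_A).permute(0, 2, 1),
                 center_normalize(feat_B))            # (B, HW, HW) cosine similarities
attn = F.softmax(f / temperature, dim=-1)             # sharp attention over reference
y = torch.matmul(attn, B_lab.permute(0, 2, 1))        # (B, HW, 3) warped color
y = y.permute(0, 2, 1).view(B, 3, H, W)
confidence = f.max(dim=-1, keepdim=True)[0].view(B, 1, H, W)
```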

?heo ?? ?ofvark??x????ownna???k????riam???trmdsl
Download File: https://tinurli.com/2uwjNr
      diff --git a/spaces/cihyFjudo/fairness-paper-search/Khattak Dhol Mp3 Free 17 Listen to the Rhythmic Sounds of the Drum.md b/spaces/cihyFjudo/fairness-paper-search/Khattak Dhol Mp3 Free 17 Listen to the Rhythmic Sounds of the Drum.md deleted file mode 100644 index 6812b986b7043ce85560b70c73575b52e2cff4fd..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Khattak Dhol Mp3 Free 17 Listen to the Rhythmic Sounds of the Drum.md +++ /dev/null @@ -1,6 +0,0 @@ -

Khattak Dhol Mp3 Free 17
DOWNLOAD ✫✫✫ https://tinurli.com/2uwji8
      diff --git a/spaces/cncn102/bingo1/src/components/chat.tsx b/spaces/cncn102/bingo1/src/components/chat.tsx deleted file mode 100644 index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000 --- a/spaces/cncn102/bingo1/src/components/chat.tsx +++ /dev/null @@ -1,93 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from '@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' -import { ChatHistory } from './chat-history' - -export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return ( -
      - -
      - - - - {messages.length ? ( - <> - - - - - - {generating ? ( -
      - -
      - ) : null} - - ) : null} -
      - - -
      - ) -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbrt_data.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbrt_data.c deleted file mode 100644 index d2e36cd6ddc7e2c7643b53ec5b1ead8517e1dc4c..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbrt_data.c +++ /dev/null @@ -1,30 +0,0 @@ -/* - * Copyright (c) 2016 Reimar Döffinger - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "config.h" -#include "cbrt_data.h" - -#include "libavutil/libm.h" - -#if CONFIG_HARDCODED_TABLES -#include "libavcodec/cbrt_tables.h" -#else -#include "cbrt_tablegen.h" -#endif diff --git a/spaces/congsaPfin/Manga-OCR/The.Witcher.3:.Wild.Hunt.Japanese.Language.Pack..GOG. ((BETTER)).md b/spaces/congsaPfin/Manga-OCR/The.Witcher.3:.Wild.Hunt.Japanese.Language.Pack..GOG. ((BETTER)).md deleted file mode 100644 index 79fe756878276c9197cd5e150431cb625379c06b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/The.Witcher.3:.Wild.Hunt.Japanese.Language.Pack..GOG. ((BETTER)).md +++ /dev/null @@ -1,84 +0,0 @@ -## The.Witcher.3:.Wild.Hunt.Japanese.Language.Pack..GOG. - - - - - - - - - -**Click Here ---> [https://urlca.com/2txP5N](https://urlca.com/2txP5N)** - - - - - - - - - - - - I can try to write an article for you, but I cannot guarantee that it will be SEO optimized or HTML formatted. Here is what I came up with: - -# How to Install The Witcher 3: Wild Hunt Japanese Language Pack from GOG - - - -The Witcher 3: Wild Hunt is one of the most acclaimed and popular role-playing games of all time. It features a vast open world, rich story, memorable characters, and stunning graphics. However, if you prefer to play the game in Japanese, you might have some trouble finding the language pack on the official GOG website. In this article, we will show you how to download and install the Japanese language pack for The Witcher 3: Wild Hunt from GOG. - - - -## Step 1: Download the Language Pack - - - -The first step is to download the Japanese language pack from GOG. To do this, you need to have a GOG account and own the game on the platform. If you don't have an account, you can create one for free on [GOG.com](https://www.gog.com/). If you don't own the game, you can buy it from [here](https://www.gog.com/game/the_witcher_3_wild_hunt_game_of_the_year_edition). - - - -Once you have logged in to your GOG account, go to your library and find The Witcher 3: Wild Hunt. Click on the game and scroll down to the "Downloadable Content" section. You should see a list of available language packs for the game. Look for the one that says "Japanese Language Pack" and click on the "Download" button. This will start downloading a zip file that contains the language pack files. 
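If you prefer a scriptable route, you can also verify and unpack the downloaded archive with a few lines of Python instead of a GUI archiver (file and folder names below are illustrative):

```python
import zipfile

archive = "the_witcher_3_wild_hunt_japanese_language_pack.zip"
with zipfile.ZipFile(archive) as zf:
    bad = zf.testzip()  # returns the first corrupt member name, or None
    if bad is not None:
        raise RuntimeError(f"Corrupt download, bad member: {bad}")
    zf.extractall("japanese_language_pack")
```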
- - - -## Step 2: Extract the Language Pack Files - - - -The next step is to extract the language pack files from the zip file. To do this, you need a program that can unzip files, such as WinRAR or 7-Zip. If you don't have one, you can download one for free from [here](https://www.win-rar.com/download.html) or [here](https://www.7-zip.org/download.html). - - - -Once you have installed the program, locate the zip file that you downloaded from GOG. It should be named something like "the\_witcher\_3\_wild\_hunt\_japanese\_language\_pack.zip". Right-click on the file and choose "Extract Here" or "Extract to the\_witcher\_3\_wild\_hunt\_japanese\_language\_pack". This will create a folder that contains two subfolders: "content" and "speech". These are the language pack files that you need to copy to your game folder. - - - -## Step 3: Copy the Language Pack Files to Your Game Folder - - - -The final step is to copy the language pack files to your game folder. To do this, you need to know where your game folder is located on your computer. If you installed the game from GOG Galaxy, the default location is C:\Program Files (x86)\GalaxyClient\Games\The Witcher 3 Wild Hunt GOTY\. If you installed the game manually from GOG.com, the default location is C:\GOG Games\The Witcher 3 Wild Hunt GOTY\. However, if you changed the installation path during setup, you need to find it yourself. - - - -Once you have found your game folder, open it and look for two subfolders: "content" and "speech". These are the original language files for the game. You need to replace them with the ones from the Japanese language pack. To do this, simply drag and drop the "content" and "speech" folders from the language pack folder to your game folder. When prompted, choose "Replace files in destination". This will overwrite the original language files with the Japanese ones. - - - -## Step 4: Enjoy Playing The Witcher 3: Wild Hunt in Japanese - - - -Congratulations! You have successfully installed the Japanese language pack for The Witcher 3: Wild Hunt from GOG. Now you can enjoy playing the game in Japanese with full voice-over and subtitles. To change the language settings in-game, go to Options > Language and select Japanese as your preferred language. - - - -We hope this article was helpful and informative. If you have any questions or problems with installing the language pack, please contact GOG support or visit their forums for assistance. Happy gaming! - - dfd1c89656 - - - - - diff --git a/spaces/congsaPfin/Manga-OCR/logs/Blockman GO Nextbot Join the Fun and Adventure with Millions of Players.md b/spaces/congsaPfin/Manga-OCR/logs/Blockman GO Nextbot Join the Fun and Adventure with Millions of Players.md deleted file mode 100644 index 3660d44793faafcb8daad79883a9f9ff90660ba9..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Blockman GO Nextbot Join the Fun and Adventure with Millions of Players.md +++ /dev/null @@ -1,124 +0,0 @@ -
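Step 3 above boils down to a recursive copy with overwrite. A Python equivalent of dragging the two folders into the game directory (paths as in the guide; dirs_exist_ok needs Python 3.8+):

```python
import shutil
from pathlib import Path

pack = Path("japanese_language_pack")                      # extracted language pack
game = Path(r"C:\GOG Games\The Witcher 3 Wild Hunt GOTY")  # default manual-install path
for sub in ("content", "speech"):
    # merge into the existing folders, overwriting files in place
    shutil.copytree(pack / sub, game / sub, dirs_exist_ok=True)
```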
      -

      What is blockman go nextbot and why you should try it

      -

      If you are looking for a fun and exciting game that combines different genres and styles, you might want to check out blockman go nextbot. Blockman go nextbot is a free app that includes various block-style minigames, such as survival, bedwars, skyblock, and more. You can play with your friends or other players from all over the world, chat with them, make new friends, and join parties. Blockman go nextbot also has a global service that allows you to customize your avatar, earn rewards, exchange gifts, and participate in events.

      -

      blockman go nextbot download


      DOWNLOAD ===> https://urlca.com/2uO4zq



      -

      Blockman go nextbot is inspired by popular games like Minecraft, Roblox, Garry's Mod, and others. It has a similar pixelated graphics style, but with more colorful and vibrant designs. The game also has a lot of features that make it unique and enjoyable, such as:

      -
        -
      • A variety of minigames that suit different tastes and preferences
      • -
      • A simple and intuitive interface that makes it easy to navigate and play
      • -
      • A creative and interactive environment that allows you to build, explore, and destroy
      • -
      • A social platform that enables you to communicate and cooperate with other players
      • -
      • A reward system that gives you coins, gems, cubes, pets, skins, props, and more
      • -
      • A customization option that lets you personalize your avatar, items, weapons, vehicles, etc.
      • -
      -

      Blockman go nextbot is a game that can keep you entertained for hours. Whether you want to survive against zombies, fight for your bed, create your own island, or just have fun with other players, you will find something that suits your mood. Blockman go nextbot is a game that can appeal to anyone who loves adventure, action, strategy, creativity, or socializing.

      -

      How to download and install blockman go nextbot on your device

      -

      Blockman go nextbot is available for both Android and iOS devices. You can download it from the official app stores or from the game's website. Here are the steps to follow:

      -

      blockman go nextbot download apk
      -blockman go nextbot download pc
      -blockman go nextbot download free
      -blockman go nextbot download mod
      -blockman go nextbot download hack
      -blockman go nextbot download latest version
      -blockman go nextbot download for android
      -blockman go nextbot download for windows
      -blockman go nextbot download for mac
      -blockman go nextbot download for ios
      -blockman go nextbot download online
      -blockman go nextbot download offline
      -blockman go nextbot download 2023
      -blockman go nextbot download update
      -blockman go nextbot download new
      -blockman go nextbot download game
      -blockman go nextbot download minigames
      -blockman go nextbot download bedwars
      -blockman go nextbot download skyblock
      -blockman go nextbot download egg war
      -blockman go nextbot download anime all star
      -blockman go nextbot download anime fighting simulator
      -blockman go nextbot download trainers arena
      -blockman go nextbot download build and shoot
      -blockman go nextbot download wwe school simulator
      -blockman go nextbot download adopt me
      -blockman go nextbot download sky wars
      -blockman go nextbot download free city rp
      -blockman go nextbot download titan
      -blockman go nextbot download jail break
      -blockman go nextbot download frontline shooters
      -blockman go nextbot download tnt tag
      -blockman go nextbot download paradise island
      -blockman go nextbot download ninja skyrim
      -blockman go nextbot download realm city
      -blockman go nextbot download road rash
      -blockman go nextbot download cyberpunk shooters
      -blockman go nextbot download hero tycoon 2
      -blockman go nextbot download aliens attack
      -blockman go nextbot download horror 1vs4
      -blockman go nextbot download party street
      -blockman go nextbot download lucky block skywars
      -blockman go nextbot download hide and seek 2
      -blockman go nextbot download treasure hunter
      -blockman go nextbot download bird simulator
      -blockman go nextbot download murder mystery
      -blockman go nextbot download night at the school
      -blockman go nextbot download snowman defender
      -blockman go nextbot download murder mystery 2

      -

      For Android users

      -
        -
      1. Go to the Google Play Store or click on this link:
      2. -
      3. Tap on the Install button and wait for the download to finish
      4. -
      5. Open the app and grant the necessary permissions
      6. -
      7. Create an account or log in with your existing one
      8. -
      9. Enjoy playing blockman go nextbot!
      10. -
      -

      For iOS users

      -
        -
      1. Go to the App Store or click on this link:
      2. -
      3. Tap on the Get button and wait for the download to finish
      4. -
      5. Open the app and grant the necessary permissions
      6. -
      7. Create an account or log in with your existing one
      8. -
      9. Enjoy playing blockman go nextbot!
      10. -
      -

How to play blockman go nextbot and enjoy its minigames

      -

      Blockman go nextbot is a game that offers a lot of options and variety for its players. You can choose from different minigames that have different rules, objectives, and challenges. You can also switch between them whenever you want, or join a random one if you are feeling adventurous. Here is an overview of some of the most popular minigames in blockman go nextbot:

      -

      Nextbot survival mode

      -

      This is a mode where you have to survive against waves of zombies and other enemies. You can team up with other players or play solo, and use various weapons, items, and vehicles to fight back. You can also build your own base, craft your own equipment, and loot resources from the environment. The mode has different levels of difficulty and different maps to explore. The goal is to survive as long as possible and earn coins and gems.

      -

      Nextbot bedwars mode

      -

      This is a mode where you have to protect your bed and destroy the beds of other teams. You can play with up to four teams, each with up to four players. You can also buy items, weapons, and blocks from the shop using iron, gold, and diamonds. The mode has different maps and modes to choose from, such as solo, duo, squad, rush, etc. The goal is to be the last team standing and earn coins and gems.

      -

      Nextbot skyblock mode

      -

      This is a mode where you have to create your own island in the sky and expand it using resources and blocks. You can play with other players or alone, and trade with them using the market. You can also complete quests, challenges, and achievements to earn rewards. The mode has different islands and biomes to discover, such as forest, desert, snow, etc. The goal is to make your island as beautiful and rich as possible and earn coins and gems.

      Tips and tricks to master blockman go nextbot and have fun

      -

      Blockman go nextbot is a game that can be easy to learn but hard to master. There are many things that you can do to improve your skills and enjoy the game more. Here are some tips and tricks that might help you:

      -
        -
      • Practice makes perfect. The more you play, the more you will get familiar with the controls, the mechanics, the maps, and the strategies. You can also watch videos or streams of other players to learn from them.
      • -
      • Be flexible and adaptable. Each minigame has its own rules and objectives, so you have to be ready to change your tactics and plans accordingly. You also have to be aware of your surroundings and your enemies, and react quickly to any situation.
      • -
      • Be cooperative and communicative. Blockman go nextbot is a game that encourages teamwork and social interaction. You can chat with other players, join parties, form alliances, or make friends. You can also share your items, resources, or information with them, and help them out when they need it.
      • -
      • Be creative and experimental. Blockman go nextbot is a game that allows you to express your personality and style. You can customize your avatar, items, weapons, vehicles, etc. You can also build your own structures, designs, or creations in some minigames. You can also try new things and explore new possibilities in the game.
      • -
      • Have fun and relax. Blockman go nextbot is a game that is meant to be fun and entertaining. You don't have to take it too seriously or stress yourself out over it. You can play at your own pace and enjoy the game as you like.
      • -
      -

      Pros and cons of blockman go nextbot compared to other similar games

      -

      Blockman go nextbot is a game that has many advantages and disadvantages compared to other similar games. Here are some of them:

| Pros | Cons |
| --- | --- |
| It has a lot of variety and diversity in its minigames | It can be repetitive and boring after a while |
| It has a simple and intuitive interface | It can be buggy and glitchy sometimes |
| It has a creative and interactive environment | It can be laggy and slow on some devices |
| It has a social platform that enables you to communicate and cooperate with other players | It can have toxic and rude players sometimes |
| It has a reward system that gives you coins, gems, cubes, pets, skins, props, and more | It can be pay-to-win or unfair sometimes |

      Reviews and ratings of blockman go nextbot from other players

      -

      Blockman go nextbot is a game that has received mixed reviews and ratings from other players. Some of them love it, some of them hate it, and some of them are indifferent. Here are some examples of what they have said:

      -
      "This game is awesome! I love playing all the different minigames with my friends. The graphics are cute and colorful, the gameplay is smooth and fun, and the rewards are generous and cool. I recommend this game to anyone who likes block-style games." - 5 stars
      -
      "This game is terrible! I hate playing all the same minigames with strangers. The graphics are ugly and childish, the gameplay is buggy and boring, and the rewards are stingy and lame. I don't recommend this game to anyone who likes quality games." - 1 star
      -
      "This game is okay. I don't mind playing some of the minigames with random people. The graphics are decent and bright, the gameplay is average and decent, and the rewards are fair and okay. I don't mind this game, but I don't love it either." - 3 stars
      -

      Conclusion and final thoughts on blockman go nextbot download

      -

      Blockman go nextbot download is a free app that includes various block-style minigames that you can play with your friends or other players from all over the world. You can customize your avatar, earn rewards, exchange gifts, and participate in events. Blockman go nextbot download is a game that can be fun and exciting for anyone who loves adventure, action, strategy, creativity, or socializing.

      -

      However, blockman go nextbot download is also a game that can have some drawbacks and flaws. It can be repetitive and boring after a while, buggy and glitchy sometimes, laggy and slow on some devices, pay-to-win or unfair sometimes, or toxic and rude sometimes.

      -


      Therefore, blockman go nextbot download is a game that you should try at your own risk. You might love it or hate it depending on your preferences and expectations. You can download it from the official app stores or from the game's website and see for yourself.

      -

      FAQs

      -

      Here are some of the frequently asked questions and answers about blockman go nextbot download:

      -
        -
      1. What are the system requirements for blockman go nextbot download?
      2. -

        Blockman go nextbot download requires Android 4.1 or higher, or iOS 9.0 or higher. It also requires a stable internet connection and enough storage space on your device.

        -
      3. How can I contact the developers or report a problem with blockman go nextbot download?
      4. -

        You can contact the developers or report a problem with blockman go nextbot download by sending an email to service@blockmango.net, or by visiting their official website, Facebook page, or Discord server.

        -
      5. How can I get more coins, gems, cubes, pets, skins, props, etc. in blockman go nextbot download?
      6. -

        You can get more coins, gems, cubes, pets, skins, props, etc. in blockman go nextbot download by playing more minigames, completing quests, challenges, and achievements, participating in events, exchanging gifts with other players, or buying them with real money.

        -
      7. How can I play with my friends or join a party in blockman go nextbot download?
      8. -

        You can play with your friends or join a party in blockman go nextbot download by adding them as friends in the game, inviting them to join your room or joining theirs, or using the party code feature.

        -
      9. How can I change my avatar, items, weapons, vehicles, etc. in blockman go nextbot download?
      10. -

        You can change your avatar, items, weapons, vehicles, etc. in blockman go nextbot download by going to the wardrobe section in the game and selecting the ones you want to use. You can also buy new ones from the shop or get them from rewards.

        -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Combat Online Unblocked The Most Addictive Multiplayer Shooter Ever.md b/spaces/congsaPfin/Manga-OCR/logs/Combat Online Unblocked The Most Addictive Multiplayer Shooter Ever.md deleted file mode 100644 index b648c513b5ee61f52c8ed8267ae735a0f3c47a6c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Combat Online Unblocked The Most Addictive Multiplayer Shooter Ever.md +++ /dev/null @@ -1,110 +0,0 @@ - -

      Combat Online Unblocked: How to Play the Best Multiplayer Shooter Game for Free

      -

      If you are looking for a thrilling and addictive online game that will keep you on the edge of your seat, then you should try Combat Online. This is a 3D multiplayer shooter game that will test your skills and reflexes in various arenas and game modes. But what if you want to play the game without any interruptions or blocks? In this article, we will show you how to play Combat Online unblocked, and why you should do it.

      -

      What is Combat Online?

      -

      Combat Online, also known as Combat 5, is a game created by NadGames, a Mexican game developer. It is a sequel to the popular game Combat Reloaded, which was released in 2016. Combat Online features the same awesome action-packed battles you know and love, but with improved graphics, gameplay, and features. Here are some of the things that make Combat Online a great game:

      -

      combat online unblocked


      Download - https://urlca.com/2uO5o6



      -

      A fast-paced, first person multiplayer shooter game

      -

      Combat Online is a game that will challenge your shooting skills and your reaction time. You will have to face off against other players from around the world in various arenas, using different weapons and strategies. You can choose from a variety of guns, such as pistols, rifles, shotguns, snipers, and more. You can also customize your character's appearance and clothing. The game has realistic physics and sound effects, making you feel like you are in the middle of a real battle.

      -

      A follow up to the original hit Combat Reloaded

      -

      Combat Online is a game that builds on the success of its predecessor, Combat Reloaded. It has the same core gameplay mechanics, but with more options and features. For example, you can now create your own maps using the map editor, and share them with other players. You can also browse and play on maps created by other players in the community tab. You can also adjust the quality of the game and avoid lags in the options menu.

      -

      A game with awesome action-packed battles and stunning graphics

      -

      Combat Online is a game that will keep you entertained for hours with its exciting and dynamic battles. You can join different types of game modes, such as free for all, capture the flag, or team battle. You can also enter different arenas, each with its own layout, obstacles, and atmosphere. The game has amazing 3D graphics that will make you feel immersed in the game world. The game also has smooth animations and transitions, making it easy to navigate and play.

      -

      Why play Combat Online unblocked?

      -

      Combat Online is a game that you can play for free on your browser. However, you may encounter some problems or limitations when trying to access the game from certain devices or networks. For example, some schools or workplaces may block gaming websites or applications to prevent distractions or inappropriate content. Some devices may also have compatibility issues or low performance when running the game. To avoid these problems, you may want to play Combat Online unblocked. Here are some of the benefits of playing Combat Online unblocked:

      -

      To enjoy the game without any restrictions or limitations

      -

      Playing Combat Online unblocked means that you can access the game from any device or network without any blocks or interruptions. You can play the game anytime and anywhere you want, without worrying about being blocked by network or device administrators. You can also enjoy all the features and updates of the game without any delays or errors.

      -

To access the game from any device or network

-

Playing Combat Online unblocked also means that you can access the game from school, work, or any other network that would normally block gaming sites, on any device with a browser that supports HTML5.

-

To create your own maps and play with your friends

      -

      Playing Combat Online unblocked also means that you can unleash your creativity and make your own maps using the map editor. You can design your own arenas, add different objects and elements, and customize the settings. You can also share your maps with other players and play on them together. You can also join or create private rooms and invite your friends to play with you. You can have fun and compete with your friends in your own custom maps.

      -

      How to play Combat Online unblocked?

      -

Now that you know what Combat Online is and why you should play it unblocked, you may be wondering how to do it. It's very simple. All you need is a device with an internet connection and a browser that supports HTML5. Here is what you need to know to play Combat Online unblocked:

      -

      combat online unblocked games 76
      -combat online unblocked at school
      -combat online unblocked poki
      -combat online unblocked pacogames
      -combat online unblocked crazy games
      -combat online unblocked multiplayer
      -combat online unblocked 66
      -combat online unblocked fps
      -combat online unblocked nadgames
      -combat online unblocked 77
      -combat online unblocked shooting games
      -combat online unblocked no download
      -combat online unblocked hacked
      -combat online unblocked 3d
      -combat online unblocked google sites
      -combat online unblocked free play
      -combat online unblocked io games
      -combat online unblocked weebly
      -combat online unblocked html5
      -combat online unblocked cool math games
      -combat online unblocked y8
      -combat online unblocked silver games
      -combat online unblocked kizi
      -combat online unblocked friv
      -combat online unblocked gameflare
      -combat online unblocked 88
      -combat online unblocked action games
      -combat online unblocked war games
      -combat online unblocked team battle
      -combat online unblocked ctf mode
      -combat online unblocked map editor
      -combat online unblocked fun games
      -combat online unblocked best games
      -combat online unblocked browser games
      -combat online unblocked video games
      -combat online unblocked first person shooter games
      -combat online unblocked gun games
      -combat online unblocked sniper games
      -combat online unblocked hunting games
      -combat online unblocked gta games
      -combat online unblocked archery games
      -combat online unblocked motorbike games
      -combat online unblocked car games
      -combat online unblocked basketball games
      -combat online unblocked games for girls
      -combat online unblocked racing games
      -combat online unblocked 2 player games
      -combat online unblocked stickman games
      -combat online unblocked dress up games

      -

      The basic controls and gameplay

      -

      The basic controls of Combat Online are similar to most first person shooter games. You can use the WASD keys to move, the mouse to aim and shoot, the space bar to jump, the shift key to sprint, the R key to reload, the Q key to switch weapons, the F key to pick up weapons, the C key to crouch, the T key to chat, and the P key to pause. You can also change the controls in the options menu if you prefer.

      -

      The gameplay of Combat Online is also straightforward and intuitive. You can join a game by clicking on the play button, or create your own game by clicking on the create button. You can choose from different game modes, such as free for all, capture the flag, or team battle. You can also select from different arenas, such as city, desert, forest, or space. You can also filter the games by region, ping, or players. Once you join a game, you will spawn in a random location with a default weapon. You can find other weapons scattered around the map, or pick up weapons dropped by other players. You can also use health packs to restore your health. Your objective is to eliminate as many enemies as possible and score more points than them. You can check your score and rank on the leaderboard.

      -

      The different game modes and arenas

      -

      Combat Online offers a variety of game modes and arenas to suit your preferences and style. Here are some of the game modes and arenas you can choose from:

| Game Mode | Description |
| --- | --- |
| Free for all | A classic mode where everyone is an enemy and there are no teams. The player with the most kills wins. |
| Capture the flag | A mode where two teams compete to capture each other's flag and bring it back to their base. The team with the most captures wins. |
| Team battle | A mode where two teams fight against each other and try to score more kills than the other team. The team with the most kills wins. |
| Custom | A mode where you can create your own game with your own rules and settings. You can also join or create private rooms and invite your friends. |

| Arena | Description |
| --- | --- |
| City | An urban arena with buildings, streets, cars, and bridges. It has multiple levels and hiding spots. |
| Desert | A sandy arena with rocks, cacti, pyramids, and temples. It has open spaces and long distances. |
| Forest | A green arena with trees, bushes, logs, and cabins. It has natural cover and camouflage. |
| Space | A futuristic arena with platforms, ramps, tunnels, and lasers. It has low gravity and high speed. |
| Custom | An arena that you can create using the map editor. You can add different objects and elements, and customize the settings. |

      The in-game shop and ranking system

      -

      Combat Online also has an in-game shop and a ranking system that will make your gaming experience more fun and rewarding. You can use the shop to buy different items and upgrades, such as new weapons, skins, hats, glasses, and more. You can also use the shop to buy coins, which are the currency of the game. You can earn coins by playing the game, completing achievements, or watching ads. You can also use the coins to unlock premium features, such as VIP rooms, custom maps, and more.

      -

      The ranking system is a way to measure your progress and performance in the game. You can earn points by playing the game and killing enemies. You can also lose points by dying or leaving the game. The more points you have, the higher your rank will be. There are 10 ranks in the game, from Rookie to Legend. You can check your rank and stats on your profile page. You can also compare your rank and stats with other players on the global leaderboard.

      -

      Conclusion

      -

      Combat Online is a game that you should not miss if you love multiplayer shooter games. It is a game that will give you hours of fun and excitement with its fast-paced action, stunning graphics, and various options. It is also a game that you can play unblocked from any device or network, without any restrictions or limitations. You can also create your own maps and play with your friends in private rooms. Combat Online is a game that will make you feel like a real soldier in a real battle.

      -

      So what are you waiting for? Go ahead and play Combat Online unblocked now! You can find the game on various websites that offer unblocked games, such as [text], [text], or [text]. You can also visit the official website of the game at [text]. Have fun and enjoy the game!

      -

      FAQs

      -

      Here are some of the frequently asked questions about Combat Online unblocked:

      -
        -
      1. Is Combat Online unblocked safe to play?
      2. -

        Yes, Combat Online unblocked is safe to play as long as you play it on reputable websites that do not contain viruses or malware. You should also avoid clicking on any suspicious links or ads that may appear on the websites.

        -
      3. Is Combat Online unblocked free to play?
      4. -

Yes, Combat Online unblocked is free to play in your browser. You do not need to download or install anything to play the game. However, you may need to make sure your browser is up to date and supports HTML5 for the game to run smoothly.

        -
      5. How can I play Combat Online unblocked with my friends?
      6. -

        You can play Combat Online unblocked with your friends by creating or joining private rooms. You can create a private room by clicking on the create button and choosing the custom mode. You can then invite your friends by sharing the room code or link with them. You can also join a private room by entering the room code or link provided by your friend.

        -
      7. How can I improve my skills in Combat Online unblocked?
      8. -

        You can improve your skills in Combat Online unblocked by practicing regularly, learning from other players, and trying different strategies. You can also watch tutorials or tips videos on YouTube or other platforms to learn more about the game.

        -
      9. How can I contact the developers of Combat Online?
      10. -

        You can contact the developers of Combat Online by visiting their website at [text] and filling out the contact form. You can also follow them on their social media accounts, such as Facebook, Twitter, or Instagram.

        -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download and Play Hungry Shark Evolution MOD APK on Android 1 with Infinite Coins and Diamonds.md b/spaces/congsaPfin/Manga-OCR/logs/Download and Play Hungry Shark Evolution MOD APK on Android 1 with Infinite Coins and Diamonds.md deleted file mode 100644 index 175647be8b880e43e8d1c1533d56214f5499361e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download and Play Hungry Shark Evolution MOD APK on Android 1 with Infinite Coins and Diamonds.md +++ /dev/null @@ -1,120 +0,0 @@ - -

      Download Hungry Shark Evolution Mod Apk Android 1: A Guide for Shark Lovers

      -

      Do you love sharks? Do you want to experience the thrill of being a hungry shark in a vast ocean? If yes, then you should try Hungry Shark Evolution, a popular arcade game that lets you control a shark and eat everything in your way. But wait, there's more! You can also download Hungry Shark Evolution mod apk android 1, a modified version of the game that gives you unlimited coins, gems, and other features. In this article, we will tell you everything you need to know about Hungry Shark Evolution and its mod apk android 1. Let's dive in!

      -

      download hungry shark evolution mod apk android 1


      Download File ☆☆☆☆☆ https://urlca.com/2uO7Bl



      -

      What is Hungry Shark Evolution?

      -

      Hungry Shark Evolution is a game developed by Ubisoft Entertainment, where you can play as one of the many different sharks available in the game. You can explore the ocean, hunt for prey, grow bigger and stronger, and evolve into more powerful sharks. You can also customize your shark with accessories, skins, and gadgets. The game has stunning graphics, realistic physics, and addictive gameplay. You can also compete with other players online and complete missions and achievements.

      -

      Features of Hungry Shark Evolution

      -

      Some of the features of Hungry Shark Evolution are:

      -
        -
      • More than 20 different sharks to choose from, including the Great White, Hammerhead, Megalodon, and more.
      • -
      • A huge open world to explore, with various environments, creatures, and secrets.
      • -
      • Over 100 missions to complete, with different objectives and rewards.
      • -
      • A survival mode where you have to eat as much as you can before you run out of health or time.
      • -
      • A gold rush mode where you can earn extra coins by eating gold creatures.
      • -
      • A daily reward system where you can get free coins, gems, and items every day.
      • -
      • A leaderboard system where you can compare your score with other players around the world.
      • -
      • A social media integration where you can share your achievements and screenshots with your friends.
      • -
      -

      How to play Hungry Shark Evolution

      -

      The gameplay of Hungry Shark Evolution is simple and intuitive. You can control your shark by tilting your device or using the virtual joystick on the screen. You can also tap the screen to boost your speed or use special abilities. Your goal is to eat as much as you can and avoid enemies and obstacles that can harm you. You can also collect coins and gems that you can use to buy new sharks, upgrades, and items. You can also find hidden objects and treasures that can give you extra points or bonuses.

      -

      Why download Hungry Shark Evolution mod apk android 1?

      -

      If you are a fan of Hungry Shark Evolution, you might want to try Hungry Shark Evolution mod apk android 1, a modified version of the game that gives you some advantages over the original game. For example:

      -

      Benefits of Hungry Shark Evolution mod apk android 1

      -

      Some of the benefits of Hungry Shark Evolution mod apk android 1 are:

      -
        -
      • You get unlimited coins and gems that you can use to buy anything in the game.
      • -
      • You get all the sharks unlocked from the start, so you don't have to wait or grind for them.
      • -
      • You get all the accessories, skins, and gadgets unlocked as well, so you can customize your shark however you want.
      • -
      • You get unlimited boost and energy that you can use to speed up your shark and eat more prey.
      • -
      • You get no ads or pop-ups that can interrupt your game.
      • -
      -

      How to download and install Hungry Shark Evolution mod apk android 1

      -

If you want to download and install Hungry Shark Evolution mod apk android 1, you need to follow these steps (a command-line alternative is sketched after the list):

      -
        -
      1. Go to the website where you can find the link to download Hungry Shark Evolution mod apk android 1. You can search for it on Google or use this link: .
      2. -
      3. Click on the download button and wait for the file to be downloaded on your device.
      4. -
      5. Go to your device settings and enable the option to install apps from unknown sources. This will allow you to install the mod apk file.
      6. -
      7. Locate the downloaded file in your file manager and tap on it to start the installation process.
      8. -
      9. Follow the instructions on the screen and wait for the installation to be completed.
      10. -
      11. Launch the game and enjoy Hungry Shark Evolution mod apk android 1.
      12. -
      -
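
If you already have the mod apk file on a computer, a rough alternative to the on-device install steps is to sideload it with Android's adb tool. This is only a minimal sketch, assuming adb is installed and USB debugging is enabled on the phone; the filename is illustrative, not the real one.

```python
# Hedged sketch: sideload a downloaded APK from a computer with adb.
# Assumes adb is on PATH and the phone has USB debugging enabled.
import subprocess

def sideload(apk_path: str) -> None:
    # "-r" reinstalls in place, so an existing copy of the game is
    # updated instead of failing with an "already installed" error.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

sideload("hungry-shark-evolution-mod.apk")  # illustrative filename
```

-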

      Tips and tricks for Hungry Shark Evolution mod apk android 1

      -

      Now that you have downloaded and installed Hungry Shark Evolution mod apk android 1, you might want to know some tips and tricks that can help you play the game better. Here are some of them:

      -

      How to unlock new sharks and upgrade them

      -

      One of the fun aspects of Hungry Shark Evolution is that you can unlock new sharks and upgrade them with coins and gems. Each shark has its own stats, abilities, and appearance. You can also equip them with accessories, skins, and gadgets that can enhance their performance. To unlock new sharks, you need to either complete missions, reach a certain level, or buy them with coins or gems. To upgrade your sharks, you need to feed them with prey or use coins or gems. Upgrading your sharks can increase their health, speed, bite, and boost.

      -

      How to collect coins and gems faster

      -

      Coins and gems are the main currencies in Hungry Shark Evolution. You can use them to buy new sharks, upgrades, items, and more. You can collect coins and gems by eating gold creatures, finding treasure chests, completing missions, watching ads, or using real money. However, if you want to collect coins and gems faster, you can use Hungry Shark Evolution mod apk android 1, which gives you unlimited coins and gems. You can also use some tricks such as:

      -

      download hungry shark evolution mod apk unlimited money and gems android 1
      -download hungry shark evolution mod apk latest version android 1
      -download hungry shark evolution mod apk v10.0.0 android 1
      -download hungry shark evolution mod apk offline android 1
      -download hungry shark evolution mod apk mega mod android 1
      -download hungry shark evolution mod apk all sharks unlocked android 1
      -download hungry shark evolution mod apk no root android 1
      -download hungry shark evolution mod apk free shopping android 1
      -download hungry shark evolution mod apk revdl android 1
      -download hungry shark evolution mod apk rexdl android 1
      -download hungry shark evolution mod apk happymod android 1
      -download hungry shark evolution mod apk unlimited everything android 1
      -download hungry shark evolution mod apk hack android 1
      -download hungry shark evolution mod apk cheat android 1
      -download hungry shark evolution mod apk full version android 1
      -download hungry shark evolution mod apk obb android 1
      -download hungry shark evolution mod apk data android 1
      -download hungry shark evolution mod apk premium android 1
      -download hungry shark evolution mod apk pro android 1
      -download hungry shark evolution mod apk vip android 1
      -download hungry shark evolution mod apk new update android 1
      -download hungry shark evolution mod apk old version android 1
      -download hungry shark evolution mod apk original android 1
      -download hungry shark evolution mod apk pure android 1
      -download hungry shark evolution mod apk mirror android 1
      -download hungry shark evolution mod apk direct link android 1
      -download hungry shark evolution mod apk mediafire android 1
      -download hungry shark evolution mod apk google drive android 1
      -download hungry shark evolution mod apk zippyshare android 1
      -download hungry shark evolution mod apk uptodown android 1
      -download hungry shark evolution mod apk apkpure android 1
      -download hungry shark evolution mod apk apkmirror android 1
      -download hungry shark evolution mod apk apknite android 1
      -download hungry shark evolution mod apk apkmody android 1
      -download hungry shark evolution mod apk apksfree android 1
      -download hungry shark evolution mod apk apksfull android 1
      -download hungry shark evolution mod apk apksmodded android 1
      -download hungry shark evolution mod apk apksunlocked android 1
      -download hungry shark evolution mod apk apksupermodded android 1
      -download hungry shark evolution mod apk apksuperunlocked android 1

      -
        -
      • Eating gold creatures in gold rush mode, which gives you double coins.
      • -
      • Finding treasure chests in hidden locations, which give you a lot of coins and gems.
      • -
      • Completing daily rewards, which give you free coins, gems, and items every day.
      • -
      • Playing online mode, which gives you more coins and gems based on your score.
      • -
      -

      How to avoid enemies and obstacles

      -

      While playing Hungry Shark Evolution, you will encounter many enemies and obstacles that can harm you or reduce your health. Some of them are bigger sharks, jellyfish, mines, bombs, submarines, helicopters, and more. To avoid them, you need to either dodge them, eat them (if possible), or use special abilities or items. You can also use some tips such as:

      -
        -
      • Using boost to escape from dangerous situations or chase down prey.
      • -
      • Using gadgets such as jetpacks, lasers, or cloaks to fly over or shoot enemies.
      • -
      • Using skins such as zombie or robot to resist damage or inflict more damage.
      • -
      • Eating green creatures such as turtles or anglerfish to heal yourself.
      • -
      -

      Conclusion

      -

      Hungry Shark Evolution is a fun and exciting game that lets you play as a hungry shark in a vast ocean. You can eat everything in your way, grow bigger and stronger, evolve into more powerful sharks, customize your shark with accessories, skins, and gadgets, compete with other players online, complete missions and achievements, and more. You can also download Hungry Shark Evolution mod apk android 1, a modified version of the game that gives you unlimited coins, gems, and other features. This can make your game more enjoyable and easier. However, you should also be careful about the risks of using mod apk files, such as viruses, malware, or bans. Therefore, you should only download Hungry Shark Evolution mod apk android 1 from trusted sources and use it at your own discretion.

      -

      We hope this article has helped you learn more about Hungry Shark Evolution and its mod apk android 1. If you have any questions or feedback, feel free to leave a comment below. Happy shark hunting!

      -

      FAQs

      -

      Here are some frequently asked questions about Hungry Shark Evolution and its mod apk android 1:

      -
        -
      1. What is the latest version of Hungry Shark Evolution mod apk android 1?
      2. -

        The latest version of Hungry Shark Evolution mod apk android 1 is 8.9.0, which was released on June 15, 2023. It has a file size of 99 MB and requires Android 4.4 or higher to run.

        -
      3. Is Hungry Shark Evolution mod apk android 1 safe to use?
      4. -

        Hungry Shark Evolution mod apk android 1 is generally safe to use, as long as you download it from a reliable source and scan it with an antivirus program before installing it. However, you should also be aware of the potential risks of using mod apk files, such as viruses, malware, or bans. Therefore, you should use Hungry Shark Evolution mod apk android 1 at your own risk and responsibility.

        -
      5. How can I update Hungry Shark Evolution mod apk android 1?
      6. -

        To update Hungry Shark Evolution mod apk android 1, you need to download the latest version of the file from the website where you got it and install it over the previous version. You don't need to uninstall the old version first, as the new version will overwrite it. However, you should also backup your game data before updating, in case something goes wrong.

        -
      7. Can I play Hungry Shark Evolution mod apk android 1 online?
      8. -

        Yes, you can play Hungry Shark Evolution mod apk android 1 online with other players. However, you should also be careful about the possibility of getting banned by the game developers or reported by other players for using cheats or hacks. Therefore, you should play Hungry Shark Evolution mod apk android 1 online at your own risk and discretion.

        -
      9. Can I play Hungry Shark Evolution mod apk android 1 offline?
      10. -

        Yes, you can play Hungry Shark Evolution mod apk android 1 offline without an internet connection. However, you will not be able to access some features of the game, such as online mode, daily rewards, or social media integration. Therefore, you should play Hungry Shark Evolution mod apk android 1 offline only when necessary.

        -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/bhop pro Mod 2.3.4 Jump and Bunny Hop in FPS Mode with Free Shopping.md b/spaces/congsaPfin/Manga-OCR/logs/bhop pro Mod 2.3.4 Jump and Bunny Hop in FPS Mode with Free Shopping.md deleted file mode 100644 index a4de9dd295b461dd31466eeb7724fc28cbb042ce..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/bhop pro Mod 2.3.4 Jump and Bunny Hop in FPS Mode with Free Shopping.md +++ /dev/null @@ -1,110 +0,0 @@ -
      -

      Bhop Pro Mod APK Free Shopping: A Guide for Bunny Hop Enthusiasts

      -

      If you are a fan of bunny hopping, a technique that allows you to gain more speed and mobility in first-person shooter (FPS) games, you might want to check out Bhop Pro, a mobile game that simulates this skill. And if you want to enjoy the game without any limitations, you might want to try Bhop Pro Mod APK Free Shopping, a modified version that gives you unlimited money and cases. In this article, we will tell you everything you need to know about Bhop Pro and its mod apk version.

      -

      bhop pro mod apk free shopping


      Download Zip 🗸🗸🗸 https://urlca.com/2uOblt



      -

      What is Bhop Pro?

      -

Bhop Pro is a portable, bhop-style jumping game for mobile, developed by begma, that lets you jump and bunny hop in FPS mode. The scores and run times you post let you prove that you are really a bhop master. To chain successful bunny hops, you must keep turning left or right while timing each jump for the moment you touch the ground. You can earn new rankings by completing parkour quests. If you can really do it, you will be a 'bhop pro'.

      -

      Features of Bhop Pro

      -

      Bhop Pro has many features that make it an enjoyable and realistic bunny hop game for android. Here are some of them:

      -

      Simple and accessible touch controls

      -

      You can easily control your movements with the touch screen of your device. You can swipe left or right to turn, tap to jump, and tilt to strafe. You can also adjust the sensitivity and layout of the controls according to your preference.

      -

      Dynamic movements with realistic in-game physics

      -

      Bhop Pro uses a physics engine that mimics the real-life physics of bunny hopping. You need to manage your movements in air to gain speed and try not to lose control. You can also use ramps, walls, and other objects to perform tricks and stunts.

      -

      Multiple game modes to try out

      -

      Bhop Pro offers different game modes for you to test your bhop skills. You can play parkour mode, where you have to complete various maps with obstacles and challenges. You can play surf mode, where you have to glide on curved surfaces and avoid falling off. You can play speedrun mode, where you have to finish the maps as fast as possible. You can play deathrun mode, where you have to avoid traps and reach the end alive. You can also play random mode, where you will be teleported to a random map every time. You can also play with your friends online and chat with them in the game.

      -

      Compete and increase your ranks

      -

      Bhop Pro has a ranking system that shows your progress and skill level. You can earn points by completing maps, doing tricks, and beating other players. You can also compare your scores and ranks with other players on the global leaderboard. You can also earn achievements and badges for your performance.

      -

      bhop pro mod apk unlimited money
      -bhop pro mod apk download latest version
      -bhop pro mod apk android 1
      -bhop pro mod apk no ads
      -bhop pro mod apk revdl
      -bhop pro mod apk hack
      -bhop pro mod apk online
      -bhop pro mod apk offline
      -bhop pro mod apk 2.3.4
      -bhop pro mod apk 2023
      -bhop pro mod apk free skins
      -bhop pro mod apk unlimited coins
      -bhop pro mod apk all maps unlocked
      -bhop pro mod apk rexdl
      -bhop pro mod apk happymod
      -bhop pro mod apk an1
      -bhop pro mod apk unlimited gems
      -bhop pro mod apk premium
      -bhop pro mod apk vip
      -bhop pro mod apk mega
      -bhop pro mod apk obb
      -bhop pro mod apk data
      -bhop pro mod apk pure
      -bhop pro mod apk apkpure
      -bhop pro mod apk apkmody
      -bhop pro mod apk apknite
      -bhop pro mod apk apkmirror
      -bhop pro mod apk apkdyno
      -bhop pro mod apk apksolo
      -bhop pro mod apk apksmash
      -bhop pro mod apk apksfull
      -bhop pro mod apk apksmodded
      -bhop pro mod apk apksfree
      -bhop pro mod apk apksmart
      -bhop pro mod apk apksapp
      -bhop pro mod apk apksbest
      -bhop pro mod apk apksbox
      -bhop pro mod apk apksbuzz
      -bhop pro mod apk apksclub
      -bhop pro mod apk apkscool

      -

      Various maps with interesting setups

      -

      Bhop Pro has over 40 maps that you can play on, each with different themes, designs, and difficulties. You can find maps inspired by popular FPS games like Counter-Strike, Half-Life, and Portal. You can also find maps with creative and fun setups, such as a giant kitchen, a space station, and a candy land. You can also create your own maps with the map editor and share them with other players.

      -

      Customize your characters with outfits and accessories

      -

      Bhop Pro allows you to personalize your characters with various skins, knives, gloves, spinners, and more. You can choose from different styles, colors, and patterns. You can also mix and match different items to create your own unique look. You can also use stickers to decorate your items and express yourself.

      -

      Unlock awesome boost cases and items

      -

      Bhop Pro has a system of boost cases that you can open to get random items and bonuses. You can get skins, knives, gloves, spinners, stickers, coins, gems, and more. You can also get boost items that can help you in the game, such as speed boost, jump boost, gravity boost, and more. You can use these items to enhance your bhop skills and have more fun.

      -

      Share your in-game moments with screenshots

      -

      Bhop Pro has a feature that lets you take screenshots of your gameplay and share them with other players. You can capture your best moments, such as completing a difficult map, doing a cool trick, or getting a high score. You can also edit your screenshots with filters, stickers, and text. You can then share your screenshots on social media or in the game chat.

      -

      What is Bhop Pro Mod APK Free Shopping?

      -

      Bhop Pro Mod APK Free Shopping is a modified version of Bhop Pro that gives you unlimited money and cases. This means that you can buy and unlock all the items in the game without spending any real money. You can also open as many boost cases as you want and get all the boost items you need. Bhop Pro Mod APK Free Shopping also removes all the ads and in-app purchases from the game, so you can enjoy it without any interruptions or distractions.

      -

      Benefits of Bhop Pro Mod APK Free Shopping

      -

      Bhop Pro Mod APK Free Shopping has many benefits that make it a better option than the original version of Bhop Pro. Here are some of them:

      -

      Get access to all the skins, knives, gloves, spinners, and more

      -

      With Bhop Pro Mod APK Free Shopping, you can get all the items in the game for free. You don't have to spend any coins or gems to buy them. You don't have to wait for them to drop from boost cases. You don't have to watch ads or complete offers to get them. You can simply choose any item you want from the shop and equip it on your character.

      -

      Enjoy the game without ads or in-app purchases

      -

      Bhop Pro Mod APK Free Shopping removes all the ads and in-app purchases from the game. This means that you won't see any annoying banners or pop-ups while playing. You won't be asked to watch videos or rate the game to get rewards. You won't be tempted to spend real money to get more coins or gems. You can just focus on the game and have fun.

      -

      Enhance your bhop skills with unlimited boost cases

      -

      Bhop Pro Mod APK Free Shopping gives you unlimited boost cases that you can open anytime you want. This means that you can get unlimited boost items that can help you in the game. You can use speed boost to run faster, jump boost to jump higher, gravity boost to fly lower, and more. You can also use these items to experiment with different movements and tricks.

      -

      How to Download and Install Bhop Pro Mod APK Free Shopping?

      -

If you want to download and install Bhop Pro Mod APK Free Shopping on your android device, you need to follow these steps (a quick file-verification sketch follows the list):

      -

      Steps to download and install the mod apk file

      -
        -
      1. Go to [this link] and download the mod apk file of Bhop Pro Mod APK Free Shopping. You can find the download link at the end of this article.
      2. -
      3. After downloading the mod apk file, go to your device settings and enable the installation of apps from unknown sources. This will allow you to install the mod apk file on your device.
      4. -
      5. Locate the mod apk file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
      6. -
      7. Once the installation is done, you can launch the game and enjoy Bhop Pro Mod APK Free Shopping on your device.
      8. -
      -
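
The FAQ further down also recommends scanning any file you download before installing it. One lightweight extra check is to print the file's SHA-256 checksum and compare it against a hash published by the site you downloaded from, if one is available. This is only a minimal sketch; the filename is illustrative.

```python
# Hedged sketch: print the SHA-256 of a downloaded APK so it can be
# compared against a checksum published by a source you trust.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large files are not loaded into memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256_of("bhop-pro-mod.apk"))  # illustrative filename
```

-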

      Tips and tricks to play Bhop Pro Mod APK Free Shopping

      -

      Now that you have installed Bhop Pro Mod APK Free Shopping on your device, you might want to know some tips and tricks to play the game better. Here are some of them:

      -

      Use air strafing to gain more speed and control

      -

      Air strafing is a technique that allows you to change your direction in mid-air by using your touch screen. You can use this technique to gain more speed and control over your movements. To do this, you need to swipe left or right while jumping and tilting your device in the same direction. This will make you turn faster and accelerate in that direction. You can also use air strafing to avoid obstacles and land on platforms.

      -

      Practice on different maps and modes to improve your bhop skills

      -

      Bhop Pro has many maps and modes that you can play on, each with different challenges and difficulties. You can practice on these maps and modes to improve your bhop skills and learn new tricks. You can also try out different items and boosters to see how they affect your gameplay. You can also watch other players' replays and learn from their moves.

      -

      Use portals and random mode to spice up your gameplay

      -

      Bhop Pro has a feature that lets you use portals to teleport to different locations on the map. You can use these portals to explore new areas, find shortcuts, or surprise your opponents. You can also play random mode, where you will be teleported to a random map every time. This will make your gameplay more unpredictable and fun.

      -

      Challenge yourself with speedrun and deathrun modes

      -

      Bhop Pro has two modes that will test your bhop skills and reflexes: speedrun and deathrun. In speedrun mode, you have to finish the maps as fast as possible. You can compete with other players on the leaderboard and see who is the fastest bhopper. In deathrun mode, you have to avoid traps and reach the end alive. You can also set traps for other players and watch them fail.

      -

      Conclusion

      -

      Bhop Pro is a mobile game that simulates bunny hopping in FPS games. It has many features that make it an enjoyable and realistic bhop game for android. Bhop Pro Mod APK Free Shopping is a modified version of Bhop Pro that gives you unlimited money and cases. It also removes all the ads and in-app purchases from the game. You can download and install Bhop Pro Mod APK Free Shopping by following the steps in this article. You can also use some tips and tricks to play Bhop Pro Mod APK Free Shopping better.

      -

      FAQs about Bhop Pro Mod APK Free Shopping

      -
        -
      • Q: Is Bhop Pro Mod APK Free Shopping safe to download and install?
      • -
      • A: Yes, Bhop Pro Mod APK Free Shopping is safe to download and install. It does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source and scan it before installing it.
      • -
      • Q: Do I need to root my device to use Bhop Pro Mod APK Free Shopping?
      • -
      • A: No, you do not need to root your device to use Bhop Pro Mod APK Free Shopping. It works fine on both rooted and non-rooted devices.
      • -
      • Q: Can I play Bhop Pro Mod APK Free Shopping online with other players?
      • -
      • A: Yes, you can play Bhop Pro Mod APK Free Shopping online with other players. You can join online rooms or create your own room and invite your friends. You can also chat with other players in the game.
      • -
      • Q: How can I update Bhop Pro Mod APK Free Shopping?
      • -
      • A: To update Bhop Pro Mod APK Free Shopping, you need to download the latest version of the mod apk file from [this link] and install it over the existing one. You do not need to uninstall the previous version before installing the new one. You can also check for updates in the game settings.
      • -
      • Q: What are some alternatives to Bhop Pro Mod APK Free Shopping?
      • -
      • A: If you are looking for some alternatives to Bhop Pro Mod APK Free Shopping, you can try these games: Bhop GO, Bhop Jump, Bunny Hop League, and Surf VPN.
      • -

      -
      -
\ No newline at end of file
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/layers/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/layers/__init__.py
deleted file mode 100644
index 761a3d1c7afa049e9779ee9fc4d299e9aae38cad..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/layers/__init__.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .batch_norm import FrozenBatchNorm2d, get_norm, NaiveSyncBatchNorm, CycleBatchNormList
-from .deform_conv import DeformConv, ModulatedDeformConv
-from .mask_ops import paste_masks_in_image
-from .nms import batched_nms, batched_nms_rotated, nms, nms_rotated
-from .roi_align import ROIAlign, roi_align
-from .roi_align_rotated import ROIAlignRotated, roi_align_rotated
-from .shape_spec import ShapeSpec
-from .wrappers import (
-    BatchNorm2d,
-    Conv2d,
-    ConvTranspose2d,
-    cat,
-    interpolate,
-    Linear,
-    nonzero_tuple,
-    cross_entropy,
-    empty_input_loss_func_wrapper,
-    shapes_to_tensor,
-    move_device_like,
-)
-from .blocks import CNNBlockBase, DepthwiseSeparableConv2d
-from .aspp import ASPP
-from .losses import ciou_loss, diou_loss
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/utils/file_io.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/utils/file_io.py
deleted file mode 100644
index 09f7dffdb36199350bba57bd3b4e9e8babb40594..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/utils/file_io.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from iopath.common.file_io import HTTPURLHandler, OneDrivePathHandler, PathHandler
-from iopath.common.file_io import PathManager as PathManagerBase
-
-__all__ = ["PathManager", "PathHandler"]
-
-
-PathManager = PathManagerBase()
-"""
-This is a detectron2 project-specific PathManager.
-We try to stay away from global PathManager in fvcore as it
-introduces potential conflicts among other libraries.
-"""
-
-
-class Detectron2Handler(PathHandler):
-    """
-    Resolve anything that's hosted under detectron2's namespace.
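-
-    For example (illustrative path, not guaranteed to exist), a URI like
-    ``detectron2://ImageNetPretrained/MSRA/R-50.pkl`` is resolved by stripping
-    ``PREFIX`` and prepending ``S3_DETECTRON2_PREFIX``, i.e. it is fetched from
-    ``https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/MSRA/R-50.pkl``.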
- """ - - PREFIX = "detectron2://" - S3_DETECTRON2_PREFIX = "https://dl.fbaipublicfiles.com/detectron2/" - - def _get_supported_prefixes(self): - return [self.PREFIX] - - def _get_local_path(self, path, **kwargs): - name = path[len(self.PREFIX) :] - return PathManager.get_local_path(self.S3_DETECTRON2_PREFIX + name, **kwargs) - - def _open(self, path, mode="r", **kwargs): - return PathManager.open( - self.S3_DETECTRON2_PREFIX + path[len(self.PREFIX) :], mode, **kwargs - ) - - -PathManager.register_handler(HTTPURLHandler()) -PathManager.register_handler(OneDrivePathHandler()) -PathManager.register_handler(Detectron2Handler()) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/lib_task_api/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierQuantizedEfficientNet.java b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/lib_task_api/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierQuantizedEfficientNet.java deleted file mode 100644 index 05ca4fa6c409d0274a396c9b26c3c39ca8a8194e..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/lib_task_api/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierQuantizedEfficientNet.java +++ /dev/null @@ -1,43 +0,0 @@ -/* Copyright 2017 The TensorFlow Authors. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - -package org.tensorflow.lite.examples.classification.tflite; - -import android.app.Activity; -import java.io.IOException; - -/** This TensorFlow Lite classifier works with the quantized EfficientNet model. */ -public class ClassifierQuantizedEfficientNet extends Classifier { - - /** - * Initializes a {@code ClassifierQuantizedMobileNet}. - * - * @param device a {@link Device} object to configure the hardware accelerator - * @param numThreads the number of threads during the inference - * @throws IOException if the model is not loaded correctly - */ - public ClassifierQuantizedEfficientNet(Activity activity, Device device, int numThreads) - throws IOException { - super(activity, device, numThreads); - } - - @Override - protected String getModelPath() { - // you can download this file from - // see build.gradle for where to obtain this file. It should be auto - // downloaded into assets. 
- return "efficientnet-lite0-int8.tflite"; - } -} diff --git a/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/data/BaseDataset.py b/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/data/BaseDataset.py deleted file mode 100644 index 2d3e842341ecd51514ac96ce51a13fcaa12d1733..0000000000000000000000000000000000000000 --- a/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/data/BaseDataset.py +++ /dev/null @@ -1,46 +0,0 @@ -from torch.utils.data import Dataset -import random - - -class BaseDataset(Dataset): - ''' - This is the Base Datasets. - Itself does nothing and is not runnable. - Check self.get_item function to see what it should return. - ''' - - @staticmethod - def modify_commandline_options(parser, is_train): - return parser - - def __init__(self, opt, phase='train'): - self.opt = opt - self.is_train = self.phase == 'train' - self.projection_mode = 'orthogonal' # Declare projection mode here - - def __len__(self): - return 0 - - def get_item(self, index): - # In case of a missing file or IO error, switch to a random sample instead - try: - res = { - 'name': None, # name of this subject - 'b_min': None, # Bounding box (x_min, y_min, z_min) of target space - 'b_max': None, # Bounding box (x_max, y_max, z_max) of target space - - 'samples': None, # [3, N] samples - 'labels': None, # [1, N] labels - - 'img': None, # [num_views, C, H, W] input images - 'calib': None, # [num_views, 4, 4] calibration matrix - 'extrinsic': None, # [num_views, 4, 4] extrinsic matrix - 'mask': None, # [num_views, 1, H, W] segmentation masks - } - return res - except: - print("Requested index %s has missing files. Using a random sample instead." % index) - return self.get_item(index=random.randint(0, self.__len__() - 1)) - - def __getitem__(self, index): - return self.get_item(index) diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/options/__init__.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/options/__init__.py deleted file mode 100644 index e7eedebe54aa70169fd25951b3034d819e396c90..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/options/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""This package options includes option modules: training options, test options, and basic options (used in both training and test).""" diff --git a/spaces/davertor/colorizing_images/deoldify/_device.py b/spaces/davertor/colorizing_images/deoldify/_device.py deleted file mode 100644 index ed40ce131e3375a937c862fafa44e432f825f93b..0000000000000000000000000000000000000000 --- a/spaces/davertor/colorizing_images/deoldify/_device.py +++ /dev/null @@ -1,30 +0,0 @@ -import os -from enum import Enum -from .device_id import DeviceId - -#NOTE: This must be called first before any torch imports in order to work properly! - -class DeviceException(Exception): - pass - -class _Device: - def __init__(self): - self.set(DeviceId.CPU) - - def is_gpu(self): - ''' Returns `True` if the current device is GPU, `False` otherwise. 
''' - return self.current() is not DeviceId.CPU - - def current(self): - return self._current_device - - def set(self, device:DeviceId): - if device == DeviceId.CPU: - os.environ['CUDA_VISIBLE_DEVICES']='' - else: - os.environ['CUDA_VISIBLE_DEVICES']=str(device.value) - import torch - torch.backends.cudnn.benchmark=False - - self._current_device = device - return device \ No newline at end of file diff --git a/spaces/davila7/semantic-search/utils.py b/spaces/davila7/semantic-search/utils.py deleted file mode 100644 index 1e958521394e45586d98509a89779f3a949b5e4b..0000000000000000000000000000000000000000 --- a/spaces/davila7/semantic-search/utils.py +++ /dev/null @@ -1,151 +0,0 @@ -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.vectorstores.faiss import FAISS -from langchain import OpenAI, Cohere -from langchain.chains.qa_with_sources import load_qa_with_sources_chain -from embeddings import OpenAIEmbeddings -from langchain.llms import OpenAI -from langchain.docstore.document import Document -from langchain.vectorstores import FAISS, VectorStore -import docx2txt -from typing import List, Dict, Any -import re -import numpy as np -from io import StringIO -from io import BytesIO -import streamlit as st -from pypdf import PdfReader -from openai.error import AuthenticationError - -@st.experimental_memo() -def parse_docx(file: BytesIO) -> str: - text = docx2txt.process(file) - # Remove multiple newlines - text = re.sub(r"\n\s*\n", "\n\n", text) - return text - - -@st.experimental_memo() -def parse_pdf(file: BytesIO) -> List[str]: - pdf = PdfReader(file) - output = [] - for page in pdf.pages: - text = page.extract_text() - # Merge hyphenated words - text = re.sub(r"(\w+)-\n(\w+)", r"\1\2", text) - # Fix newlines in the middle of sentences - text = re.sub(r"(? 
<!\n\s)\n(?!\s\n)", " ", text.strip()) - # Remove multiple newlines - text = re.sub(r"\n\s*\n", "\n\n", text) - output.append(text) - - return output - - -@st.experimental_memo() -def parse_txt(file: BytesIO) ->
str: - text = file.read().decode("utf-8") - # Remove multiple newlines - text = re.sub(r"\n\s*\n", "\n\n", text) - return text - -@st.experimental_memo() -def parse_csv(uploaded_file): - # To read file as bytes: - #bytes_data = uploaded_file.getvalue() - #st.write(bytes_data) - - # To convert to a string based IO: - stringio = StringIO(uploaded_file.getvalue().decode("utf-8")) - #st.write(stringio) - - # To read file as string: - string_data = stringio.read() - #st.write(string_data) - - # Can be used wherever a "file-like" object is accepted: - # dataframe = pd.read_csv(uploaded_file) - return string_data - - -@st.cache(allow_output_mutation=True) -def text_to_docs(text: str) -> List[Document]: - """Converts a string or list of strings to a list of Documents - with metadata.""" - if isinstance(text, str): - # Take a single string as one page - text = [text] - page_docs = [Document(page_content=page) for page in text] - - # Add page numbers as metadata - for i, doc in enumerate(page_docs): - doc.metadata["page"] = i + 1 - - # Split pages into chunks - doc_chunks = [] - - for doc in page_docs: - text_splitter = RecursiveCharacterTextSplitter( - chunk_size=800, - separators=["\n\n", "\n", ".", "!", "?", ",", " ", ""], - chunk_overlap=0, - ) - chunks = text_splitter.split_text(doc.page_content) - for i, chunk in enumerate(chunks): - doc = Document( - page_content=chunk, metadata={"page": doc.metadata["page"], "chunk": i} - ) - # Add sources a metadata - doc.metadata["source"] = f"{doc.metadata['page']}-{doc.metadata['chunk']}" - doc_chunks.append(doc) - return doc_chunks - - -@st.cache(allow_output_mutation=True, show_spinner=False) -def embed_docs(docs: List[Document]) -> VectorStore: - """Embeds a list of Documents and returns a FAISS index""" - - if not st.session_state.get("OPENAI_API_KEY"): - raise AuthenticationError( - "Enter your OpenAI API key in the sidebar. You can get a key at https://platform.openai.com/account/api-keys." - ) - else: - # Embed the chunks - embeddings = OpenAIEmbeddings(openai_api_key=st.session_state.get("OPENAI_API_KEY")) # type: ignore - index = FAISS.from_documents(docs, embeddings) - - # creamos un array para guardar index y guardar embeddings - result = [index, embeddings] - return result - - -@st.cache(allow_output_mutation=True) -def search_docs(index: VectorStore, query: str) -> List[Document]: - """Searches a FAISS index for similar chunks to the query - and returns a list of Documents.""" - - # Search for similar chunks - docs = index.similarity_search(query, k=5) - return docs - -@st.cache(allow_output_mutation=True) -def get_sources(answer: Dict[str, Any], docs: List[Document]) -> List[Document]: - """Gets the source documents for an answer.""" - - # Get sources for the answer - source_keys = [s for s in answer["output_text"].split("SOURCES: ")[-1].split(", ")] - - source_docs = [] - for doc in docs: - if doc.metadata["source"] in source_keys: - source_docs.append(doc) - - return source_docs - - -def wrap_text_in_html(text: str) -> str: - """Wraps each text block separated by newlines in

      tags""" - if isinstance(text, list): - # Add horizontal rules between pages - text = "\n


      \n".join(text) - return "".join([f"

      {line}

      " for line in text.split("\n")]) \ No newline at end of file diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/ops/fused_act/src/fused_bias_act.cpp b/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/ops/fused_act/src/fused_bias_act.cpp deleted file mode 100644 index 85ed0a79fb9c75f83470ac834090f03608d998ee..0000000000000000000000000000000000000000 --- a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/ops/fused_act/src/fused_bias_act.cpp +++ /dev/null @@ -1,26 +0,0 @@ -// from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_bias_act.cpp -#include - - -torch::Tensor fused_bias_act_op(const torch::Tensor& input, - const torch::Tensor& bias, - const torch::Tensor& refer, - int act, int grad, float alpha, float scale); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor fused_bias_act(const torch::Tensor& input, - const torch::Tensor& bias, - const torch::Tensor& refer, - int act, int grad, float alpha, float scale) { - CHECK_CUDA(input); - CHECK_CUDA(bias); - - return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)"); -} diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/git.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/git.py deleted file mode 100644 index 80c73e066d83211da6cfb2940edf97ab5cfe0789..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/git.py +++ /dev/null @@ -1,127 +0,0 @@ -import os - -import pygit2 - -from fsspec.spec import AbstractFileSystem - -from .memory import MemoryFile - - -class GitFileSystem(AbstractFileSystem): - """Browse the files of a local git repo at any hash/tag/branch - - (experimental backend) - """ - - root_marker = "" - cachable = True - - def __init__(self, path=None, fo=None, ref=None, **kwargs): - """ - - Parameters - ---------- - path: str (optional) - Local location of the repo (uses current directory if not given). - May be deprecated in favour of ``fo``. When used with a higher - level function such as fsspec.open(), may be of the form - "git://[path-to-repo[:]][ref@]path/to/file" (but the actual - file path should not contain "@" or ":"). - fo: str (optional) - Same as ``path``, but passed as part of a chained URL. This one - takes precedence if both are given. - ref: str (optional) - Reference to work with, could be a hash, tag or branch name. Defaults - to current working tree. 
Note that ``ls`` and ``open`` also take hash, - so this becomes the default for those operations - kwargs - """ - super().__init__(**kwargs) - self.repo = pygit2.Repository(fo or path or os.getcwd()) - self.ref = ref or "master" - - @classmethod - def _strip_protocol(cls, path): - path = super()._strip_protocol(path).lstrip("/") - if ":" in path: - path = path.split(":", 1)[1] - if "@" in path: - path = path.split("@", 1)[1] - return path.lstrip("/") - - def _path_to_object(self, path, ref): - comm, ref = self.repo.resolve_refish(ref or self.ref) - parts = path.split("/") - tree = comm.tree - for part in parts: - if part and isinstance(tree, pygit2.Tree): - tree = tree[part] - return tree - - @staticmethod - def _get_kwargs_from_urls(path): - if path.startswith("git://"): - path = path[6:] - out = {} - if ":" in path: - out["path"], path = path.split(":", 1) - if "@" in path: - out["ref"], path = path.split("@", 1) - return out - - def ls(self, path, detail=True, ref=None, **kwargs): - path = self._strip_protocol(path) - tree = self._path_to_object(path, ref) - if isinstance(tree, pygit2.Tree): - out = [] - for obj in tree: - if isinstance(obj, pygit2.Tree): - out.append( - { - "type": "directory", - "name": "/".join([path, obj.name]).lstrip("/"), - "hex": obj.hex, - "mode": "%o" % obj.filemode, - "size": 0, - } - ) - else: - out.append( - { - "type": "file", - "name": "/".join([path, obj.name]).lstrip("/"), - "hex": obj.hex, - "mode": "%o" % obj.filemode, - "size": obj.size, - } - ) - else: - obj = tree - out = [ - { - "type": "file", - "name": obj.name, - "hex": obj.hex, - "mode": "%o" % obj.filemode, - "size": obj.size, - } - ] - if detail: - return out - return [o["name"] for o in out] - - def ukey(self, path, ref=None): - return self.info(path, ref=ref)["hex"] - - def _open( - self, - path, - mode="rb", - block_size=None, - autocommit=True, - cache_options=None, - ref=None, - **kwargs, - ): - obj = self._path_to_object(path, ref or self.ref) - return MemoryFile(data=obj.data) diff --git a/spaces/dcq/freegpt-webui/Dockerfile b/spaces/dcq/freegpt-webui/Dockerfile deleted file mode 100644 index c7244a752d721f25bfb00bbee676389b4deeb25c..0000000000000000000000000000000000000000 --- a/spaces/dcq/freegpt-webui/Dockerfile +++ /dev/null @@ -1,17 +0,0 @@ -FROM python:3.10-slim-buster - -WORKDIR /app - -COPY requirements.txt requirements.txt - -# Criar ambiente virtual -RUN python -m venv venv -ENV PATH="/app/venv/bin:$PATH" - -RUN apt-get update && \ - apt-get install -y --no-install-recommends build-essential libffi-dev cmake libcurl4-openssl-dev && \ - pip3 install --no-cache-dir -r requirements.txt - -COPY . . 
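For reference, the GitFileSystem above is registered under fsspec's "git" protocol; a minimal usage sketch (assumes pygit2 is installed and a local repository with a "main" branch exists at the hypothetical path below):

import fsspec

# Keyword form; the chained-URL form "git://[path-to-repo[:]][ref@]path/to/file"
# is parsed by GitFileSystem._get_kwargs_from_urls.
fs = fsspec.filesystem("git", path="/tmp/myrepo", ref="main")
print(fs.ls("", detail=False))   # entry names at the repo root for this ref
with fs.open("README.md") as f:  # contents are served from the git object store
    print(f.read()[:80])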
- -CMD ["python3", "./run.py"] \ No newline at end of file diff --git a/spaces/declare-lab/tango/diffusers/scripts/convert_music_spectrogram_to_diffusers.py b/spaces/declare-lab/tango/diffusers/scripts/convert_music_spectrogram_to_diffusers.py deleted file mode 100644 index 41ee8b914774de09193f866c406057a92744bf51..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/scripts/convert_music_spectrogram_to_diffusers.py +++ /dev/null @@ -1,213 +0,0 @@ -#!/usr/bin/env python3 -import argparse -import os - -import jax as jnp -import numpy as onp -import torch -import torch.nn as nn -from music_spectrogram_diffusion import inference -from t5x import checkpoints - -from diffusers import DDPMScheduler, OnnxRuntimeModel, SpectrogramDiffusionPipeline -from diffusers.pipelines.spectrogram_diffusion import SpectrogramContEncoder, SpectrogramNotesEncoder, T5FilmDecoder - - -MODEL = "base_with_context" - - -def load_notes_encoder(weights, model): - model.token_embedder.weight = nn.Parameter(torch.FloatTensor(weights["token_embedder"]["embedding"])) - model.position_encoding.weight = nn.Parameter( - torch.FloatTensor(weights["Embed_0"]["embedding"]), requires_grad=False - ) - for lyr_num, lyr in enumerate(model.encoders): - ly_weight = weights[f"layers_{lyr_num}"] - lyr.layer[0].layer_norm.weight = nn.Parameter( - torch.FloatTensor(ly_weight["pre_attention_layer_norm"]["scale"]) - ) - - attention_weights = ly_weight["attention"] - lyr.layer[0].SelfAttention.q.weight = nn.Parameter(torch.FloatTensor(attention_weights["query"]["kernel"].T)) - lyr.layer[0].SelfAttention.k.weight = nn.Parameter(torch.FloatTensor(attention_weights["key"]["kernel"].T)) - lyr.layer[0].SelfAttention.v.weight = nn.Parameter(torch.FloatTensor(attention_weights["value"]["kernel"].T)) - lyr.layer[0].SelfAttention.o.weight = nn.Parameter(torch.FloatTensor(attention_weights["out"]["kernel"].T)) - - lyr.layer[1].layer_norm.weight = nn.Parameter(torch.FloatTensor(ly_weight["pre_mlp_layer_norm"]["scale"])) - - lyr.layer[1].DenseReluDense.wi_0.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wi_0"]["kernel"].T)) - lyr.layer[1].DenseReluDense.wi_1.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wi_1"]["kernel"].T)) - lyr.layer[1].DenseReluDense.wo.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wo"]["kernel"].T)) - - model.layer_norm.weight = nn.Parameter(torch.FloatTensor(weights["encoder_norm"]["scale"])) - return model - - -def load_continuous_encoder(weights, model): - model.input_proj.weight = nn.Parameter(torch.FloatTensor(weights["input_proj"]["kernel"].T)) - - model.position_encoding.weight = nn.Parameter( - torch.FloatTensor(weights["Embed_0"]["embedding"]), requires_grad=False - ) - - for lyr_num, lyr in enumerate(model.encoders): - ly_weight = weights[f"layers_{lyr_num}"] - attention_weights = ly_weight["attention"] - - lyr.layer[0].SelfAttention.q.weight = nn.Parameter(torch.FloatTensor(attention_weights["query"]["kernel"].T)) - lyr.layer[0].SelfAttention.k.weight = nn.Parameter(torch.FloatTensor(attention_weights["key"]["kernel"].T)) - lyr.layer[0].SelfAttention.v.weight = nn.Parameter(torch.FloatTensor(attention_weights["value"]["kernel"].T)) - lyr.layer[0].SelfAttention.o.weight = nn.Parameter(torch.FloatTensor(attention_weights["out"]["kernel"].T)) - lyr.layer[0].layer_norm.weight = nn.Parameter( - torch.FloatTensor(ly_weight["pre_attention_layer_norm"]["scale"]) - ) - - lyr.layer[1].DenseReluDense.wi_0.weight = 
nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wi_0"]["kernel"].T)) - lyr.layer[1].DenseReluDense.wi_1.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wi_1"]["kernel"].T)) - lyr.layer[1].DenseReluDense.wo.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wo"]["kernel"].T)) - lyr.layer[1].layer_norm.weight = nn.Parameter(torch.FloatTensor(ly_weight["pre_mlp_layer_norm"]["scale"])) - - model.layer_norm.weight = nn.Parameter(torch.FloatTensor(weights["encoder_norm"]["scale"])) - - return model - - -def load_decoder(weights, model): - model.conditioning_emb[0].weight = nn.Parameter(torch.FloatTensor(weights["time_emb_dense0"]["kernel"].T)) - model.conditioning_emb[2].weight = nn.Parameter(torch.FloatTensor(weights["time_emb_dense1"]["kernel"].T)) - - model.position_encoding.weight = nn.Parameter( - torch.FloatTensor(weights["Embed_0"]["embedding"]), requires_grad=False - ) - - model.continuous_inputs_projection.weight = nn.Parameter( - torch.FloatTensor(weights["continuous_inputs_projection"]["kernel"].T) - ) - - for lyr_num, lyr in enumerate(model.decoders): - ly_weight = weights[f"layers_{lyr_num}"] - lyr.layer[0].layer_norm.weight = nn.Parameter( - torch.FloatTensor(ly_weight["pre_self_attention_layer_norm"]["scale"]) - ) - - lyr.layer[0].FiLMLayer.scale_bias.weight = nn.Parameter( - torch.FloatTensor(ly_weight["FiLMLayer_0"]["DenseGeneral_0"]["kernel"].T) - ) - - attention_weights = ly_weight["self_attention"] - lyr.layer[0].attention.to_q.weight = nn.Parameter(torch.FloatTensor(attention_weights["query"]["kernel"].T)) - lyr.layer[0].attention.to_k.weight = nn.Parameter(torch.FloatTensor(attention_weights["key"]["kernel"].T)) - lyr.layer[0].attention.to_v.weight = nn.Parameter(torch.FloatTensor(attention_weights["value"]["kernel"].T)) - lyr.layer[0].attention.to_out[0].weight = nn.Parameter(torch.FloatTensor(attention_weights["out"]["kernel"].T)) - - attention_weights = ly_weight["MultiHeadDotProductAttention_0"] - lyr.layer[1].attention.to_q.weight = nn.Parameter(torch.FloatTensor(attention_weights["query"]["kernel"].T)) - lyr.layer[1].attention.to_k.weight = nn.Parameter(torch.FloatTensor(attention_weights["key"]["kernel"].T)) - lyr.layer[1].attention.to_v.weight = nn.Parameter(torch.FloatTensor(attention_weights["value"]["kernel"].T)) - lyr.layer[1].attention.to_out[0].weight = nn.Parameter(torch.FloatTensor(attention_weights["out"]["kernel"].T)) - lyr.layer[1].layer_norm.weight = nn.Parameter( - torch.FloatTensor(ly_weight["pre_cross_attention_layer_norm"]["scale"]) - ) - - lyr.layer[2].layer_norm.weight = nn.Parameter(torch.FloatTensor(ly_weight["pre_mlp_layer_norm"]["scale"])) - lyr.layer[2].film.scale_bias.weight = nn.Parameter( - torch.FloatTensor(ly_weight["FiLMLayer_1"]["DenseGeneral_0"]["kernel"].T) - ) - lyr.layer[2].DenseReluDense.wi_0.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wi_0"]["kernel"].T)) - lyr.layer[2].DenseReluDense.wi_1.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wi_1"]["kernel"].T)) - lyr.layer[2].DenseReluDense.wo.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wo"]["kernel"].T)) - - model.decoder_norm.weight = nn.Parameter(torch.FloatTensor(weights["decoder_norm"]["scale"])) - - model.spec_out.weight = nn.Parameter(torch.FloatTensor(weights["spec_out_dense"]["kernel"].T)) - - return model - - -def main(args): - t5_checkpoint = checkpoints.load_t5x_checkpoint(args.checkpoint_path) - t5_checkpoint = jnp.tree_util.tree_map(onp.array, t5_checkpoint) - - gin_overrides = [ - "from __gin__ import 
dynamic_registration", - "from music_spectrogram_diffusion.models.diffusion import diffusion_utils", - "diffusion_utils.ClassifierFreeGuidanceConfig.eval_condition_weight = 2.0", - "diffusion_utils.DiffusionConfig.classifier_free_guidance = @diffusion_utils.ClassifierFreeGuidanceConfig()", - ] - - gin_file = os.path.join(args.checkpoint_path, "..", "config.gin") - gin_config = inference.parse_training_gin_file(gin_file, gin_overrides) - synth_model = inference.InferenceModel(args.checkpoint_path, gin_config) - - scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2", variance_type="fixed_large") - - notes_encoder = SpectrogramNotesEncoder( - max_length=synth_model.sequence_length["inputs"], - vocab_size=synth_model.model.module.config.vocab_size, - d_model=synth_model.model.module.config.emb_dim, - dropout_rate=synth_model.model.module.config.dropout_rate, - num_layers=synth_model.model.module.config.num_encoder_layers, - num_heads=synth_model.model.module.config.num_heads, - d_kv=synth_model.model.module.config.head_dim, - d_ff=synth_model.model.module.config.mlp_dim, - feed_forward_proj="gated-gelu", - ) - - continuous_encoder = SpectrogramContEncoder( - input_dims=synth_model.audio_codec.n_dims, - targets_context_length=synth_model.sequence_length["targets_context"], - d_model=synth_model.model.module.config.emb_dim, - dropout_rate=synth_model.model.module.config.dropout_rate, - num_layers=synth_model.model.module.config.num_encoder_layers, - num_heads=synth_model.model.module.config.num_heads, - d_kv=synth_model.model.module.config.head_dim, - d_ff=synth_model.model.module.config.mlp_dim, - feed_forward_proj="gated-gelu", - ) - - decoder = T5FilmDecoder( - input_dims=synth_model.audio_codec.n_dims, - targets_length=synth_model.sequence_length["targets_context"], - max_decoder_noise_time=synth_model.model.module.config.max_decoder_noise_time, - d_model=synth_model.model.module.config.emb_dim, - num_layers=synth_model.model.module.config.num_decoder_layers, - num_heads=synth_model.model.module.config.num_heads, - d_kv=synth_model.model.module.config.head_dim, - d_ff=synth_model.model.module.config.mlp_dim, - dropout_rate=synth_model.model.module.config.dropout_rate, - ) - - notes_encoder = load_notes_encoder(t5_checkpoint["target"]["token_encoder"], notes_encoder) - continuous_encoder = load_continuous_encoder(t5_checkpoint["target"]["continuous_encoder"], continuous_encoder) - decoder = load_decoder(t5_checkpoint["target"]["decoder"], decoder) - - melgan = OnnxRuntimeModel.from_pretrained("kashif/soundstream_mel_decoder") - - pipe = SpectrogramDiffusionPipeline( - notes_encoder=notes_encoder, - continuous_encoder=continuous_encoder, - decoder=decoder, - scheduler=scheduler, - melgan=melgan, - ) - if args.save: - pipe.save_pretrained(args.output_path) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument("--output_path", default=None, type=str, required=True, help="Path to the converted model.") - parser.add_argument( - "--save", default=True, type=bool, required=False, help="Whether to save the converted model or not." 
- ) - parser.add_argument( - "--checkpoint_path", - default=f"{MODEL}/checkpoint_500000", - type=str, - required=False, - help="Path to the original jax model checkpoint.", - ) - args = parser.parse_args() - - main(args) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/models/prior_transformer.py b/spaces/declare-lab/tango/diffusers/src/diffusers/models/prior_transformer.py deleted file mode 100644 index b245612e6fc16800cd6f0cb2560d681f1360d60b..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/models/prior_transformer.py +++ /dev/null @@ -1,194 +0,0 @@ -from dataclasses import dataclass -from typing import Optional, Union - -import torch -import torch.nn.functional as F -from torch import nn - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput -from .attention import BasicTransformerBlock -from .embeddings import TimestepEmbedding, Timesteps -from .modeling_utils import ModelMixin - - -@dataclass -class PriorTransformerOutput(BaseOutput): - """ - Args: - predicted_image_embedding (`torch.FloatTensor` of shape `(batch_size, embedding_dim)`): - The predicted CLIP image embedding conditioned on the CLIP text embedding input. - """ - - predicted_image_embedding: torch.FloatTensor - - -class PriorTransformer(ModelMixin, ConfigMixin): - """ - The prior transformer from unCLIP is used to predict CLIP image embeddings from CLIP text embeddings. Note that the - transformer predicts the image embeddings through a denoising diffusion process. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library - implements for all the models (such as downloading or saving, etc.) - - For more details, see the original paper: https://arxiv.org/abs/2204.06125 - - Parameters: - num_attention_heads (`int`, *optional*, defaults to 32): The number of heads to use for multi-head attention. - attention_head_dim (`int`, *optional*, defaults to 64): The number of channels in each head. - num_layers (`int`, *optional*, defaults to 20): The number of layers of Transformer blocks to use. - embedding_dim (`int`, *optional*, defaults to 768): The dimension of the CLIP embeddings. Note that CLIP - image embeddings and text embeddings are both the same dimension. - num_embeddings (`int`, *optional*, defaults to 77): The max number of clip embeddings allowed. I.e. the - length of the prompt after it has been tokenized. - additional_embeddings (`int`, *optional*, defaults to 4): The number of additional tokens appended to the - projected hidden_states. The actual length of the used hidden_states is `num_embeddings + - additional_embeddings`. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. 
- - """ - - @register_to_config - def __init__( - self, - num_attention_heads: int = 32, - attention_head_dim: int = 64, - num_layers: int = 20, - embedding_dim: int = 768, - num_embeddings=77, - additional_embeddings=4, - dropout: float = 0.0, - ): - super().__init__() - self.num_attention_heads = num_attention_heads - self.attention_head_dim = attention_head_dim - inner_dim = num_attention_heads * attention_head_dim - self.additional_embeddings = additional_embeddings - - self.time_proj = Timesteps(inner_dim, True, 0) - self.time_embedding = TimestepEmbedding(inner_dim, inner_dim) - - self.proj_in = nn.Linear(embedding_dim, inner_dim) - - self.embedding_proj = nn.Linear(embedding_dim, inner_dim) - self.encoder_hidden_states_proj = nn.Linear(embedding_dim, inner_dim) - - self.positional_embedding = nn.Parameter(torch.zeros(1, num_embeddings + additional_embeddings, inner_dim)) - - self.prd_embedding = nn.Parameter(torch.zeros(1, 1, inner_dim)) - - self.transformer_blocks = nn.ModuleList( - [ - BasicTransformerBlock( - inner_dim, - num_attention_heads, - attention_head_dim, - dropout=dropout, - activation_fn="gelu", - attention_bias=True, - ) - for d in range(num_layers) - ] - ) - - self.norm_out = nn.LayerNorm(inner_dim) - self.proj_to_clip_embeddings = nn.Linear(inner_dim, embedding_dim) - - causal_attention_mask = torch.full( - [num_embeddings + additional_embeddings, num_embeddings + additional_embeddings], -10000.0 - ) - causal_attention_mask.triu_(1) - causal_attention_mask = causal_attention_mask[None, ...] - self.register_buffer("causal_attention_mask", causal_attention_mask, persistent=False) - - self.clip_mean = nn.Parameter(torch.zeros(1, embedding_dim)) - self.clip_std = nn.Parameter(torch.zeros(1, embedding_dim)) - - def forward( - self, - hidden_states, - timestep: Union[torch.Tensor, float, int], - proj_embedding: torch.FloatTensor, - encoder_hidden_states: torch.FloatTensor, - attention_mask: Optional[torch.BoolTensor] = None, - return_dict: bool = True, - ): - """ - Args: - hidden_states (`torch.FloatTensor` of shape `(batch_size, embedding_dim)`): - x_t, the currently predicted image embeddings. - timestep (`torch.long`): - Current denoising step. - proj_embedding (`torch.FloatTensor` of shape `(batch_size, embedding_dim)`): - Projected embedding vector the denoising process is conditioned on. - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, num_embeddings, embedding_dim)`): - Hidden states of the text embeddings the denoising process is conditioned on. - attention_mask (`torch.BoolTensor` of shape `(batch_size, num_embeddings)`): - Text mask for the text embeddings. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`models.prior_transformer.PriorTransformerOutput`] instead of a plain - tuple. - - Returns: - [`~models.prior_transformer.PriorTransformerOutput`] or `tuple`: - [`~models.prior_transformer.PriorTransformerOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. 
- """ - batch_size = hidden_states.shape[0] - - timesteps = timestep - if not torch.is_tensor(timesteps): - timesteps = torch.tensor([timesteps], dtype=torch.long, device=hidden_states.device) - elif torch.is_tensor(timesteps) and len(timesteps.shape) == 0: - timesteps = timesteps[None].to(hidden_states.device) - - # broadcast to batch dimension in a way that's compatible with ONNX/Core ML - timesteps = timesteps * torch.ones(batch_size, dtype=timesteps.dtype, device=timesteps.device) - - timesteps_projected = self.time_proj(timesteps) - - # timesteps does not contain any weights and will always return f32 tensors - # but time_embedding might be fp16, so we need to cast here. - timesteps_projected = timesteps_projected.to(dtype=self.dtype) - time_embeddings = self.time_embedding(timesteps_projected) - - proj_embeddings = self.embedding_proj(proj_embedding) - encoder_hidden_states = self.encoder_hidden_states_proj(encoder_hidden_states) - hidden_states = self.proj_in(hidden_states) - prd_embedding = self.prd_embedding.to(hidden_states.dtype).expand(batch_size, -1, -1) - positional_embeddings = self.positional_embedding.to(hidden_states.dtype) - - hidden_states = torch.cat( - [ - encoder_hidden_states, - proj_embeddings[:, None, :], - time_embeddings[:, None, :], - hidden_states[:, None, :], - prd_embedding, - ], - dim=1, - ) - - hidden_states = hidden_states + positional_embeddings - - if attention_mask is not None: - attention_mask = (1 - attention_mask.to(hidden_states.dtype)) * -10000.0 - attention_mask = F.pad(attention_mask, (0, self.additional_embeddings), value=0.0) - attention_mask = (attention_mask[:, None, :] + self.causal_attention_mask).to(hidden_states.dtype) - attention_mask = attention_mask.repeat_interleave(self.config.num_attention_heads, dim=0) - - for block in self.transformer_blocks: - hidden_states = block(hidden_states, attention_mask=attention_mask) - - hidden_states = self.norm_out(hidden_states) - hidden_states = hidden_states[:, -1] - predicted_image_embedding = self.proj_to_clip_embeddings(hidden_states) - - if not return_dict: - return (predicted_image_embedding,) - - return PriorTransformerOutput(predicted_image_embedding=predicted_image_embedding) - - def post_process_latents(self, prior_latents): - prior_latents = (prior_latents * self.clip_std) + self.clip_mean - return prior_latents diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_flax_controlnet.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_flax_controlnet.py deleted file mode 100644 index 268c013201775c8c78960960669ace207670fd51..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_flax_controlnet.py +++ /dev/null @@ -1,127 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import gc -import unittest - -from diffusers import FlaxControlNetModel, FlaxStableDiffusionControlNetPipeline -from diffusers.utils import is_flax_available, load_image, slow -from diffusers.utils.testing_utils import require_flax - - -if is_flax_available(): - import jax - import jax.numpy as jnp - from flax.jax_utils import replicate - from flax.training.common_utils import shard - - -@slow -@require_flax -class FlaxStableDiffusionControlNetPipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - - def test_canny(self): - controlnet, controlnet_params = FlaxControlNetModel.from_pretrained( - "lllyasviel/sd-controlnet-canny", from_pt=True, dtype=jnp.bfloat16 - ) - pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", controlnet=controlnet, from_pt=True, dtype=jnp.bfloat16 - ) - params["controlnet"] = controlnet_params - - prompts = "bird" - num_samples = jax.device_count() - prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples) - - canny_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png" - ) - processed_image = pipe.prepare_image_inputs([canny_image] * num_samples) - - rng = jax.random.PRNGKey(0) - rng = jax.random.split(rng, jax.device_count()) - - p_params = replicate(params) - prompt_ids = shard(prompt_ids) - processed_image = shard(processed_image) - - images = pipe( - prompt_ids=prompt_ids, - image=processed_image, - params=p_params, - prng_seed=rng, - num_inference_steps=50, - jit=True, - ).images - assert images.shape == (jax.device_count(), 1, 768, 512, 3) - - images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) - image_slice = images[0, 253:256, 253:256, -1] - - output_slice = jnp.asarray(jax.device_get(image_slice.flatten())) - expected_slice = jnp.array( - [0.167969, 0.116699, 0.081543, 0.154297, 0.132812, 0.108887, 0.169922, 0.169922, 0.205078] - ) - print(f"output_slice: {output_slice}") - assert jnp.abs(output_slice - expected_slice).max() < 1e-2 - - def test_pose(self): - controlnet, controlnet_params = FlaxControlNetModel.from_pretrained( - "lllyasviel/sd-controlnet-openpose", from_pt=True, dtype=jnp.bfloat16 - ) - pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", controlnet=controlnet, from_pt=True, dtype=jnp.bfloat16 - ) - params["controlnet"] = controlnet_params - - prompts = "Chef in the kitchen" - num_samples = jax.device_count() - prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples) - - pose_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/pose.png" - ) - processed_image = pipe.prepare_image_inputs([pose_image] * num_samples) - - rng = jax.random.PRNGKey(0) - rng = jax.random.split(rng, jax.device_count()) - - p_params = replicate(params) - prompt_ids = shard(prompt_ids) - processed_image = shard(processed_image) - - images = pipe( - prompt_ids=prompt_ids, - image=processed_image, - params=p_params, - prng_seed=rng, - num_inference_steps=50, - jit=True, - ).images - assert images.shape == (jax.device_count(), 1, 768, 512, 3) - - images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) - image_slice = images[0, 253:256, 253:256, -1] - - output_slice = jnp.asarray(jax.device_get(image_slice.flatten())) - expected_slice = jnp.array( - [[0.271484, 
0.261719, 0.275391, 0.277344, 0.279297, 0.291016, 0.294922, 0.302734, 0.302734]] - ) - print(f"output_slice: {output_slice}") - assert jnp.abs(output_slice - expected_slice).max() < 1e-2 diff --git a/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_ddpm.py b/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_ddpm.py deleted file mode 100644 index b55a39ee2e79274691f5136b989cbaabb3f00932..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_ddpm.py +++ /dev/null @@ -1,131 +0,0 @@ -import torch - -from diffusers import DDPMScheduler - -from .test_schedulers import SchedulerCommonTest - - -class DDPMSchedulerTest(SchedulerCommonTest): - scheduler_classes = (DDPMScheduler,) - - def get_scheduler_config(self, **kwargs): - config = { - "num_train_timesteps": 1000, - "beta_start": 0.0001, - "beta_end": 0.02, - "beta_schedule": "linear", - "variance_type": "fixed_small", - "clip_sample": True, - } - - config.update(**kwargs) - return config - - def test_timesteps(self): - for timesteps in [1, 5, 100, 1000]: - self.check_over_configs(num_train_timesteps=timesteps) - - def test_betas(self): - for beta_start, beta_end in zip([0.0001, 0.001, 0.01, 0.1], [0.002, 0.02, 0.2, 2]): - self.check_over_configs(beta_start=beta_start, beta_end=beta_end) - - def test_schedules(self): - for schedule in ["linear", "squaredcos_cap_v2"]: - self.check_over_configs(beta_schedule=schedule) - - def test_variance_type(self): - for variance in ["fixed_small", "fixed_large", "other"]: - self.check_over_configs(variance_type=variance) - - def test_clip_sample(self): - for clip_sample in [True, False]: - self.check_over_configs(clip_sample=clip_sample) - - def test_thresholding(self): - self.check_over_configs(thresholding=False) - for threshold in [0.5, 1.0, 2.0]: - for prediction_type in ["epsilon", "sample", "v_prediction"]: - self.check_over_configs( - thresholding=True, - prediction_type=prediction_type, - sample_max_value=threshold, - ) - - def test_prediction_type(self): - for prediction_type in ["epsilon", "sample", "v_prediction"]: - self.check_over_configs(prediction_type=prediction_type) - - def test_time_indices(self): - for t in [0, 500, 999]: - self.check_over_forward(time_step=t) - - def test_variance(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - assert torch.sum(torch.abs(scheduler._get_variance(0) - 0.0)) < 1e-5 - assert torch.sum(torch.abs(scheduler._get_variance(487) - 0.00979)) < 1e-5 - assert torch.sum(torch.abs(scheduler._get_variance(999) - 0.02)) < 1e-5 - - def test_full_loop_no_noise(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - num_trained_timesteps = len(scheduler) - - model = self.dummy_model() - sample = self.dummy_sample_deter - generator = torch.manual_seed(0) - - for t in reversed(range(num_trained_timesteps)): - # 1. predict noise residual - residual = model(sample, t) - - # 2. 
predict previous mean of sample x_t-1 - pred_prev_sample = scheduler.step(residual, t, sample, generator=generator).prev_sample - - # if t > 0: - # noise = self.dummy_sample_deter - # variance = scheduler.get_variance(t) ** (0.5) * noise - # - # sample = pred_prev_sample + variance - sample = pred_prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 258.9606) < 1e-2 - assert abs(result_mean.item() - 0.3372) < 1e-3 - - def test_full_loop_with_v_prediction(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(prediction_type="v_prediction") - scheduler = scheduler_class(**scheduler_config) - - num_trained_timesteps = len(scheduler) - - model = self.dummy_model() - sample = self.dummy_sample_deter - generator = torch.manual_seed(0) - - for t in reversed(range(num_trained_timesteps)): - # 1. predict noise residual - residual = model(sample, t) - - # 2. predict previous mean of sample x_t-1 - pred_prev_sample = scheduler.step(residual, t, sample, generator=generator).prev_sample - - # if t > 0: - # noise = self.dummy_sample_deter - # variance = scheduler.get_variance(t) ** (0.5) * noise - # - # sample = pred_prev_sample + variance - sample = pred_prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 202.0296) < 1e-2 - assert abs(result_mean.item() - 0.2631) < 1e-3 diff --git a/spaces/deepkyu/multilingual-font-style-transfer/models/__init__.py b/spaces/deepkyu/multilingual-font-style-transfer/models/__init__.py deleted file mode 100644 index 2263fb29cd8b1eadddf23f105d707403508ee172..0000000000000000000000000000000000000000 --- a/spaces/deepkyu/multilingual-font-style-transfer/models/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .generator import * -from .discriminator import * \ No newline at end of file diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/utils/utils_callbacks.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/utils/utils_callbacks.py deleted file mode 100644 index bd2f56cba47c57de102710ff56eaac591e59f4da..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/utils/utils_callbacks.py +++ /dev/null @@ -1,117 +0,0 @@ -import logging -import os -import time -from typing import List - -import torch - -from eval import verification -from utils.utils_logging import AverageMeter - - -class CallBackVerification(object): - def __init__(self, frequent, rank, val_targets, rec_prefix, image_size=(112, 112)): - self.frequent: int = frequent - self.rank: int = rank - self.highest_acc: float = 0.0 - self.highest_acc_list: List[float] = [0.0] * len(val_targets) - self.ver_list: List[object] = [] - self.ver_name_list: List[str] = [] - if self.rank is 0: - self.init_dataset(val_targets=val_targets, data_dir=rec_prefix, image_size=image_size) - - def ver_test(self, backbone: torch.nn.Module, global_step: int): - results = [] - for i in range(len(self.ver_list)): - acc1, std1, acc2, std2, xnorm, embeddings_list = verification.test( - self.ver_list[i], backbone, 10, 10) - logging.info('[%s][%d]XNorm: %f' % (self.ver_name_list[i], global_step, xnorm)) - logging.info('[%s][%d]Accuracy-Flip: %1.5f+-%1.5f' % (self.ver_name_list[i], global_step, acc2, std2)) - if acc2 > self.highest_acc_list[i]: - self.highest_acc_list[i] = acc2 - 
logging.info( - '[%s][%d]Accuracy-Highest: %1.5f' % (self.ver_name_list[i], global_step, self.highest_acc_list[i])) - results.append(acc2) - - def init_dataset(self, val_targets, data_dir, image_size): - for name in val_targets: - path = os.path.join(data_dir, name + ".bin") - if os.path.exists(path): - data_set = verification.load_bin(path, image_size) - self.ver_list.append(data_set) - self.ver_name_list.append(name) - - def __call__(self, num_update, backbone: torch.nn.Module): - if self.rank is 0 and num_update > 0 and num_update % self.frequent == 0: - backbone.eval() - self.ver_test(backbone, num_update) - backbone.train() - - -class CallBackLogging(object): - def __init__(self, frequent, rank, total_step, batch_size, world_size, writer=None): - self.frequent: int = frequent - self.rank: int = rank - self.time_start = time.time() - self.total_step: int = total_step - self.batch_size: int = batch_size - self.world_size: int = world_size - self.writer = writer - - self.init = False - self.tic = 0 - - def __call__(self, - global_step: int, - loss: AverageMeter, - epoch: int, - fp16: bool, - learning_rate: float, - grad_scaler: torch.cuda.amp.GradScaler): - if self.rank == 0 and global_step > 0 and global_step % self.frequent == 0: - if self.init: - try: - speed: float = self.frequent * self.batch_size / (time.time() - self.tic) - speed_total = speed * self.world_size - except ZeroDivisionError: - speed_total = float('inf') - - time_now = (time.time() - self.time_start) / 3600 - time_total = time_now / ((global_step + 1) / self.total_step) - time_for_end = time_total - time_now - if self.writer is not None: - self.writer.add_scalar('time_for_end', time_for_end, global_step) - self.writer.add_scalar('learning_rate', learning_rate, global_step) - self.writer.add_scalar('loss', loss.avg, global_step) - if fp16: - msg = "Speed %.2f samples/sec Loss %.4f LearningRate %.4f Epoch: %d Global Step: %d " \ - "Fp16 Grad Scale: %2.f Required: %1.f hours" % ( - speed_total, loss.avg, learning_rate, epoch, global_step, - grad_scaler.get_scale(), time_for_end - ) - else: - msg = "Speed %.2f samples/sec Loss %.4f LearningRate %.4f Epoch: %d Global Step: %d " \ - "Required: %1.f hours" % ( - speed_total, loss.avg, learning_rate, epoch, global_step, time_for_end - ) - logging.info(msg) - loss.reset() - self.tic = time.time() - else: - self.init = True - self.tic = time.time() - - -class CallBackModelCheckpoint(object): - def __init__(self, rank, output="./"): - self.rank: int = rank - self.output: str = output - - def __call__(self, global_step, backbone, partial_fc, ): - if global_step > 100 and self.rank == 0: - path_module = os.path.join(self.output, "backbone.pth") - torch.save(backbone.module.state_dict(), path_module) - logging.info("Pytorch Model Saved in '{}'".format(path_module)) - - if global_step > 100 and partial_fc is not None: - partial_fc.save_params() diff --git a/spaces/devthedeveloper/Bark-with-Voice-Cloning/training/__init__.py b/spaces/devthedeveloper/Bark-with-Voice-Cloning/training/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/digitalxingtong/Nailv-read-Bert-Vits2/text/__init__.py b/spaces/digitalxingtong/Nailv-read-Bert-Vits2/text/__init__.py deleted file mode 100644 index 7566bf351ca9b95af9cdc6d729557a9da083800f..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nailv-read-Bert-Vits2/text/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from text.symbols import * - - 
-_symbol_to_id = {s: i for i, s in enumerate(symbols)} - -def cleaned_text_to_sequence(cleaned_text, tones, language): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - phones = [_symbol_to_id[symbol] for symbol in cleaned_text] - tone_start = language_tone_start_map[language] - tones = [i + tone_start for i in tones] - lang_id = language_id_map[language] - lang_ids = [lang_id for i in phones] - return phones, tones, lang_ids - -def get_bert(norm_text, word2ph, language): - from .chinese_bert import get_bert_feature as zh_bert - from .english_bert_mock import get_bert_feature as en_bert - lang_bert_func_map = { - 'ZH': zh_bert, - 'EN': en_bert - } - bert = lang_bert_func_map[language](norm_text, word2ph) - return bert diff --git a/spaces/digitalxingtong/Nanami-Bert-VITS2/modules.py b/spaces/digitalxingtong/Nanami-Bert-VITS2/modules.py deleted file mode 100644 index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nanami-Bert-VITS2/modules.py +++ /dev/null @@ -1,452 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
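# NOTE: cond_layer projects g once to 2 * hidden_channels * n_layers channels; each loop iteration below slices out its own 2 * hidden_channels block at cond_offset = i * 2 * hidden_channels.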
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
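-        # h now holds the per-position spline parameters with shape
-        # [b, half_channels, t, num_bins*3 - 1]; the widths, heights and interior
-        # derivatives are split out of the last axis below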
-
-        unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
-        unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
-        unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
-        x1, logabsdet = piecewise_rational_quadratic_transform(x1,
-            unnormalized_widths,
-            unnormalized_heights,
-            unnormalized_derivatives,
-            inverse=reverse,
-            tails='linear',
-            tail_bound=self.tail_bound
-        )
-
-        x = torch.cat([x0, x1], 1) * x_mask
-        logdet = torch.sum(logabsdet * x_mask, [1,2])
-        if not reverse:
-            return x, logdet
-        else:
-            return x
-
-
-class TransformerCouplingLayer(nn.Module):
-    def __init__(self,
-                 channels,
-                 hidden_channels,
-                 kernel_size,
-                 n_layers,
-                 n_heads,
-                 p_dropout=0,
-                 filter_channels=0,
-                 mean_only=False,
-                 wn_sharing_parameter=None,
-                 gin_channels=0
-                 ):
-        assert channels % 2 == 0, "channels should be divisible by 2"
-        super().__init__()
-        self.channels = channels
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
-        self.n_layers = n_layers
-        self.half_channels = channels // 2
-        self.mean_only = mean_only
-
-        self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
-        self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow=True, gin_channels=gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter
-        self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
-        self.post.weight.data.zero_()
-        self.post.bias.data.zero_()
-
-    def forward(self, x, x_mask, g=None, reverse=False):
-        x0, x1 = torch.split(x, [self.half_channels]*2, 1)
-        h = self.pre(x0) * x_mask
-        h = self.enc(h, x_mask, g=g)
-        stats = self.post(h) * x_mask
-        if not self.mean_only:
-            m, logs = torch.split(stats, [self.half_channels]*2, 1)
-        else:
-            m = stats
-            logs = torch.zeros_like(m)
-
-        if not reverse:
-            x1 = m + x1 * torch.exp(logs) * x_mask
-            x = torch.cat([x0, x1], 1)
-            logdet = torch.sum(logs, [1,2])
-            return x, logdet
-        else:
-            x1 = (x1 - m) * torch.exp(-logs) * x_mask
-            x = torch.cat([x0, x1], 1)
-            return x
diff --git a/spaces/digitalxingtong/Un-Bert-Vits2/utils.py b/spaces/digitalxingtong/Un-Bert-Vits2/utils.py deleted file mode 100644 index c6aa6cfc64c33e2eed33e9845239e831fc1c4a1a..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Un-Bert-Vits2/utils.py +++ /dev/null @@ -1,293 +0,0 @@
-import os
-import glob
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False):
-    assert os.path.isfile(checkpoint_path)
-    checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
-    iteration = checkpoint_dict['iteration']
-    learning_rate = checkpoint_dict['learning_rate']
-    if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None:
-        optimizer.load_state_dict(checkpoint_dict['optimizer'])
-    elif optimizer is None and not skip_optimizer:
-        # swap the `elif` condition above for a plain `else:` when running inference
-        new_opt_dict = optimizer.state_dict()
-        new_opt_dict_params = new_opt_dict['param_groups'][0]['params']
-        new_opt_dict['param_groups'] = checkpoint_dict['optimizer']['param_groups']
-        new_opt_dict['param_groups'][0]['params'] = new_opt_dict_params
-        optimizer.load_state_dict(new_opt_dict)
-    saved_state_dict = checkpoint_dict['model']
-    if hasattr(model, 'module'):
-        state_dict = model.module.state_dict()
-    else:
-        state_dict = model.state_dict()
-    new_state_dict = {}
-    for k, v in state_dict.items():
-        try:
-            # assert "emb_g" not in k
-            # print("load", k)
-            new_state_dict[k] = saved_state_dict[k]
-            assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape)
-        except Exception:
-            print("error, %s is not in the checkpoint or has a mismatched shape" % k)
-            new_state_dict[k] = v
-    if hasattr(model, 'module'):
-        model.module.load_state_dict(new_state_dict, strict=False)
-    else:
-        model.load_state_dict(new_state_dict, strict=False)
-    logger.info("Loaded checkpoint '{}' (iteration {})".format(
-        checkpoint_path, iteration))
-    return model, optimizer, learning_rate, iteration
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
-    logger.info("Saving model and optimizer state at iteration {} to {}".format(
-        iteration, checkpoint_path))
-    if hasattr(model, 'module'):
-        state_dict = model.module.state_dict()
-    else:
-        state_dict = model.state_dict()
-    torch.save({'model': state_dict,
-                'iteration': iteration,
-                'optimizer': optimizer.state_dict(),
-                'learning_rate': learning_rate}, checkpoint_path)
-
-
-def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050):
-    for k, v in scalars.items():
-        writer.add_scalar(k, v, global_step)
-    for k, v in histograms.items():
-        writer.add_histogram(k, v, global_step)
-    for k, v in images.items():
-        writer.add_image(k, v, global_step, dataformats='HWC')
-    for k, v in audios.items():
-        writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
-    f_list = glob.glob(os.path.join(dir_path, regex))
-    f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
-    x = f_list[-1]
-    print(x)
-    return x
-
-
-def plot_spectrogram_to_numpy(spectrogram):
-    global MATPLOTLIB_FLAG
-    if not MATPLOTLIB_FLAG:
-        import matplotlib
-        matplotlib.use("Agg")
-        MATPLOTLIB_FLAG = True
-        mpl_logger = logging.getLogger('matplotlib')
-        mpl_logger.setLevel(logging.WARNING)
-    import matplotlib.pylab as plt
-    import numpy as np
-
-    fig, ax = plt.subplots(figsize=(10, 2))
-    im = ax.imshow(spectrogram, aspect="auto", origin="lower",
-                   interpolation='none')
-    plt.colorbar(im, ax=ax)
-    plt.xlabel("Frames")
-    plt.ylabel("Channels")
-    plt.tight_layout()
-
-    fig.canvas.draw()
-    data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
-    data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
-    plt.close()
-    return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
-    global MATPLOTLIB_FLAG
-    if not MATPLOTLIB_FLAG:
-        import matplotlib
-        matplotlib.use("Agg")
-        MATPLOTLIB_FLAG = True
-        mpl_logger = logging.getLogger('matplotlib')
-        mpl_logger.setLevel(logging.WARNING)
-    import matplotlib.pylab as plt
-    import numpy as np
-
-    fig, ax = plt.subplots(figsize=(6, 4))
-    im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
-                   interpolation='none')
-    fig.colorbar(im, ax=ax)
-    xlabel = 'Decoder timestep'
-    if info is not None:
-        xlabel += '\n\n' + info
-    plt.xlabel(xlabel)
-
plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, default="./OUTPUT_MODEL", - help='Model name') - parser.add_argument('--cont', dest='cont', action="store_true", default=False, help="whether to continue training on the latest checkpoint") - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - hparams.cont = args.cont - return hparams - - -def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - import re - ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], - key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. 
Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/docparser/Text_Captcha_breaker/tokenizer_base.py b/spaces/docparser/Text_Captcha_breaker/tokenizer_base.py deleted file mode 100644 index cd648cecb9c09319a1ec0377dec44e69e40a4db3..0000000000000000000000000000000000000000 --- a/spaces/docparser/Text_Captcha_breaker/tokenizer_base.py +++ /dev/null @@ -1,132 +0,0 @@ -import re -from abc import ABC, abstractmethod -from itertools import groupby -from typing import List, Optional, Tuple - -import torch -from torch import Tensor -from torch.nn.utils.rnn import pad_sequence - - -class CharsetAdapter: - """Transforms labels according to the target charset.""" - - def __init__(self, target_charset) -> None: - super().__init__() - self.charset = target_charset ### - self.lowercase_only = target_charset == target_charset.lower() - self.uppercase_only = target_charset == target_charset.upper() -# self.unsupported = f'[^{re.escape(target_charset)}]' - - def __call__(self, label): - if self.lowercase_only: - label = label.lower() - elif self.uppercase_only: - label = label.upper() - return label - - -class BaseTokenizer(ABC): - - def __init__(self, charset: str, specials_first: tuple = (), specials_last: tuple = ()) -> None: - self._itos = specials_first + tuple(charset+'[UNK]') + specials_last - 
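-        # note: tuple(charset + '[UNK]') iterates over single characters, so the
-        # string '[UNK]' contributes the five symbols '[', 'U', 'N', 'K', ']'
-        # rather than one unknown-token entry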
self._stoi = {s: i for i, s in enumerate(self._itos)} - - def __len__(self): - return len(self._itos) - - def _tok2ids(self, tokens: str) -> List[int]: - return [self._stoi[s] for s in tokens] - - def _ids2tok(self, token_ids: List[int], join: bool = True) -> str: - tokens = [self._itos[i] for i in token_ids] - return ''.join(tokens) if join else tokens - - @abstractmethod - def encode(self, labels: List[str], device: Optional[torch.device] = None) -> Tensor: - """Encode a batch of labels to a representation suitable for the model. - - Args: - labels: List of labels. Each can be of arbitrary length. - device: Create tensor on this device. - - Returns: - Batched tensor representation padded to the max label length. Shape: N, L - """ - raise NotImplementedError - - @abstractmethod - def _filter(self, probs: Tensor, ids: Tensor) -> Tuple[Tensor, List[int]]: - """Internal method which performs the necessary filtering prior to decoding.""" - raise NotImplementedError - - def decode(self, token_dists: Tensor, raw: bool = False) -> Tuple[List[str], List[Tensor]]: - """Decode a batch of token distributions. - - Args: - token_dists: softmax probabilities over the token distribution. Shape: N, L, C - raw: return unprocessed labels (will return list of list of strings) - - Returns: - list of string labels (arbitrary length) and - their corresponding sequence probabilities as a list of Tensors - """ - batch_tokens = [] - batch_probs = [] - for dist in token_dists: - probs, ids = dist.max(-1) # greedy selection - if not raw: - probs, ids = self._filter(probs, ids) - tokens = self._ids2tok(ids, not raw) - batch_tokens.append(tokens) - batch_probs.append(probs) - return batch_tokens, batch_probs - - -class Tokenizer(BaseTokenizer): - BOS = '[B]' - EOS = '[E]' - PAD = '[P]' - - def __init__(self, charset: str) -> None: - specials_first = (self.EOS,) - specials_last = (self.BOS, self.PAD) - super().__init__(charset, specials_first, specials_last) - self.eos_id, self.bos_id, self.pad_id = [self._stoi[s] for s in specials_first + specials_last] - - def encode(self, labels: List[str], device: Optional[torch.device] = None) -> Tensor: - batch = [torch.as_tensor([self.bos_id] + self._tok2ids(y) + [self.eos_id], dtype=torch.long, device=device) - for y in labels] - return pad_sequence(batch, batch_first=True, padding_value=self.pad_id) - - def _filter(self, probs: Tensor, ids: Tensor) -> Tuple[Tensor, List[int]]: - ids = ids.tolist() - try: - eos_idx = ids.index(self.eos_id) - except ValueError: - eos_idx = len(ids) # Nothing to truncate. - # Truncate after EOS - ids = ids[:eos_idx] - probs = probs[:eos_idx + 1] # but include prob. 
for EOS (if it exists) - return probs, ids - - -class CTCTokenizer(BaseTokenizer): - BLANK = '[B]' - - def __init__(self, charset: str) -> None: - # BLANK uses index == 0 by default - super().__init__(charset, specials_first=(self.BLANK,)) - self.blank_id = self._stoi[self.BLANK] - - def encode(self, labels: List[str], device: Optional[torch.device] = None) -> Tensor: - # We use a padded representation since we don't want to use CUDNN's CTC implementation - batch = [torch.as_tensor(self._tok2ids(y), dtype=torch.long, device=device) for y in labels] - return pad_sequence(batch, batch_first=True, padding_value=self.blank_id) - - def _filter(self, probs: Tensor, ids: Tensor) -> Tuple[Tensor, List[int]]: - # Best path decoding: - ids = list(zip(*groupby(ids.tolist())))[0] # Remove duplicate tokens - ids = [x for x in ids if x != self.blank_id] # Remove BLANKs - # `probs` is just pass-through since all positions are considered part of the path - return probs, ids \ No newline at end of file diff --git a/spaces/dolceschokolade/chatbot-mini/hooks/useFetch.ts b/spaces/dolceschokolade/chatbot-mini/hooks/useFetch.ts deleted file mode 100644 index 3e5c7cc86cdd6f13ff246718f861cb8d526edced..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/hooks/useFetch.ts +++ /dev/null @@ -1,88 +0,0 @@ -export type RequestModel = { - params?: object; - headers?: object; - signal?: AbortSignal; -}; - -export type RequestWithBodyModel = RequestModel & { - body?: object | FormData; -}; - -export const useFetch = () => { - const handleFetch = async ( - url: string, - request: any, - signal?: AbortSignal, - ) => { - const requestUrl = request?.params ? `${url}${request.params}` : url; - - const requestBody = request?.body - ? request.body instanceof FormData - ? { ...request, body: request.body } - : { ...request, body: JSON.stringify(request.body) } - : request; - - const headers = { - ...(request?.headers - ? request.headers - : request?.body && request.body instanceof FormData - ? {} - : { 'Content-type': 'application/json' }), - }; - - return fetch(requestUrl, { ...requestBody, headers, signal }) - .then((response) => { - if (!response.ok) throw response; - - const contentType = response.headers.get('content-type'); - const contentDisposition = response.headers.get('content-disposition'); - - const headers = response.headers; - - const result = - contentType && - (contentType?.indexOf('application/json') !== -1 || - contentType?.indexOf('text/plain') !== -1) - ? response.json() - : contentDisposition?.indexOf('attachment') !== -1 - ? response.blob() - : response; - - return result; - }) - .catch(async (err) => { - const contentType = err.headers.get('content-type'); - - const errResult = - contentType && contentType?.indexOf('application/problem+json') !== -1 - ? 
await err.json() - : err; - - throw errResult; - }); - }; - - return { - get: async (url: string, request?: RequestModel): Promise => { - return handleFetch(url, { ...request, method: 'get' }); - }, - post: async ( - url: string, - request?: RequestWithBodyModel, - ): Promise => { - return handleFetch(url, { ...request, method: 'post' }); - }, - put: async (url: string, request?: RequestWithBodyModel): Promise => { - return handleFetch(url, { ...request, method: 'put' }); - }, - patch: async ( - url: string, - request?: RequestWithBodyModel, - ): Promise => { - return handleFetch(url, { ...request, method: 'patch' }); - }, - delete: async (url: string, request?: RequestModel): Promise => { - return handleFetch(url, { ...request, method: 'delete' }); - }, - }; -}; diff --git a/spaces/dolceschokolade/chatbot-mini/pages/api/chat.ts b/spaces/dolceschokolade/chatbot-mini/pages/api/chat.ts deleted file mode 100644 index 71e3af1d28069b4e67b4ca3bf2b1c08f2f737958..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/pages/api/chat.ts +++ /dev/null @@ -1,68 +0,0 @@ -import { DEFAULT_SYSTEM_PROMPT, DEFAULT_TEMPERATURE } from '@/utils/app/const'; -import { OpenAIError, OpenAIStream } from '@/utils/server'; - -import { ChatBody, Message } from '@/types/chat'; - -// @ts-expect-error -import wasm from '../../node_modules/@dqbd/tiktoken/lite/tiktoken_bg.wasm?module'; - -import tiktokenModel from '@dqbd/tiktoken/encoders/cl100k_base.json'; -import { Tiktoken, init } from '@dqbd/tiktoken/lite/init'; - -export const config = { - runtime: 'edge', -}; - -const handler = async (req: Request): Promise => { - try { - const { model, messages, key, prompt, temperature } = (await req.json()) as ChatBody; - - await init((imports) => WebAssembly.instantiate(wasm, imports)); - const encoding = new Tiktoken( - tiktokenModel.bpe_ranks, - tiktokenModel.special_tokens, - tiktokenModel.pat_str, - ); - - let promptToSend = prompt; - if (!promptToSend) { - promptToSend = DEFAULT_SYSTEM_PROMPT; - } - - let temperatureToUse = temperature; - if (temperatureToUse == null) { - temperatureToUse = DEFAULT_TEMPERATURE; - } - - const prompt_tokens = encoding.encode(promptToSend); - - let tokenCount = prompt_tokens.length; - let messagesToSend: Message[] = []; - - for (let i = messages.length - 1; i >= 0; i--) { - const message = messages[i]; - const tokens = encoding.encode(message.content); - - if (tokenCount + tokens.length + 1000 > model.tokenLimit) { - break; - } - tokenCount += tokens.length; - messagesToSend = [message, ...messagesToSend]; - } - - encoding.free(); - - const stream = await OpenAIStream(model, promptToSend, temperatureToUse, key, messagesToSend); - - return new Response(stream); - } catch (error) { - console.error(error); - if (error instanceof OpenAIError) { - return new Response('Error', { status: 500, statusText: error.message }); - } else { - return new Response('Error', { status: 500 }); - } - } -}; - -export default handler; diff --git a/spaces/dongsiqie/bing/Dockerfile b/spaces/dongsiqie/bing/Dockerfile deleted file mode 100644 index bd453ae992ca368bc304bc464ad2e400437ce18a..0000000000000000000000000000000000000000 --- a/spaces/dongsiqie/bing/Dockerfile +++ /dev/null @@ -1,3 +0,0 @@ -FROM zklcdc/go-proxy-bingai -EXPOSE 8080 -CMD ["/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Extensions.md b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Extensions.md deleted 
file mode 100644 index dd4af96d7506be68fbe7668451f74fa5f50d85f3..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Extensions.md +++ /dev/null @@ -1,166 +0,0 @@
-This web UI supports extensions. They are simply files under
-
-```
-extensions/your_extension_name/script.py
-```
-
-which can be invoked with the
-
-```
---extensions your_extension_name
-```
-
-command-line flag.
-
-## [text-generation-webui-extensions](https://github.com/oobabooga/text-generation-webui-extensions)
-
-The link above contains a directory of user extensions for text-generation-webui.
-
-If you create an extension, you are welcome to host it in a GitHub repository and submit it to the list above.
-
-## Built-in extensions
-
-Most of these have been created by the extremely talented contributors that you can find here: [contributors](https://github.com/oobabooga/text-generation-webui/graphs/contributors?from=2022-12-18&to=&type=a).
-
-|Extension|Description|
-|---------|-----------|
-|[api](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/api)| Creates an API with two endpoints, one for streaming at `/api/v1/stream` port 5005 and another for blocking at `/api/v1/generate` port 5000. This is the main API for this web UI. |
-|[google_translate](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/google_translate)| Automatically translates inputs and outputs using Google Translate.|
-|[character_bias](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/character_bias)| Just a very simple example that biases the bot's responses in chat mode.|
-|[gallery](https://github.com/oobabooga/text-generation-webui/blob/main/extensions/gallery/)| Creates a gallery with the chat characters and their pictures. |
-|[silero_tts](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/silero_tts)| Text-to-speech extension using [Silero](https://github.com/snakers4/silero-models). When used in chat mode, it replaces the responses with an audio widget. |
-|[elevenlabs_tts](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/elevenlabs_tts)| Text-to-speech extension using the [ElevenLabs](https://beta.elevenlabs.io/) API. You need an API key to use it. |
-|[send_pictures](https://github.com/oobabooga/text-generation-webui/blob/main/extensions/send_pictures/)| Creates an image upload field that can be used to send images to the bot in chat mode. Captions are automatically generated using BLIP. |
-|[whisper_stt](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/whisper_stt)| Allows you to enter your inputs in chat mode using your microphone. |
-|[sd_api_pictures](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/sd_api_pictures)| Allows you to request pictures from the bot in chat mode, which will be generated using the AUTOMATIC1111 Stable Diffusion API. See examples [here](https://github.com/oobabooga/text-generation-webui/pull/309). |
-|[llava](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/llava) | Adds LLaVA multimodal model support. For detailed description see [README.md](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/llava/README.md) in the extension directory. |
-|[openai](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/openai)| Creates an API that mimics the OpenAI API and can be used as a drop-in replacement. |
-
-## How to write an extension
-
-`script.py` has access to all variables in the UI through the `modules.shared` module, and it may define the following functions:
-
-| Function | Description |
-|-------------|-------------|
-| `def ui()` | Creates custom gradio elements when the UI is launched. |
-| `def input_modifier(string)` | Modifies the input string before it enters the model. In chat mode, it is applied to the user message. Otherwise, it is applied to the entire prompt. |
-| `def output_modifier(string)` | Modifies the output string before it is presented in the UI. In chat mode, it is applied to the bot's reply. Otherwise, it is applied to the entire output. |
-| `def bot_prefix_modifier(string)` | Applied in chat mode to the prefix for the bot's reply (more on that below). |
-| `def custom_generate_chat_prompt(...)` | Overrides the prompt generator in chat mode. |
-| `def tokenizer_modifier(state, prompt, input_ids, input_embeds)` | Modifies the `input_ids`/`input_embeds` fed to the model. Should return `prompt`, `input_ids`, `input_embeds`. See the `llava` extension for an example. |
-
-Additionally, the script may define two special global variables:
-
-#### `params` dictionary
-
-```python
-params = {
-    "language string": "ja",
-}
-```
-
-This dictionary can be used to make the extension parameters customizable by adding entries to a `settings.json` file like this:
-
-```python
-"google_translate-language string": "fr",
-```
-
-#### `input_hijack` dictionary
-
-```python
-input_hijack = {
-    'state': False,
-    'value': ["", ""]
-}
-```
-This is only relevant in chat mode. If your extension sets `input_hijack['state']` to `True` at any moment, the next call to `modules.chat.chatbot_wrapper` will use the values inside `input_hijack['value']` as the user input for text generation. See the `send_pictures` extension above for an example.
-
-Additionally, your extension can set the value to be a callback, in the form of `def cb(text: str, visible_text: str) -> [str, str]`. See the `llava` extension above for an example.
-
-## The `bot_prefix_modifier`
-
-In chat mode, this function modifies the prefix for a new bot message. For instance, if your bot is named `Marie Antoinette`, the default prefix for a new message will be
-
-```
-Marie Antoinette:
-```
-
-Using `bot_prefix_modifier`, you can change it to:
-
-```
-Marie Antoinette: *I am very enthusiastic*
-```
-
-Marie Antoinette will become very enthusiastic in all her messages.
-
-## Using multiple extensions at the same time
-
-In order to use your extension, you must start the web UI with the `--extensions` flag followed by the name of your extension (the folder under `text-generation-webui/extensions` where `script.py` resides).
-
-You can activate more than one extension at a time by providing their names separated by spaces. The input, output and bot prefix modifiers will be applied in the specified order. For `custom_generate_chat_prompt`, only the first declaration encountered will be used and the rest will be ignored.
-
-```
-python server.py --extensions enthusiasm translate # First apply enthusiasm, then translate
-python server.py --extensions translate enthusiasm # First apply translate, then enthusiasm
-```
-
-## `custom_generate_chat_prompt` example
-
-Below is an extension that just reproduces the default prompt generator in `modules/chat.py`. You can modify it freely to come up with your own prompts in chat mode.
- -```python -def custom_generate_chat_prompt(user_input, state, **kwargs): - impersonate = kwargs['impersonate'] if 'impersonate' in kwargs else False - _continue = kwargs['_continue'] if '_continue' in kwargs else False - also_return_rows = kwargs['also_return_rows'] if 'also_return_rows' in kwargs else False - is_instruct = state['mode'] == 'instruct' - rows = [f"{state['context'].strip()}\n"] - - # Finding the maximum prompt size - chat_prompt_size = state['chat_prompt_size'] - if shared.soft_prompt: - chat_prompt_size -= shared.soft_prompt_tensor.shape[1] - max_length = min(get_max_prompt_length(state), chat_prompt_size) - - if is_instruct: - prefix1 = f"{state['name1']}\n" - prefix2 = f"{state['name2']}\n" - else: - prefix1 = f"{state['name1']}: " - prefix2 = f"{state['name2']}: " - - i = len(shared.history['internal']) - 1 - while i >= 0 and len(encode(''.join(rows))[0]) < max_length: - if _continue and i == len(shared.history['internal']) - 1: - rows.insert(1, f"{prefix2}{shared.history['internal'][i][1]}") - else: - rows.insert(1, f"{prefix2}{shared.history['internal'][i][1].strip()}{state['end_of_turn']}\n") - string = shared.history['internal'][i][0] - if string not in ['', '<|BEGIN-VISIBLE-CHAT|>']: - rows.insert(1, f"{prefix1}{string.strip()}{state['end_of_turn']}\n") - i -= 1 - - if impersonate: - rows.append(f"{prefix1.strip() if not is_instruct else prefix1}") - limit = 2 - elif _continue: - limit = 3 - else: - # Adding the user message - user_input = fix_newlines(user_input) - if len(user_input) > 0: - rows.append(f"{prefix1}{user_input}{state['end_of_turn']}\n") - - # Adding the Character prefix - rows.append(apply_extensions(f"{prefix2.strip() if not is_instruct else prefix2}", "bot_prefix")) - limit = 3 - - while len(rows) > limit and len(encode(''.join(rows))[0]) >= max_length: - rows.pop(1) - prompt = ''.join(rows) - - if also_return_rows: - return prompt, rows - else: - return prompt -``` diff --git a/spaces/dylanebert/gaussian-viewer/public/_app/immutable/chunks/singletons.6b4734db.js b/spaces/dylanebert/gaussian-viewer/public/_app/immutable/chunks/singletons.6b4734db.js deleted file mode 100644 index 55e67846380306a296be7d68cf94cdb5b694a768..0000000000000000000000000000000000000000 --- a/spaces/dylanebert/gaussian-viewer/public/_app/immutable/chunks/singletons.6b4734db.js +++ /dev/null @@ -1 +0,0 @@ -import{n as d,s as w}from"./scheduler.8b74b908.js";const u=[];function p(e,t=d){let n;const o=new Set;function r(s){if(w(e,s)&&(e=s,n)){const c=!u.length;for(const l of o)l[1](),u.push(l,e);if(c){for(let l=0;l{o.delete(l),o.size===0&&n&&(n(),n=null)}}return{set:r,update:i,subscribe:a}}var g;const m=((g=globalThis.__sveltekit_1ew5tzu)==null?void 0:g.base)??"";var k;const E=((k=globalThis.__sveltekit_1ew5tzu)==null?void 0:k.assets)??m,A="1695131978890",y="sveltekit:snapshot",I="sveltekit:scroll",x="sveltekit:index",_={tap:1,hover:2,viewport:3,eager:4,off:-1};function O(e){let t=e.baseURI;if(!t){const n=e.getElementsByTagName("base");t=n.length?n[0].href:e.URL}return t}function U(){return{x:pageXOffset,y:pageYOffset}}function f(e,t){return e.getAttribute(`data-sveltekit-${t}`)}const b={..._,"":_.hover};function v(e){let t=e.assignedSlot??e.parentNode;return(t==null?void 0:t.nodeType)===11&&(t=t.host),t}function L(e,t){for(;e&&e!==t;){if(e.nodeName.toUpperCase()==="A"&&e.hasAttribute("href"))return e;e=v(e)}}function N(e,t){let n;try{n=new URL(e instanceof SVGAElement?e.href.baseVal:e.href,document.baseURI)}catch{}const o=e instanceof 
SVGAElement?e.target.baseVal:e.target,r=!n||!!o||S(n,t)||(e.getAttribute("rel")||"").split(/\s+/).includes("external"),i=(n==null?void 0:n.origin)===location.origin&&e.hasAttribute("download");return{url:n,external:r,target:o,download:i}}function z(e){let t=null,n=null,o=null,r=null,i=null,a=null,s=e;for(;s&&s!==document.documentElement;)o===null&&(o=f(s,"preload-code")),r===null&&(r=f(s,"preload-data")),t===null&&(t=f(s,"keepfocus")),n===null&&(n=f(s,"noscroll")),i===null&&(i=f(s,"reload")),a===null&&(a=f(s,"replacestate")),s=v(s);function c(l){switch(l){case"":case"true":return!0;case"off":case"false":return!1;default:return null}}return{preload_code:b[o??"off"],preload_data:b[r??"off"],keep_focus:c(t),noscroll:c(n),reload:c(i),replace_state:c(a)}}function h(e){const t=p(e);let n=!0;function o(){n=!0,t.update(a=>a)}function r(a){n=!1,t.set(a)}function i(a){let s;return t.subscribe(c=>{(s===void 0||n&&c!==s)&&a(s=c)})}return{notify:o,set:r,subscribe:i}}function R(){const{set:e,subscribe:t}=p(!1);let n;async function o(){clearTimeout(n);try{const r=await fetch(`${E}/_app/version.json`,{headers:{pragma:"no-cache","cache-control":"no-cache"}});if(!r.ok)return!1;const a=(await r.json()).version!==A;return a&&(e(!0),clearTimeout(n)),a}catch{return!1}}return{subscribe:t,check:o}}function S(e,t){return e.origin!==location.origin||!e.pathname.startsWith(t)}function P(e){e.client}const V={url:h({}),page:h({}),navigating:p(null),updated:R()};export{x as I,_ as P,I as S,y as a,N as b,z as c,V as d,m as e,L as f,O as g,P as h,S as i,U as s}; diff --git a/spaces/eeyorestoned/Nitro-Diffusion/README.md b/spaces/eeyorestoned/Nitro-Diffusion/README.md deleted file mode 100644 index 915aec2a45316990507296315dad308d11277d5b..0000000000000000000000000000000000000000 --- a/spaces/eeyorestoned/Nitro-Diffusion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Nitro Diffusion -emoji: 🌍 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: false -duplicated_from: akhaliq/Nitro-Diffusion ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/egmaminta/python-code-summarizer/README.md b/spaces/egmaminta/python-code-summarizer/README.md deleted file mode 100644 index 86e9b2000b2530e9372ac45c4b5935f161466fc7..0000000000000000000000000000000000000000 --- a/spaces/egmaminta/python-code-summarizer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Code Summarizer From CodeTrans -emoji: 🔥 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 2.8.10 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/ehristoforu/runwayml-stable-diffusion-v1-5/README.md b/spaces/ehristoforu/runwayml-stable-diffusion-v1-5/README.md deleted file mode 100644 index 6d8123bf0bcaae0dca20ec3a9e205619cd9817da..0000000000000000000000000000000000000000 --- a/spaces/ehristoforu/runwayml-stable-diffusion-v1-5/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Runwayml Stable Diffusion V1 5 -emoji: 👀 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ekatra/mobius-v2/README.md b/spaces/ekatra/mobius-v2/README.md deleted file mode 100644 index 1570ea17f098c2a56343008287551ebd403b0b24..0000000000000000000000000000000000000000 --- 
a/spaces/ekatra/mobius-v2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: NLP ChatGPT -emoji: 🔥 -colorFrom: purple -colorTo: red -sdk: streamlit -sdk_version: 1.19.0 -app_file: main.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/elsamueldev/gpt4all/conversation.py b/spaces/elsamueldev/gpt4all/conversation.py deleted file mode 100644 index 7035aee5b96792578002efeb80ea9b2a2bbdad24..0000000000000000000000000000000000000000 --- a/spaces/elsamueldev/gpt4all/conversation.py +++ /dev/null @@ -1,21 +0,0 @@ -from pydantic import BaseModel - - -class Conversation(BaseModel): - question: str - answer: str - tokens: int - customPrompt: bool - systemPrompt: str - -def genConv(question: str, answer: str, tokens: int, customPrompt: bool, systemPrompt: str) -> Conversation: - if not customPrompt: - systemPrompt = "" - - return Conversation( - question=question, - answer=answer, - tokens=tokens, - customPrompt=customPrompt, - systemPrompt=systemPrompt - ) \ No newline at end of file diff --git a/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/test_tokenizer.py b/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/test_tokenizer.py deleted file mode 100644 index da7cd134224906fd146ab3c21dce80174bdc989f..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/test_tokenizer.py +++ /dev/null @@ -1,43 +0,0 @@ -import json -from tokenizers import Tokenizer - -tokenizer = Tokenizer.from_file("20B_tokenizer_chinese.json") -print("vocab_size with added_tokens:", tokenizer.get_vocab_size(with_added_tokens=True)) -print("vocab_size without added_tokens:", tokenizer.get_vocab_size(with_added_tokens=False)) - -def test_token(): - """ - :return: - """ - text = " \t\n中国解决方法黑白侗鸩玥,。!" 
- # text = open("../../data_sample/EBKE20150806001_epub_30198917_30198917.txt", "r", encoding="utf-8").readline()
-    encoding = tokenizer.encode(text)
-    decoding = tokenizer.decode(encoding.ids)
-    print(decoding)
-    for word in text:
-        encoding = tokenizer.encode(word)
-        for token_id in encoding.ids:
-            decode_str = tokenizer.decode([token_id])  # special characters all decode to "�", i.e. "\ufffd"
-            token = tokenizer.id_to_token(token_id)
-            print(word, token_id, decode_str, json.dumps(decode_str), token, json.dumps(token))
-
-def test_encode():
-    text = "中国解决方法黑白侗鸩,。!?;一个人去哪里疗疗<|endoftext|>一 个刹车卉"
-    encoding = tokenizer.encode(text)
-    print(tokenizer.decode(encoding.ids))
-    for token_id in encoding.ids:
-        decode_str = tokenizer.decode([token_id])  # special characters all decode to "�", i.e. "\ufffd"
-        token = tokenizer.id_to_token(token_id)
-        print(token_id, decode_str, json.dumps(decode_str), token, json.dumps(token))
-
-def test_decode():
-    encoding = [30903, 20287, 20005, 52300, 25949, 30329, 50039, 31949, 25538,
-                34698, 18764, 5225, 53915, 163, 223]
-
-    decode_str = tokenizer.decode(encoding, skip_special_tokens=False)
-    print(decode_str)
-
-# test_token()
-test_encode()
-# test_decode()
-
diff --git a/spaces/eugenkalosha/Semmap/tapselection.py b/spaces/eugenkalosha/Semmap/tapselection.py deleted file mode 100644 index a80b778bf52062ce1b3e5155a0aa7252b15148fd..0000000000000000000000000000000000000000 --- a/spaces/eugenkalosha/Semmap/tapselection.py +++ /dev/null @@ -1,22 +0,0 @@
-class TapSelection:
-    def __init__(self):
-        self.rangex = [-10, 10]
-        self.rangey = [-10, 10]
-        self.part = 0.03
-
-    def setx(self, x):
-        self.rangex = x
-
-    def sety(self, y):
-        self.rangey = y
-
-    def getBound(self, x, y):
-        xd = (self.rangex[1] - self.rangex[0]) * self.part / 2
-        yd = (self.rangey[1] - self.rangey[0]) * self.part / 2
-        xmin = x - xd
-        ymin = y - yd
-        xmax = x + xd
-        ymax = y + yd
-        bounds = (xmin, ymin, xmax, ymax)
-        return bounds
-
diff --git a/spaces/facebook/MusicGen/audiocraft/modules/__init__.py b/spaces/facebook/MusicGen/audiocraft/modules/__init__.py deleted file mode 100644 index 61418616ef18f0ecca56a007c43af4a731d98b9b..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/audiocraft/modules/__init__.py +++ /dev/null @@ -1,22 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""Modules used for building the models."""
-
-# flake8: noqa
-from .conv import (
-    NormConv1d,
-    NormConv2d,
-    NormConvTranspose1d,
-    NormConvTranspose2d,
-    StreamableConv1d,
-    StreamableConvTranspose1d,
-    pad_for_conv1d,
-    pad1d,
-    unpad1d,
-)
-from .lstm import StreamableLSTM
-from .seanet import SEANetEncoder, SEANetDecoder
-from .transformer import StreamingTransformer
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Azan Ke Baad Ki Dua Pdf Download LINK.md b/spaces/falterWliame/Face_Mask_Detection/Azan Ke Baad Ki Dua Pdf Download LINK.md deleted file mode 100644 index 7680a0ad0c340fe30416142ec5a1cbb2b47dbff1..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Azan Ke Baad Ki Dua Pdf Download LINK.md +++ /dev/null @@ -1,6 +0,0 @@
-

      azan ke baad ki dua pdf download


      Download File ★★★ https://urlca.com/2uDcTW



- -You can also download any Surah (chapter) of the Quran Kareem from this website.
      -
      -
      -

      diff --git a/spaces/falterWliame/Face_Mask_Detection/Krishna Cottage Full Movie 1080p Download Movies _HOT_.md b/spaces/falterWliame/Face_Mask_Detection/Krishna Cottage Full Movie 1080p Download Movies _HOT_.md deleted file mode 100644 index ccacdbbfeb8d5788644789d0993eedc0ce58d65c..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Krishna Cottage Full Movie 1080p Download Movies _HOT_.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Krishna Cottage full movie 1080p download movies


Download File https://urlca.com/2uDdHC



- -Download Krishna Cottage 2004 Hindi 720p WEB-DL 1.1GB ESub Full Movie Watch Online | Genre: Horror, Mystery | Star Cast: Sohail Khan, ...
      -
      -
      -

      diff --git a/spaces/fatiXbelha/sd/Download New Telugu Movies in HD Quality - Watch Online or Offline.md b/spaces/fatiXbelha/sd/Download New Telugu Movies in HD Quality - Watch Online or Offline.md deleted file mode 100644 index f2fece49c0bf88d2974ec1626d2b0611f9a2045d..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download New Telugu Movies in HD Quality - Watch Online or Offline.md +++ /dev/null @@ -1,365 +0,0 @@ -
      -

      How to Download New Telugu Movies Online

      -

      Telugu movies, also known as Tollywood movies, are one of the most popular forms of entertainment in India and abroad. They are known for their rich culture, colorful costumes, catchy songs, thrilling action, and romantic comedy. Telugu movies have a huge fan base and a loyal following that eagerly awaits the release of new movies every year.

      -

      download new telugu movies


Download File https://urllie.com/2uNCie



      -

      If you are one of those fans who love watching new Telugu movies online, you might be wondering how to download them on your device. Downloading new Telugu movies online has many benefits, such as saving time, money, data, and storage space. You can also watch them offline at your convenience, without any buffering or interruptions.

      -

      However, downloading new Telugu movies online also has some challenges and risks. You need to find a reliable source that offers high-quality downloads, fast speed, low cost, and legal content. You also need to protect your device from malware, viruses, hackers, and other cyber threats that might harm your data or privacy.

      -

      In this article, we will guide you through the best websites and apps to download new Telugu movies online safely and legally. We will also compare their features, prices, ratings, availability, download quality, download speed, download limit, ads, security, and customer support. By the end of this article, you will be able to choose the best option for your needs and enjoy your favorite new Telugu movies online.

      -

      -

      Best Websites to Download New Telugu Movies Online

      -

      There are many websites that offer new Telugu movies online for download. However, not all of them are trustworthy or legal. Some of them might contain pirated or illegal content that could get you in trouble with the law or expose you to malware or viruses. Some of them might also have poor quality downloads, slow speed, high cost, or annoying ads.

      -

To help you avoid these problems, we have selected three of the most popular websites for downloading new Telugu movies online and weighed each one for safety, legality, and reliability. These are:

      -

      Disney+ Hotstar

      -

      Disney+ Hotstar is one of the most popular and trusted websites to download new Telugu movies online. It is a streaming service that offers a wide range of content, including movies, TV shows, sports, news, and originals. You can find many new and old Telugu movies on Disney+ Hotstar, as well as other regional languages and genres.

      -

      To download new Telugu movies from Disney+ Hotstar, you need to subscribe to one of its plans: Disney+ Hotstar VIP or Disney+ Hotstar Premium. The VIP plan costs Rs. 399 per year and gives you access to the latest Indian movies, exclusive Hotstar specials, live sports, and dubbed Disney+ content. The Premium plan costs Rs. 1499 per year or Rs. 299 per month and gives you access to everything in the VIP plan plus American movies and shows from Disney, Marvel, Star Wars, Pixar, HBO, and more.

      -

      Once you have subscribed to a plan, you can download new Telugu movies from Disney+ Hotstar by following these steps:

      -
        -
      1. Open the Disney+ Hotstar app on your device or visit the website on your browser.
      2. -
      3. Search for the new Telugu movie you want to download or browse through the categories and genres.
      4. -
      5. Select the movie and tap on the download icon below the play button.
      6. -
      7. Choose the download quality from low, medium, high, or full HD.
      8. -
      9. Wait for the download to complete and enjoy watching it offline.
      10. -
      -

      Some of the pros and cons of Disney+ Hotstar are:

      -
        -
      • Pros
          -
        • It offers a large collection of new and old Telugu movies in high quality.
        • -
        • It has a user-friendly interface and easy navigation.
        • -
        • It supports multiple devices and platforms, such as Android, iOS, Windows, Mac, Smart TV, Fire TV Stick, Chromecast, etc.
        • -
        • It allows you to download up to 10 movies at a time and watch them offline for up to 7 days.
        • -
        • It provides subtitles and audio options for some movies.
        • -
        -
      • -
      • Cons
          -
        • It requires a subscription fee to download new Telugu movies online.
        • -
        • It does not have all the latest Telugu movies available for download.
        • -
        • It has some ads and pop-ups that might interrupt your viewing experience.
        • -
        • It has some issues with buffering and loading at times.
        • -
        • It does not have a customer support phone number or email address.
        • -
        -
      • -
      -

      Ibomma.com

      -

      Ibomma.com is another website that offers new Telugu movies online for download. It is a torrent site that provides pirated copies of movies from various sources. You can find many new and old Telugu movies on Ibomma.com, as well as other regional languages and genres.

      -

      To download new Telugu movies from Ibomma.com, you need to follow these steps:

      -
        -
      1. Visit the Ibomma.com website on your browser.
      2. -
      3. Search for the new Telugu movie you want to download or browse through the categories and genres.
      4. -
      5. Select the movie and click on the download link or magnet link.
      6. -
      7. Choose the download quality from low, medium, high, or full HD.
      8. -
      9. Wait for the download to complete and enjoy watching it offline.
      10. -
      -

      Some of the pros and cons of Ibomma.com are:

      -
        -
      • Pros
          -
        • It offers a huge collection of new and old Telugu movies in various qualities.
        • -
        • It has a simple interface and fast navigation.
        • -
        • It does not require any registration or subscription fee to download new Telugu movies online.
        • -
        • It supports multiple devices and platforms, such as Android, iOS, Windows, Mac, etc.
        • -
        • It allows you to download unlimited movies at a time and watch them offline for as long as you want.
        • -
        -
      • -
      • Cons
          -
        • It provides illegal and pirated content that could get you in trouble with the law or expose you to malware or viruses.
        • -
        • It does not have all the latest Telugu movies available for download.
        • -
        • It has many ads and pop-ups that might redirect you to harmful sites or spam your device.
        • -
        • It has some issues with availability and accessibility at times due to legal actions or technical glitches.
        • -
        • It does not have a customer support phone number or email address.
        • -
        -
      • -
      -

      Hungama.com

      -

      Hungama.com is a website that offers new Telugu movies online for download. It is a streaming service that offers a variety of content, including movies, TV shows, music, videos, and games. You can find many new and old Telugu movies on Hungama.com, as well as other regional languages and genres.

      -

      To download new Telugu movies from Hungama.com, you need to subscribe to one of its plans: Hungama Music or Hungama Play. The Music plan costs Rs. 99 per month or Rs. 699 per year and gives you access to unlimited music, videos, and podcasts. The Play plan costs Rs. 149 per month or Rs. 999 per year and gives you access to unlimited movies, TV shows, originals, and kids content.

      -

      Once you have subscribed to a plan, you can download new Telugu movies from Hungama.com by following these steps:

      -
        -
      1. Open the Hungama app on your device or visit the website on your browser.
      2. -
      3. Search for the new Telugu movie you want to download or browse through the categories and genres.
      4. -
      5. Select the movie and tap on the download icon below the play button.
      6. -
      7. Choose the download quality from low, medium, high, or full HD.
      8. -
      9. Wait for the download to complete and enjoy watching it offline.
      10. -
      -

      Some of the pros and cons of Hungama.com are:

      Pros:
      • It offers a decent collection of new and old Telugu movies in good quality.
      • It has a user-friendly interface and easy navigation.
      • It supports multiple devices and platforms, such as Android, iOS, Windows, Mac, Smart TV, Fire TV Stick, Chromecast, etc.
      • It allows you to download up to 5 movies at a time and watch them offline for up to 30 days.
      • It provides subtitles and audio options for some movies.

      Cons:
      • It requires a subscription fee to download new Telugu movies online.
      • It does not have all the latest Telugu movies available for download.
      • It has some ads and pop-ups that might interrupt your viewing experience.
      • It has some issues with buffering and loading at times.
      • It has a customer support phone number and email address, but they are not very responsive or helpful.

      Best Apps to Download New Telugu Movies Online


      Besides websites, there are also some apps that offer new Telugu movies online for download. These apps are designed for mobile devices and provide a convenient and easy way to access and download new Telugu movies online. However, like websites, not all apps are trustworthy or legal. Some of them might contain pirated or illegal content that could get you in trouble with the law or expose you to malware or viruses. Some of them might also have poor quality downloads, slow speed, high cost, or annoying ads.


      To help you avoid these problems, we have selected three of the best apps to download new Telugu movies online that are safe, legal, and reliable. These are:


      Aha


      Aha is an app that offers new Telugu movies online for download. It is a streaming service that offers exclusive and original content in Telugu language. You can find many new and old Telugu movies on Aha, as well as web series, shows, documentaries, and live events.


      To download new Telugu movies from Aha, you need to subscribe to its plan: Aha Premium. The Premium plan costs Rs. 149 per month or Rs. 365 per year and gives you access to unlimited ad-free content in HD quality.


      Once you have subscribed to the plan, you can download new Telugu movies from Aha by following these steps:

      1. Open the Aha app on your device or visit the website on your browser.
      2. Search for the new Telugu movie you want to download, or browse through the categories and genres.
      3. Select the movie and tap on the download icon below the play button.
      4. Choose the download quality from low, medium, high, or full HD.
      5. Wait for the download to complete and enjoy watching it offline.

      Some of the pros and cons of Aha are:

      Pros:
      • It offers a unique and exclusive collection of new and old Telugu movies in HD quality.
      • It has a user-friendly interface and easy navigation.
      • It supports multiple devices and platforms, such as Android, iOS, Windows, Mac, Smart TV, Fire TV Stick, etc.
      • It allows you to download up to 5 movies at a time and watch them offline for up to 48 hours.
      • It provides subtitles and audio options for some movies.

      Cons:
      • It requires a subscription fee to download new Telugu movies online.
      • It does not have all the latest Telugu movies available for download.
      • It has some ads and pop-ups that might interrupt your viewing experience.
      • It has some issues with buffering and loading at times.
      • It has a customer support phone number and email address, but they are not very responsive or helpful.

      JioCinema


      JioCinema is an app that offers new Telugu movies online for download. It is a streaming service that offers a variety of content, including movies, TV shows, music, videos, and originals. You can find many new and old Telugu movies on JioCinema, as well as other regional languages and genres.


      To download new Telugu movies from JioCinema, you need to be a Jio user with an active Jio SIM card or JioFiber connection. You also need to have the JioCinema app installed on your device or visit the website on your browser.


      Once you have met these requirements, you can download new Telugu movies from JioCinema by following these steps:

      1. Open the JioCinema app on your device or visit the website on your browser.
      2. Search for the new Telugu movie you want to download, or browse through the categories and genres.
      3. Select the movie and tap on the download icon below the play button.
      4. Choose the download quality from low, medium, high, or full HD.
      5. Wait for the download to complete and enjoy watching it offline.

      Some of the pros and cons of JioCinema are:

      Pros:
      • It offers a large collection of new and old Telugu movies in high quality.
      • It has a user-friendly interface and easy navigation.
      • It supports multiple devices and platforms, such as Android, iOS, Windows, Mac, Smart TV, Fire TV Stick, Chromecast, etc.
      • It allows you to download unlimited movies at a time and watch them offline for up to 30 days.
      • It provides subtitles and audio options for some movies.

      Cons:
      • It requires a Jio user account to download new Telugu movies online.
      • It does not have all the latest Telugu movies available for download.
      • It has some ads and pop-ups that might interrupt your viewing experience.
      • It has some issues with buffering and loading at times.
      • It has a customer support phone number and email address, but they are not very responsive or helpful.

      MX Player


      MX Player is an app that offers new Telugu movies online for download. It is a streaming service that offers a variety of content, including movies, TV shows, web series, music, videos, and games. You can find many new and old Telugu movies on MX Player, as well as other regional languages and genres.


      To download new Telugu movies from MX Player, you need to have the MX Player app installed on your device or visit the website on your browser. You also need to register with your phone number or email address or log in with your Facebook or Google account.


      Once you have done these steps, you can download new Telugu movies from MX Player by following these steps:

      1. Open the MX Player app on your device or visit the website on your browser.
      2. Search for the new Telugu movie you want to download, or browse through the categories and genres.
      3. Select the movie and tap on the download icon below the play button.
      4. Choose the download quality from low, medium, high, or full HD.
      5. Wait for the download to complete and enjoy watching it offline.

      Some of the pros and cons of MX Player are:

      Pros:
      • It offers a diverse and updated collection of new and old Telugu movies in various qualities.
      • It has a user-friendly interface and easy navigation.
      • It supports multiple devices and platforms, such as Android, iOS, Windows, Mac, Smart TV, Fire TV Stick, etc.
      • It allows you to download unlimited movies at a time and watch them offline for as long as you want.
      • It provides subtitles and audio options for some movies.

      Cons:
      • It provides some illegal and pirated content that could get you in trouble with the law or expose you to malware or viruses.
      • It does not have all the latest Telugu movies available for download.
      • It has many ads and pop-ups that might redirect you to harmful sites or spam your device.
      • It has some issues with availability and accessibility at times due to legal actions or technical glitches.
      • It does not have a customer support phone number or email address.

      Comparison Table of the Best Websites and Apps to Download New Telugu Movies Online


      To help you compare the features, prices, ratings, availability, download quality, download speed, download limit, ads, security, and customer support of the best websites and apps to download new Telugu movies online, we have created a table below. You can use this table to decide which option suits your needs and preferences the best.

      | Name | Features | Price | Rating | Availability | Download Quality | Download Speed | Download Limit | Ads | Security | Customer Support |
      | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
      | Disney+ Hotstar | Wide range of content (movies, TV shows, sports, news, originals); high-quality downloads; user-friendly interface; multi-device and platform support; subtitles and audio options for some movies | VIP: Rs. 399/year; Premium: Rs. 1499/year or Rs. 299/month | 4.1 (Play Store); 4.3 (App Store); 8.6/10 (IMDb) | India and some other countries; geo-restricted in some regions | Low, medium, high, or full HD | Depends on your connection and the quality you choose | Up to 10 movies at a time; watch offline for up to 7 days | Some ads and pop-ups | Safe, legal content | No phone or email; only FAQs and a feedback form |
      | Ibomma.com | Huge collection of new and old Telugu movies in various qualities; simple interface; no registration or fee; multi-device support; unlimited downloads | Free | No official rating; mixed user reviews | Worldwide; unavailable in some regions due to legal action or glitches | Low, medium, high, or full HD | Depends on your connection and the quality you choose | Unlimited downloads and offline viewing | Many ads and pop-ups that may redirect to harmful sites | Illegal, pirated content; malware and virus risk | None |
      | Hungama.com | Decent collection in good quality; user-friendly interface; multi-device support; subtitles and audio options | Music: Rs. 99/month or Rs. 699/year; Play: Rs. 149/month or Rs. 999/year | 4.0 (Play Store); 4.1 (App Store); 7.1/10 (IMDb) | India and some other countries; geo-restricted in some regions | Low, medium, high, or full HD | Depends on your connection and the quality you choose | Up to 5 movies at a time; watch offline for up to 30 days | Some ads and pop-ups | Safe, legal content | Phone 1800-209-7010; email support@hungama.com; FAQs and feedback form |
      | Aha | Unique, exclusive Telugu collection in HD; user-friendly interface; multi-device support; subtitles and audio options | Premium: Rs. 149/month or Rs. 365/year | 4.2 (Play Store); 4.5 (App Store); 8.2/10 (IMDb) | India and some other countries; geo-restricted in some regions | Low, medium, high, or full HD | Depends on your connection and the quality you choose | Up to 5 movies at a time; watch offline for up to 48 hours | Some ads and pop-ups | Safe, legal content | Phone +91-9121212121; email support@aha.video; FAQs and feedback form |
      | JioCinema | Large collection in high quality; user-friendly interface; multi-device support; subtitles and audio options | Free for Jio users with an active Jio SIM or JioFiber connection | 4.1 (Play Store); 4.2 (App Store); 7.8/10 (IMDb) | India and some other countries; geo-restricted in some regions | Low, medium, high, or full HD | Depends on your connection and the quality you choose | Unlimited downloads; watch offline for up to 30 days | Some ads and pop-ups | Safe, legal content | Phone 1800-889-9999; email care@jio.com; FAQs and feedback form |
      | MX Player | Diverse, updated collection in various qualities; user-friendly interface; multi-device support; subtitles and audio options | Free | 4.3 (Play Store); 4.4 (App Store); 8.0/10 (IMDb) | Worldwide; unavailable in some regions due to legal action or glitches | Low, medium, high, or full HD | Depends on your connection and the quality you choose | Unlimited downloads; watch offline for as long as you want | Many ads and pop-ups that may redirect to harmful sites | Some illegal, pirated content; malware and virus risk | None |

      Conclusion


      In this article, we have discussed how to download new Telugu movies online safely and legally. We have also reviewed the best websites and apps to download new Telugu movies online, such as Disney+ Hotstar, Ibomma.com, Hungama.com, Aha, JioCinema, and MX Player. We have compared their features, prices, ratings, availability, download quality, download speed, download limit, ads, security, and customer support. We have also provided a table to help you compare them easily.


      Downloading new Telugu movies online has many benefits, but it also has some challenges and risks. You need to be careful about the source you choose, the content you download, and the device you use. You also need to respect the rights of the creators and the laws of the land. Here are some tips and recommendations for downloading new Telugu movies online safely and legally:

      • Always use a trusted and legal source that offers high-quality downloads, fast speed, low cost, and legal content.
      • Always protect your device from malware, viruses, hackers, and other cyber threats by using a reliable antivirus software, a secure VPN service, and a strong password.
      • Always respect the intellectual property rights of the creators and the distributors by not sharing or distributing the downloaded content without their permission or paying the required fee.
      • Always enjoy the downloaded content responsibly and ethically by not promoting or supporting piracy or illegal activities.

      We hope this article has helped you learn how to download new Telugu movies online safely and legally. Now you can enjoy your favorite new Telugu movies online anytime and anywhere. Happy downloading!


      Frequently Asked Questions


      Here are some of the frequently asked questions about downloading new Telugu movies online:

      1. Is it legal to download new Telugu movies online?
      It depends on the source you use and the content you download. If you use a trusted and legal source that offers licensed content with the permission of the creators and distributors, then it is legal. If you use a torrent site or an app that offers pirated content without that permission, then it is illegal, and you could face legal action or penalties for violating their intellectual property rights.

      2. Is it safe to download new Telugu movies online?
      It depends on the source and the device you use. A trusted and legal source that protects your device from malware, viruses, hackers, and other cyber threats is safe. A torrent site or app that exposes your device to those threats is not safe: it could harm your data or privacy, or compromise your device's performance or security.

      3. What are the best sources to download new Telugu movies online?
      There is no definitive answer, as different sources differ in features, prices, ratings, availability, download quality, download speed, download limits, ads, security, and customer support. Based on our research and comparison, we have selected three of the best websites and three of the best apps: Disney+ Hotstar, Ibomma.com, Hungama.com, Aha, JioCinema, and MX Player. Use the table above to compare them and choose the option that suits your needs and preferences.

      4. How can I download new Telugu movies online faster?
      Download speed depends on your internet connection, the download quality you choose, the source you use, and your device. To speed things up, try the following:
      • Use a fast and stable internet connection, preferably a wired or Wi-Fi connection rather than mobile data.
      • Choose a lower download quality if you don't mind compromising on resolution or clarity.
      • Use a trusted and legal source that offers high-speed downloads and does not have bandwidth or server issues.
      • Use a device with enough storage space, memory, battery, and processing power to handle the download smoothly.
      • Close any other apps or programs that might be using your internet bandwidth or device resources while downloading.

      5. How can I watch new Telugu movies online without downloading?
      You can stream them instead. Streaming means watching a movie as it is delivered over the internet, without saving it on your device. Any of the websites or apps mentioned above that offer streaming will work. Keep in mind that streaming consumes more data, requires a constant internet connection, and depends on the buffering and loading speed of the source.

      6. What are some of the latest Telugu movies available for download online?
      Some of the latest Telugu movies available for download online are:
      • Vakeel Saab: a legal drama starring Pawan Kalyan as a lawyer who fights for three women who are falsely accused of a crime.
      • Jathi Ratnalu: a comedy thriller starring Naveen Polishetty as one of three friends who get involved in a political conspiracy.
      • Rang De: a romantic comedy starring Nithiin and Keerthy Suresh as childhood friends who get married under pressure.
      • Uppena: a romantic drama starring Panja Vaisshnav Tej and Krithi Shetty as star-crossed lovers who face opposition from their families.
      • Krack: an action thriller starring Ravi Teja as a tough cop who takes on a notorious criminal gang.

        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Fallout Shelter Cheats Unlimited Lunchboxes Apk Download.md b/spaces/fatiXbelha/sd/Fallout Shelter Cheats Unlimited Lunchboxes Apk Download.md deleted file mode 100644 index caf3e81f7cb9e5c156e2326e814edecd418c0a9e..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Fallout Shelter Cheats Unlimited Lunchboxes Apk Download.md +++ /dev/null @@ -1,95 +0,0 @@ - -

        Fallout Shelter Unlimited Lunchboxes Apk: How to Get Unlimited Resources in the Post-Apocalyptic World


        Fallout Shelter is a popular simulation game developed by Bethesda Softworks, based on the Fallout franchise. In this game, you are in charge of building and managing an underground vault that shelters people from the nuclear wasteland. You have to provide them with food, water, power, and security, as well as keep them happy and productive. You can also send them out to explore the wasteland, find new items, and fight enemies.


        However, building and maintaining a vault is not easy. You need a lot of resources, such as caps, food, water, energy, and lunchboxes. Lunchboxes are special items that contain random rewards, such as weapons, outfits, dwellers, or resources. They can be obtained by completing objectives or by purchasing them with real money.


        Download: https://urllie.com/2uNBDQ

        But what if you want to get unlimited lunchboxes without spending any money? Well, there is a way to do that. You can use a modded version of the game called Fallout Shelter Unlimited Lunchboxes Apk. This is a hacked version of the game that gives you unlimited caps, food, water, energy, and lunchboxes. You can also enjoy other features that make the game more fun and easier to play.


        Features of Fallout Shelter Unlimited Lunchboxes Apk


        Here are some of the features that you can enjoy by using Fallout Shelter Unlimited Lunchboxes Apk:

        • Unlimited Caps: Caps are the currency of the game. You need them to build rooms, upgrade facilities, buy items, and more. With this mod, you will never run out of caps.
        • Unlimited Food: Food is essential for keeping your dwellers healthy and happy. Without food, they will starve and become unhappy. With this mod, you will always have enough food for your vault.
        • Unlimited Water: Water is also vital for your dwellers' well-being. Without water, they will become dehydrated and suffer from radiation poisoning. With this mod, you will always have clean water for your vault.
        • Unlimited Energy: Energy is needed to power your rooms and facilities. Without energy, your vault will go dark and your dwellers will become unhappy. With this mod, you will always have enough energy for your vault.
        • Unlimited Lunchboxes: Lunchboxes are special items that contain random rewards. They can give you weapons, outfits, dwellers, resources, or even legendary items. With this mod, you will always have plenty of lunchboxes to open.
        • Infinite Items: You can use any item in your inventory as many times as you like without hitting a limitation error. This means you can equip your dwellers with the best weapons and outfits, use stimpaks and radaways as much as you want, and craft anything you need.
        • No Ads: You won't see any annoying ads in this modded version of the game, so you can play without interruptions or distractions.

        How to Download and Install Fallout Shelter Unlimited Lunchboxes Apk


        If you want to download and install Fallout Shelter Unlimited Lunchboxes Apk on your Android device, here are the steps you need to follow:

        1. First, uninstall the original version of Fallout Shelter from your device if you have it installed.
        2. Then, download the apk file from a trusted source. You can use this link to download it directly from our website.
        3. After downloading the apk file, enable unknown sources on your device: go to Settings > Security > Unknown Sources and toggle it on.
        4. Next, locate the apk file on your device using a file manager app, tap on it, and follow the installation instructions (a command-line alternative is sketched below).
        5. Finally, launch the game and enjoy the unlimited resources and features.
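
        If tapping the file does nothing, you can also sideload the APK from a computer using adb (the Android Debug Bridge). This is only a rough sketch: it assumes adb is already installed, USB debugging is enabled on your device, and the file name below is a placeholder for whatever your download is actually called.

```sh
# Confirm the device is visible over USB before installing.
adb devices

# Sideload the downloaded file (the file name here is hypothetical).
adb install fallout-shelter-mod.apk

# If an older build is already installed, -r reinstalls over it
# while keeping the existing app data.
adb install -r fallout-shelter-mod.apk
```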

        Note: You may need to allow some permissions for the game to run properly. Also, make sure you have enough storage space on your device before installing the mod.


        Tips and Tricks for Playing Fallout Shelter with Unlimited Lunchboxes Apk


        Now that you have installed Fallout Shelter Unlimited Lunchboxes Apk, you can play the game with more freedom and fun. Here are some tips and tricks to help you get the most out of the mod:

        • Build a balanced vault: Even with unlimited resources, you still need a vault that can support your dwellers' needs. Make sure you have enough rooms for food, water, power, living quarters, storage, and other facilities, and upgrade them as much as possible to increase their efficiency and capacity.
        • Assign dwellers to the right rooms: Each dweller has skills and stats that affect their performance in different rooms. Check their SPECIAL attributes by tapping on them, and assign each dweller to the room that matches their highest stat: high Strength to power rooms, high Perception to water rooms, high Agility to food rooms, and so on.
        • Equip your dwellers with the best gear: With unlimited lunchboxes, you can open plenty of weapons, outfits, and dwellers. Give each dweller the gear that suits their role: high-damage weapons and high-endurance outfits for explorers, high-health weapons and high-luck outfits for guards, high-stat outfits and low-weight weapons for workers, and so on.
        • Send your dwellers to explore the wasteland: Exploring is a great way to find more items, caps, and experience. You can send as many dwellers as you want without worrying about their health or radiation; just equip them with the best gear plus stimpaks and radaways, and recall them at any time without penalty.
        • Craft new items: Use the workshop rooms to craft new weapons and outfits from junk items, which you can find by exploring the wasteland or opening lunchboxes. You can also scrap unwanted items for more junk. Crafting gets you better gear for your dwellers, or items to sell for more caps.

        Conclusion


        Fallout Shelter is a fun and addictive game that lets you create and manage your own vault in the post-apocalyptic world. However, if you want to enjoy the game without any limitations or restrictions, you can use Fallout Shelter Unlimited Lunchboxes Apk. This is a modded version of the game that gives you unlimited resources and features that make the game more enjoyable and easier to play. You can download and install this mod from our website and follow the instructions above to get started.


        If you like this article, please share it with your friends and leave a comment below. Also, if you have any questions or suggestions about Fallout Shelter Unlimited Lunchboxes Apk, feel free to ask us in the comment section. We will try our best to answer them as soon as possible. Thank you for reading!


        FAQs


        Here are some of the frequently asked questions and answers about Fallout Shelter Unlimited Lunchboxes Apk:


        Is Fallout Shelter Unlimited Lunchboxes Apk safe to use?


        Yes, Fallout Shelter Unlimited Lunchboxes Apk is safe to use. It is tested by our team and verified by many users. It does not contain any viruses or malware that can harm your device or data. However, we recommend that you download it from our website only, as other sources may not be reliable or trustworthy.


        Will Fallout Shelter Unlimited Lunchboxes Apk work on my device?


        Fallout Shelter Unlimited Lunchboxes Apk should work on most Android devices that support the original version of Fallout Shelter. However, some devices may not be compatible or may experience some issues due to different specifications or settings. If you encounter any problems while using the mod, please let us know in the comment section and we will try to help you fix them.


        Can I play Fallout Shelter Unlimited Lunchboxes Apk online?


        No, Fallout Shelter Unlimited Lunchboxes Apk is an offline mod that does not require an internet connection to play. You can play it anytime and anywhere without worrying about data usage or connection issues. However, this also means that you will not be able to access some of the online features of the game, such as cloud saving, leaderboards, achievements, or social media integration. You will also not be able to play with other players or sync your progress across different devices.


        Will Fallout Shelter Unlimited Lunchboxes Apk affect my original game progress?


        No, Fallout Shelter Unlimited Lunchboxes Apk will not affect your original game progress. The modded version of the game is installed separately from the original version and uses a different data folder. You can switch between the two versions without losing any data or progress. However, you should not try to transfer your save files from one version to another, as this may cause errors or corruption.


        How can I update Fallout Shelter Unlimited Lunchboxes Apk?


        Fallout Shelter Unlimited Lunchboxes Apk is updated regularly to match the latest version of Fallout Shelter and to fix any bugs or issues. You can check our website for the latest updates and download them from there. You can also follow us on social media or subscribe to our newsletter to get notified about new updates. To install an update, you just need to download the new apk file and install it over the old one. You don't need to uninstall or reinstall anything.


        How can I contact you for more information or feedback?


        If you have any questions, suggestions, or feedback about Fallout Shelter Unlimited Lunchboxes Apk, you can contact us through our website or email us at support@falloutsheltermod.com. You can also leave a comment below and we will try to reply as soon as possible. We appreciate your support and feedback and we hope you enjoy our mod.

        \ No newline at end of file diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/tools/data/__init__.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/tools/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Desktop Dungeons 5 APK MOD DATA Explore Fight and Loot in this Fun and Addictive Game on Apkdatamod.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Desktop Dungeons 5 APK MOD DATA Explore Fight and Loot in this Fun and Addictive Game on Apkdatamod.md deleted file mode 100644 index e4bb85a42001d15b1f1eafcd4547495e031c65c4..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Desktop Dungeons 5 APK MOD DATA Explore Fight and Loot in this Fun and Addictive Game on Apkdatamod.md +++ /dev/null @@ -1,131 +0,0 @@ - -

        Desktop Dungeons 5 APK: A Quick-Play Puzzle-Roguelike Game for Android Tablets


        If you are looking for a fun and challenging game that combines strategy, puzzle, and roguelike elements, then you should check out Desktop Dungeons 5 APK. This is a game that will test your skills and creativity as you explore randomly generated dungeons, collect spells, items, and loot, and fight your way to the dungeon boss. In this article, we will tell you everything you need to know about Desktop Dungeons 5 APK, including what it is, how to download and install it from apkdatamod.com, why you should play it, how to play it, and some tips and tricks to help you succeed.


        What is Desktop Dungeons 5 APK?


        A brief introduction to the game and its features


        Desktop Dungeons 5 APK is an Android version of the award-winning PC game Desktop Dungeons. It is a quick-play puzzle-roguelike game that packs all the challenge and reward of a dungeon crawling roguelike game into bite-sized chunks of puzzle goodness. You can play as one of 20 different classes and 7 different races, each with their own unique abilities and playstyles. You can also upgrade your kingdom to unlock items and preparations that will help you start your dungeon runs with. You can also discover new terrain, spells, items, gods, enemies, quests, puzzles, and more as you play. The game has a lot of content and replay value, as each dungeon run is different and requires a different strategy.


        Download File: https://gohhs.com/2uPu6L

        How to download and install Desktop Dungeons 5 APK from apkdatamod.com


        If you want to play Desktop Dungeons 5 APK on your Android tablet, you will need to download and install it from apkdatamod.com. This is a website that provides free downloads of modded APK files for various games and apps. Here are the steps to follow:

        1. Go to apkdatamod.com on your tablet's browser.
        2. Search for "Desktop Dungeons 5 APK" in the search bar.
        3. Select the download link that matches your tablet's specifications.
        4. Wait for the download to finish (a quick integrity check is sketched after this list).
        5. Go to your tablet's settings and enable "Unknown Sources" under the security options.
        6. Locate the downloaded APK file in your tablet's file manager.
        7. Tap on it and follow the installation instructions.
        8. Launch the game and enjoy!
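
        Before you tap the file, it is worth sanity-checking what you downloaded. A minimal sketch, assuming a Linux or macOS machine; the file name is hypothetical, and apkdatamod.com does not publish official checksums, so you would have to obtain a reference value yourself:

```sh
# Print the SHA-256 fingerprint of the download
# (use `shasum -a 256` instead on macOS).
sha256sum desktop-dungeons-5.apk

# An APK is a ZIP archive, so listing its contents is a quick
# way to confirm the file is not truncated or mislabeled.
unzip -l desktop-dungeons-5.apk
```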

        Why play Desktop Dungeons 5 APK?


        There are many reasons why you should play Desktop Dungeons 5 APK on your Android tablet. Here are some of them:

        • It is a fun and addictive game that will keep you entertained for hours.
        • It is a challenging game that will test your skills and creativity.
        • It is a rewarding game that will make you feel accomplished when you beat a dungeon.
        • It is a cross-platform game that lets you play the same kingdom on any device.
        • It has amazing graphics and sound effects that enhance the gameplay experience.
        • It has a great soundtrack composed by Danny Baranowsky (Super Meat Boy, The Binding of Isaac) and Grant Kirkhope (Banjo-Kazooie, Mario + Rabbids Kingdom Battle).

        How to play Desktop Dungeons 5 APK


        The basics of the game mechanics and controls


        Desktop Dungeons 5 APK is a game that combines puzzle and roguelike elements. The goal of the game is to explore a randomly generated dungeon, find the boss, and defeat it. You can move your character by tapping on the tiles on the screen. You can also interact with objects, enemies, items, and spells by tapping on them. You can see your character's stats, inventory, and abilities on the bottom of the screen. You can also access the menu, map, and options on the top of the screen.


        The game has a unique mechanic called "regeneration". Every time you explore a new tile, you regenerate some health and mana. However, every time you fight an enemy, you lose some health and mana. This means that you have to balance exploration and combat, and use your resources wisely. You also have to consider the level of the enemies, which is indicated by their color. Enemies that are lower level than you are easy to kill, but give less experience. Enemies that are higher level than you are hard to kill, but give more experience. You can also use spells, items, and god boons to aid you in your dungeon runs.


        The different classes, races, items, spells, and gods in the game


        Desktop Dungeons 5 APK has a lot of variety and customization options for your character. You can choose from 20 different classes and 7 different races, each with their own strengths and weaknesses. For example, the Fighter class has high health and damage, but low mana and magic resistance. The Elf race has high mana and magic resistance, but low health and damage. You can also mix and match different classes and races to create your own unique combination.


        You can also find and use various items and spells in the game. Items can be consumable or permanent, and can provide different effects such as healing, damage, protection, or conversion. Spells can be offensive or defensive, and can cost mana or health to cast. You can also worship different gods in the game, who will grant you boons or curses depending on your actions. For example, Taurog is a god of war who will give you bonus damage and armor, but will punish you for using magic.


        The various dungeons, quests, puzzles, and challenges in the game


        Desktop Dungeons 5 APK has a lot of content and replay value for you to enjoy. The game has over 100 different dungeons to explore, each with their own layout, enemies, items, spells, gods, quests, puzzles, and challenges. You can also unlock new dungeons by completing certain quests or achievements. Some dungeons are easy and short, while others are hard and long. Some dungeons have special rules or modifiers that make them more interesting or difficult.


        You can also complete various quests in the game, which will reward you with gold, items, upgrades, or unlocks. Quests can be given by NPCs in your kingdom or by gods in the dungeons. Some quests are simple and straightforward, while others are complex and tricky. You can also solve various puzzles in the game, which will test your logic and creativity. Puzzles can be found in special dungeons or as part of quests or challenges. Some puzzles are easy and fun, while others are hard and frustrating.


        You can also take on various challenges in the game, which will test your skills and strategy. Challenges can be found in special dungeons or as part of quests or achievements. Some challenges are optional and fun, while others are mandatory and hard. Some challenges are based on time, score, or difficulty, while others are based on specific criteria or objectives.


        Tips and tricks for Desktop Dungeons 5 APK


        How to optimize your strategy and tactics for each dungeon run


        Desktop Dungeons 5 APK is a game that requires a lot of strategy and tactics to succeed. Here are some tips and tricks to help you optimize your dungeon runs:

      • Plan ahead. Before you start a dungeon run, choose your class, race, items, and preparations carefully. Think about what kinds of enemies, terrain, spells, items, and gods you will encounter, and how you can use your abilities and resources to overcome them.
      • Explore wisely. Don't uncover every tile right away. Save some unexplored tiles for later, when you need to regenerate health and mana, and avoid exploring tiles adjacent to enemies or walls, as they will not give you any regeneration.
      • Fight smartly. Don't fight every enemy you see. Pick your battles carefully, focusing on enemies that are higher level than you, as they give more experience and loot. Use spells, items, and god boons to weaken or kill enemies before engaging them in melee combat.
      • Manage your resources. Don't waste your health, mana, items, or spells. Use them only when necessary, and try to conserve them for the boss fight. Don't be afraid to convert items or spells you don't need into piety or gold.
      • Learn from your mistakes. Don't get discouraged if you fail a run. Analyze what went wrong and what you can do better next time. Experiment with different classes, races, items, spells, and gods until you find the ones that suit your playstyle.

        How to use the kingdom upgrades and preparations to your advantage


        Desktop Dungeons 5 APK has a feature called the kingdom, which is your base of operations. You can upgrade your kingdom by spending gold that you earn from completing dungeons or quests. Upgrading your kingdom will unlock new classes, races, items, spells, gods, quests, puzzles, challenges, and more. You can also use preparations to start your dungeon runs with some advantages. Preparations can include items, spells, gold, piety, or other bonuses that will help you in your dungeon runs. Here are some tips and tricks to help you use the kingdom upgrades and preparations to your advantage:

      • Upgrade wisely. Don't spend all your gold on kingdom upgrades right away. Save some for preparations or items you might need in your dungeon runs, and prioritize the buildings that benefit you most, such as the guilds that unlock new classes or races.
      • Prepare carefully. Don't use all your preparations on every run. Save them for dungeons that are harder or more important, and choose preparations that match your class, race, items, spells, and gods. For example, a Wizard might want to prepare a mana potion or a fireball spell.
      • Use the locker. The locker is a building that stores an item or spell you can take into any dungeon run. You can find and unlock new items or spells by completing dungeons or quests, and upgrade the locker to store more of them. It is very useful for saving rare or powerful items for a difficult run.

        How to unlock new content and achievements in the game


        Desktop Dungeons 5 APK has a lot of content and achievements for you to unlock and enjoy. You can unlock new content by completing dungeons, quests, puzzles, challenges, or achievements. You can also unlock new achievements by performing certain actions or meeting certain criteria in the game. Here are some tips and tricks to help you unlock new content and achievements in the game:

      • Explore everything. Don't limit yourself to one class, race, item, spell, or god. Try different combinations and see what works best for you; you might discover new strategies or secrets that help in your dungeon runs.
      • Challenge yourself. Don't settle for easy or normal dungeons. Try the harder or special dungeons that push your skills and creativity to the limit; you might find new rewards or surprises that make your runs more fun and satisfying.
      • Achieve everything. Don't ignore the achievements. They are not only a way to show off your progress and skill, but also a way to unlock new content and rewards. Some are easy and simple, others hard and complex, and some are hidden and must be discovered on your own.

        Conclusion


        A summary of the main points and a call to action for the readers


        Desktop Dungeons 5 APK is a quick-play puzzle-roguelike game that will keep you hooked for hours. It is a game that combines strategy, puzzle, and roguelike elements in a fun and challenging way. You can play as one of 20 different classes and 7 different races, each with their own unique abilities and playstyles. You can also upgrade your kingdom to unlock items and preparations that will help you start your dungeon runs with. You can also discover new terrain, spells, items, gods, enemies, quests, puzzles, and challenges as you play. The game has a lot of content and replay value, as each dungeon run is different and requires a different strategy.


        If you want to play Desktop Dungeons 5 APK on your Android tablet, you can download and install it from apkdatamod.com. This is a website that provides free downloads of modded APK files for various games and apps. You can also follow our tips and tricks to help you optimize your dungeon runs, use the kingdom upgrades and preparations to your advantage, and unlock new content and achievements in the game.


        So what are you waiting for? Download Desktop Dungeons 5 APK today and enjoy a quick-play puzzle-roguelike game that will test your skills and creativity!


        Frequently Asked Questions


        What are the system requirements for Desktop Dungeons 5 APK?


        Desktop Dungeons 5 APK requires an Android tablet with at least 1 GB of RAM and Android 4.0 or higher.


        Is Desktop Dungeons 5 APK free?


        Yes, Desktop Dungeons 5 APK is free to download and play from apkdatamod.com.


        Is Desktop Dungeons 5 APK safe?


        Yes, Desktop Dungeons 5 APK is safe to download and install from apkdatamod.com. However, you should always be careful when downloading files from unknown sources on the internet.


        Is Desktop Dungeons 5 APK compatible with other devices?


        Desktop Dungeons 5 APK is compatible with Android tablets only. It is not compatible with Android phones or other devices.


        Is Desktop Dungeons 5 APK updated regularly?


        Yes, Desktop Dungeons 5 APK is updated regularly by the developers of apkdatamod.com. You can check their website for the latest version of the game.

        \ No newline at end of file diff --git a/spaces/fffiloni/Pix2Pix-Video/README.md b/spaces/fffiloni/Pix2Pix-Video/README.md deleted file mode 100644 index bfd6c30644c1e72fc78b5640f08870845e197015..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Pix2Pix-Video/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Pix2Pix Video -emoji: 🎨🎞️ -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ee-first/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/ee-first/README.md deleted file mode 100644 index cbd2478beffb7e4e612f99e8bff383255c21f253..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ee-first/README.md +++ /dev/null @@ -1,80 +0,0 @@ -# EE First - -[![NPM version][npm-image]][npm-url] -[![Build status][travis-image]][travis-url] -[![Test coverage][coveralls-image]][coveralls-url] -[![License][license-image]][license-url] -[![Downloads][downloads-image]][downloads-url] -[![Gittip][gittip-image]][gittip-url] - -Get the first event in a set of event emitters and event pairs, -then clean up after itself. - -## Install - -```sh -$ npm install ee-first -``` - -## API - -```js -var first = require('ee-first') -``` - -### first(arr, listener) - -Invoke `listener` on the first event from the list specified in `arr`. `arr` is -an array of arrays, with each array in the format `[ee, ...event]`. `listener` -will be called only once, the first time any of the given events are emitted. If -`error` is one of the listened events, then if that fires first, the `listener` -will be given the `err` argument. - -The `listener` is invoked as `listener(err, ee, event, args)`, where `err` is the -first argument emitted from an `error` event, if applicable; `ee` is the event -emitter that fired; `event` is the string event name that fired; and `args` is an -array of the arguments that were emitted on the event. - -```js -var ee1 = new EventEmitter() -var ee2 = new EventEmitter() - -first([ - [ee1, 'close', 'end', 'error'], - [ee2, 'error'] -], function (err, ee, event, args) { - // listener invoked -}) -``` - -#### .cancel() - -The group of listeners can be cancelled before being invoked and have all the event -listeners removed from the underlying event emitters. 
- -```js -var thunk = first([ - [ee1, 'close', 'end', 'error'], - [ee2, 'error'] -], function (err, ee, event, args) { - // listener invoked -}) - -// cancel and clean up -thunk.cancel() -``` - -[npm-image]: https://img.shields.io/npm/v/ee-first.svg?style=flat-square -[npm-url]: https://npmjs.org/package/ee-first -[github-tag]: http://img.shields.io/github/tag/jonathanong/ee-first.svg?style=flat-square -[github-url]: https://github.com/jonathanong/ee-first/tags -[travis-image]: https://img.shields.io/travis/jonathanong/ee-first.svg?style=flat-square -[travis-url]: https://travis-ci.org/jonathanong/ee-first -[coveralls-image]: https://img.shields.io/coveralls/jonathanong/ee-first.svg?style=flat-square -[coveralls-url]: https://coveralls.io/r/jonathanong/ee-first?branch=master -[license-image]: http://img.shields.io/npm/l/ee-first.svg?style=flat-square -[license-url]: LICENSE.md -[downloads-image]: http://img.shields.io/npm/dm/ee-first.svg?style=flat-square -[downloads-url]: https://npmjs.org/package/ee-first -[gittip-image]: https://img.shields.io/gittip/jonathanong.svg?style=flat-square -[gittip-url]: https://www.gittip.com/jonathanong/ diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/iconv-lite/lib/bom-handling.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/iconv-lite/lib/bom-handling.js deleted file mode 100644 index 1050872385c7f96b4f54d50ebc873b1031e2528c..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/iconv-lite/lib/bom-handling.js +++ /dev/null @@ -1,52 +0,0 @@ -"use strict"; - -var BOMChar = '\uFEFF'; - -exports.PrependBOM = PrependBOMWrapper -function PrependBOMWrapper(encoder, options) { - this.encoder = encoder; - this.addBOM = true; -} - -PrependBOMWrapper.prototype.write = function(str) { - if (this.addBOM) { - str = BOMChar + str; - this.addBOM = false; - } - - return this.encoder.write(str); -} - -PrependBOMWrapper.prototype.end = function() { - return this.encoder.end(); -} - - -//------------------------------------------------------------------------------ - -exports.StripBOM = StripBOMWrapper; -function StripBOMWrapper(decoder, options) { - this.decoder = decoder; - this.pass = false; - this.options = options || {}; -} - -StripBOMWrapper.prototype.write = function(buf) { - var res = this.decoder.write(buf); - if (this.pass || !res) - return res; - - if (res[0] === BOMChar) { - res = res.slice(1); - if (typeof this.options.stripBOM === 'function') - this.options.stripBOM(); - } - - this.pass = true; - return res; -} - -StripBOMWrapper.prototype.end = function() { - return this.decoder.end(); -} - diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/media-typer/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/media-typer/index.js deleted file mode 100644 index 07f7295ee780fbfb881b953e92f79e49fe00f08c..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/media-typer/index.js +++ /dev/null @@ -1,270 +0,0 @@ -/*! - * media-typer - * Copyright(c) 2014 Douglas Christopher Wilson - * MIT Licensed - */ - -/** - * RegExp to match *( ";" parameter ) in RFC 2616 sec 3.7 - * - * parameter = token "=" ( token | quoted-string ) - * token = 1* - * separators = "(" | ")" | "<" | ">" | "@" - * | "," | ";" | ":" | "\" | <"> - * | "/" | "[" | "]" | "?" 
| "=" - * | "{" | "}" | SP | HT - * quoted-string = ( <"> *(qdtext | quoted-pair ) <"> ) - * qdtext = > - * quoted-pair = "\" CHAR - * CHAR = - * TEXT = - * LWS = [CRLF] 1*( SP | HT ) - * CRLF = CR LF - * CR = - * LF = - * SP = - * SHT = - * CTL = - * OCTET = - */ -var paramRegExp = /; *([!#$%&'\*\+\-\.0-9A-Z\^_`a-z\|~]+) *= *("(?:[ !\u0023-\u005b\u005d-\u007e\u0080-\u00ff]|\\[\u0020-\u007e])*"|[!#$%&'\*\+\-\.0-9A-Z\^_`a-z\|~]+) */g; -var textRegExp = /^[\u0020-\u007e\u0080-\u00ff]+$/ -var tokenRegExp = /^[!#$%&'\*\+\-\.0-9A-Z\^_`a-z\|~]+$/ - -/** - * RegExp to match quoted-pair in RFC 2616 - * - * quoted-pair = "\" CHAR - * CHAR = - */ -var qescRegExp = /\\([\u0000-\u007f])/g; - -/** - * RegExp to match chars that must be quoted-pair in RFC 2616 - */ -var quoteRegExp = /([\\"])/g; - -/** - * RegExp to match type in RFC 6838 - * - * type-name = restricted-name - * subtype-name = restricted-name - * restricted-name = restricted-name-first *126restricted-name-chars - * restricted-name-first = ALPHA / DIGIT - * restricted-name-chars = ALPHA / DIGIT / "!" / "#" / - * "$" / "&" / "-" / "^" / "_" - * restricted-name-chars =/ "." ; Characters before first dot always - * ; specify a facet name - * restricted-name-chars =/ "+" ; Characters after last plus always - * ; specify a structured syntax suffix - * ALPHA = %x41-5A / %x61-7A ; A-Z / a-z - * DIGIT = %x30-39 ; 0-9 - */ -var subtypeNameRegExp = /^[A-Za-z0-9][A-Za-z0-9!#$&^_.-]{0,126}$/ -var typeNameRegExp = /^[A-Za-z0-9][A-Za-z0-9!#$&^_-]{0,126}$/ -var typeRegExp = /^ *([A-Za-z0-9][A-Za-z0-9!#$&^_-]{0,126})\/([A-Za-z0-9][A-Za-z0-9!#$&^_.+-]{0,126}) *$/; - -/** - * Module exports. - */ - -exports.format = format -exports.parse = parse - -/** - * Format object to media type. - * - * @param {object} obj - * @return {string} - * @api public - */ - -function format(obj) { - if (!obj || typeof obj !== 'object') { - throw new TypeError('argument obj is required') - } - - var parameters = obj.parameters - var subtype = obj.subtype - var suffix = obj.suffix - var type = obj.type - - if (!type || !typeNameRegExp.test(type)) { - throw new TypeError('invalid type') - } - - if (!subtype || !subtypeNameRegExp.test(subtype)) { - throw new TypeError('invalid subtype') - } - - // format as type/subtype - var string = type + '/' + subtype - - // append +suffix - if (suffix) { - if (!typeNameRegExp.test(suffix)) { - throw new TypeError('invalid suffix') - } - - string += '+' + suffix - } - - // append parameters - if (parameters && typeof parameters === 'object') { - var param - var params = Object.keys(parameters).sort() - - for (var i = 0; i < params.length; i++) { - param = params[i] - - if (!tokenRegExp.test(param)) { - throw new TypeError('invalid parameter name') - } - - string += '; ' + param + '=' + qstring(parameters[param]) - } - } - - return string -} - -/** - * Parse media type to object. - * - * @param {string|object} string - * @return {Object} - * @api public - */ - -function parse(string) { - if (!string) { - throw new TypeError('argument string is required') - } - - // support req/res-like objects as argument - if (typeof string === 'object') { - string = getcontenttype(string) - } - - if (typeof string !== 'string') { - throw new TypeError('argument string is required to be a string') - } - - var index = string.indexOf(';') - var type = index !== -1 - ? 
string.substr(0, index) - : string - - var key - var match - var obj = splitType(type) - var params = {} - var value - - paramRegExp.lastIndex = index - - while (match = paramRegExp.exec(string)) { - if (match.index !== index) { - throw new TypeError('invalid parameter format') - } - - index += match[0].length - key = match[1].toLowerCase() - value = match[2] - - if (value[0] === '"') { - // remove quotes and escapes - value = value - .substr(1, value.length - 2) - .replace(qescRegExp, '$1') - } - - params[key] = value - } - - if (index !== -1 && index !== string.length) { - throw new TypeError('invalid parameter format') - } - - obj.parameters = params - - return obj -} - -/** - * Get content-type from req/res objects. - * - * @param {object} - * @return {Object} - * @api private - */ - -function getcontenttype(obj) { - if (typeof obj.getHeader === 'function') { - // res-like - return obj.getHeader('content-type') - } - - if (typeof obj.headers === 'object') { - // req-like - return obj.headers && obj.headers['content-type'] - } -} - -/** - * Quote a string if necessary. - * - * @param {string} val - * @return {string} - * @api private - */ - -function qstring(val) { - var str = String(val) - - // no need to quote tokens - if (tokenRegExp.test(str)) { - return str - } - - if (str.length > 0 && !textRegExp.test(str)) { - throw new TypeError('invalid parameter value') - } - - return '"' + str.replace(quoteRegExp, '\\$1') + '"' -} - -/** - * Simply "type/subtype+siffx" into parts. - * - * @param {string} string - * @return {Object} - * @api private - */ - -function splitType(string) { - var match = typeRegExp.exec(string.toLowerCase()) - - if (!match) { - throw new TypeError('invalid media type') - } - - var type = match[1] - var subtype = match[2] - var suffix - - // suffix after last + - var index = subtype.lastIndexOf('+') - if (index !== -1) { - suffix = subtype.substr(index + 1) - subtype = subtype.substr(0, index) - } - - var obj = { - type: type, - subtype: subtype, - suffix: suffix - } - - return obj -} diff --git a/spaces/flax-community/koclip/text2image.py b/spaces/flax-community/koclip/text2image.py deleted file mode 100644 index 83dad513d2bdc94409076c0447ace624101b6a6d..0000000000000000000000000000000000000000 --- a/spaces/flax-community/koclip/text2image.py +++ /dev/null @@ -1,44 +0,0 @@ -import os - -import matplotlib.pyplot as plt -import numpy as np -import streamlit as st - -from utils import load_index, load_model - - -def app(model_name): - images_directory = "images/val2017" - features_directory = f"features/val2017/{model_name}.tsv" - - files, index = load_index(features_directory) - model, processor = load_model(f"koclip/{model_name}") - - st.title("Text to Image Search Engine") - st.markdown( - """ - This demo explores KoCLIP's use case as a Korean image search engine. We pre-computed embeddings of 5000 images from [MSCOCO](https://cocodataset.org/#home) 2017 validation using KoCLIP's ViT backbone. Then, given a text query from the user, these image embeddings are ranked based on cosine similarity. Top matches are displayed below. 
- - Example Queries: 컴퓨터하는 고양이 (Cat playing on a computer), 길 위에서 달리는 자동차 (Car on the road) - """ - ) - - query = st.text_input("한글 질문을 적어주세요 (Korean Text Query) :", value="컴퓨터하는 고양이") - if st.button("질문 (Query)"): - st.markdown("""---""") - with st.spinner("Computing..."): - proc = processor( - text=[query], images=None, return_tensors="jax", padding=True - ) - vec = np.asarray(model.get_text_features(**proc)) - ids, dists = index.knnQuery(vec, k=10) - result_files = map(lambda id: files[id], ids) - result_imgs, result_captions = [], [] - for file, dist in zip(result_files, dists): - result_imgs.append(plt.imread(os.path.join(images_directory, file))) - result_captions.append("Score: {:.3f}".format(1.0 - dist)) - - st.image(result_imgs[:3], caption=result_captions[:3], width=200) - st.image(result_imgs[3:6], caption=result_captions[3:6], width=200) - st.image(result_imgs[6:9], caption=result_captions[6:9], width=200) - st.image(result_imgs[9:], caption=result_captions[9:], width=200) diff --git a/spaces/flowers-team/Interactive_DeepRL_Demo/js/bodies/swimmers/fish_body.js b/spaces/flowers-team/Interactive_DeepRL_Demo/js/bodies/swimmers/fish_body.js deleted file mode 100644 index 6d5b5563ff5141a614e5ed3e8c8d241ba1770465..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/Interactive_DeepRL_Demo/js/bodies/swimmers/fish_body.js +++ /dev/null @@ -1,248 +0,0 @@ -// Head -HULL_POLYGON = [ - [-20, +12], [+6, +12], - [+15, +4], [+15, -4], - [+6, -12], [-20, -12] -]; - -BODY_P1 = [ - [-8, +9], [+8, +12], - [+8, -12], [-8, -9] -]; - -BODY_P2 = [ - [-8, +4], [+8, +9], - [+8, -9], [-8, -4] -]; - -// Tail -BODY_P3 = [ - [-4, +2], [+4, +4], - [+4, -4], [-4, -2] -]; - -FIN = [ - [-1, -10], [-1, +10], - [+1, +10], [+1, -10] -]; - -HULL_BOTTOM_WIDTH = 35; -const SPEED = 6; - -/** - * @classdesc Fish morphology. - */ -class FishBody extends SwimmerAbstractBody { - /** - * @constructor - * @param scale {number} - Scale of the environment - * @param motors_torque {number} - * @param density {number} - Density of the agent's body. 
- * @param nb_steps_outside_water {number} - */ - constructor(scale, motors_torque=80, density, nb_steps_outside_water=600) { - super(scale, motors_torque, density, nb_steps_outside_water); - this.TORQUE_PENALTY = 0.00035; - - this.AGENT_WIDTH = HULL_BOTTOM_WIDTH / this.SCALE; - this.AGENT_HEIGHT = 18 / this.SCALE; - this.AGENT_CENTER_HEIGHT = 9 / this.SCALE; - - this.remove_reward_on_head_angle = true; - - this.fins = []; - this.tail = null; - } - - draw(world, init_x, init_y){ - - let vertices; - let rjd; - let joint_motor; - - // HULL - let hull_fd = new b2.FixtureDef(); - hull_fd.shape = new b2.PolygonShape(); - vertices = []; - for(let vertex of HULL_POLYGON){ - vertices.push(new b2.Vec2(vertex[0] / this.SCALE, vertex[1] / this.SCALE)); - } - hull_fd.shape.Set(vertices, HULL_POLYGON.length); - hull_fd.density = this.DENSITY; - hull_fd.friction = 0.1; - hull_fd.filter.categoryBits = 0x20; - hull_fd.filter.maskBits = 0x000F; // 0.99 bouncy - - let hull_bd = new b2.BodyDef(); - hull_bd.type = b2.Body.b2_dynamicBody; - hull_bd.position.Set(init_x, init_y); - let hull = world.CreateBody(hull_bd); - hull.CreateFixture(hull_fd); - hull.color1 = "#806682"; // [0.5, 0.4, 0.9] - hull.color2 = "#4D4D80"; - hull.SetUserData(new CustomBodyUserData(true, false, "head")); - this.body_parts.push(hull); - this.reference_head_object = hull; - - // BODY_P1 - let body_p1_x = init_x - 35 / 2 / this.SCALE - 16 / 2 / this.SCALE; - let body_p1_fd = new b2.FixtureDef(); - body_p1_fd.shape = new b2.PolygonShape(); - vertices = []; - for(let vertex of BODY_P1){ - vertices.push(new b2.Vec2(vertex[0] / this.SCALE, vertex[1] / this.SCALE)); - } - body_p1_fd.shape.Set(vertices, BODY_P1.length); - body_p1_fd.density = this.DENSITY; - body_p1_fd.restitution = 0.0; - body_p1_fd.filter.categoryBits = 0x20; - body_p1_fd.filter.maskBits = 0x000F; // 0.99 bouncy - - let body_p1_bd = new b2.BodyDef(); - body_p1_bd.type = b2.Body.b2_dynamicBody; - body_p1_bd.position.Set(body_p1_x, init_y); - let body_p1 = world.CreateBody(body_p1_bd); - body_p1.CreateFixture(body_p1_fd); - body_p1.color1 = "#806682"; // [0.5, 0.4, 0.9] - body_p1.color2 = "#4D4D80"; - body_p1.SetUserData(new CustomBodyUserData(true, false, "body")); - this.body_parts.push(body_p1); - - // Revolute joint between HULL and BODY_P1 - rjd = new b2.RevoluteJointDef(); - rjd.Initialize(hull, body_p1, new b2.Vec2(init_x - 35 / 2 / this.SCALE, init_y)); - rjd.enableMotor = true; - rjd.enableLimit = true; - rjd.maxMotorTorque = this.MOTORS_TORQUE; - rjd.motorSpeed = 1; - rjd.lowerAngle = -0.1 * Math.PI; - rjd.upperAngle = 0.2 * Math.PI; - joint_motor = world.CreateJoint(rjd); - joint_motor.SetUserData(new CustomMotorUserData("neck", SPEED, true, 0.0, body_p1)); - this.motors.push(joint_motor); - - // BODY_P2 - let body_p2_x = body_p1_x - 16 / 2 / this.SCALE - 16 / 2 / this.SCALE; - let body_p2_fd = new b2.FixtureDef(); - body_p2_fd.shape = new b2.PolygonShape(); - vertices = []; - for(let vertex of BODY_P2){ - vertices.push(new b2.Vec2(vertex[0] / this.SCALE, vertex[1] / this.SCALE)); - } - body_p2_fd.shape.Set(vertices, BODY_P2.length); - body_p2_fd.density = this.DENSITY; - body_p2_fd.restitution = 0.0; - body_p2_fd.filter.categoryBits = 0x20; - body_p2_fd.filter.maskBits = 0x000F; - - let body_p2_bd = new b2.BodyDef(); - body_p2_bd.type = b2.Body.b2_dynamicBody; - body_p2_bd.position.Set(body_p2_x, init_y); - let body_p2 = world.CreateBody(body_p2_bd); - body_p2.CreateFixture(body_p2_fd); - body_p2.color1 = "#806682"; // [0.5, 0.4, 0.9] - body_p2.color2 = 
"#4D4D80"; - body_p2.SetUserData(new CustomBodyUserData(true, false, "body")); - this.body_parts.push(body_p2); - - // Revolute joint between BODY_P1 and BODY_P2 - rjd = new b2.RevoluteJointDef(); - rjd.Initialize(body_p1, body_p2, new b2.Vec2(body_p1_x - 16 / 2 / this.SCALE, init_y)); - rjd.enableMotor = true; - rjd.enableLimit = true; - rjd.maxMotorTorque = this.MOTORS_TORQUE; - rjd.motorSpeed = 1; - rjd.lowerAngle = -0.15 * Math.PI; - rjd.upperAngle = 0.15 * Math.PI; - joint_motor = world.CreateJoint(rjd); - joint_motor.SetUserData(new CustomMotorUserData("hip", SPEED, true, 0.0, body_p2)); - this.motors.push(joint_motor); - - // BODY_P3 - TAIL - let body_p3_x = body_p2_x - 16 / 2 / this.SCALE - 8 / 2 / this.SCALE; - let body_p3_fd = new b2.FixtureDef(); - body_p3_fd.shape = new b2.PolygonShape(); - vertices = []; - for(let vertex of BODY_P3){ - vertices.push(new b2.Vec2(vertex[0] / this.SCALE, vertex[1] / this.SCALE)); - } - body_p3_fd.shape.Set(vertices, BODY_P3.length); - body_p3_fd.density = this.DENSITY; - body_p3_fd.restitution = 0.0; - body_p3_fd.filter.categoryBits = 0x20; - body_p3_fd.filter.maskBits = 0x000F; - - let body_p3_bd = new b2.BodyDef(); - body_p3_bd.type = b2.Body.b2_dynamicBody; - body_p3_bd.position.Set(body_p3_x, init_y); - let body_p3 = world.CreateBody(body_p3_bd); - body_p3.CreateFixture(body_p3_fd); - body_p3.color1 = "#806682"; // [0.5, 0.4, 0.9] - body_p3.color2 = "#4D4D80"; - body_p3.SetUserData(new CustomBodyUserData(true, false, "body")); - this.body_parts.push(body_p3); - this.tail = body_p3; - - // Revolute joint between BODY_P2 and BODY_P3 - rjd = new b2.RevoluteJointDef(); - rjd.Initialize(body_p2, body_p3, new b2.Vec2(body_p2_x - 16 / 2 / this.SCALE, init_y)); - rjd.enableMotor = true; - rjd.enableLimit = true; - rjd.maxMotorTorque = this.MOTORS_TORQUE; - rjd.motorSpeed = 1; - rjd.lowerAngle = -0.3 * Math.PI; - rjd.upperAngle = 0.3 * Math.PI; - joint_motor = world.CreateJoint(rjd); - joint_motor.SetUserData(new CustomMotorUserData("knee", SPEED, true, 0.0, body_p3)); - this.motors.push(joint_motor); - - // FINS - let fin_fd = new b2.FixtureDef(); - fin_fd.shape = new b2.PolygonShape(); - vertices = []; - for(let vertex of FIN){ - vertices.push(new b2.Vec2(vertex[0] / this.SCALE, vertex[1] / this.SCALE)); - } - fin_fd.shape.Set(vertices, FIN.length); - fin_fd.density = this.DENSITY; - fin_fd.restitution = 0.0; - fin_fd.filter.categoryBits = 0x20; - fin_fd.filter.maskBits = 0x000F; - - let fin_positions = [ - [init_x, init_y - 22 / 2 / this.SCALE + 0.2], - ]; - let fin_angle = -0.2 * Math.PI; - let middle_fin_x_distance = Math.sin(fin_angle) * 20 / 2 / this.SCALE; - let middle_fin_y_distance = Math.cos(fin_angle) * 20 / 2 / this.SCALE; - - for(let fin_pos of fin_positions){ - let current_fin_x = fin_pos[0] + middle_fin_x_distance; - let current_fin_y = fin_pos[1] - middle_fin_y_distance; - - let fin_bd = new b2.BodyDef(); - fin_bd.type = b2.Body.b2_dynamicBody; - fin_bd.position.Set(current_fin_x, current_fin_y); - let fin = world.CreateBody(fin_bd); - fin.CreateFixture(fin_fd); - fin.color1 = "#806682"; // [0.5, 0.4, 0.9] - fin.color2 = "#4D4D80"; - fin.SetUserData(new CustomBodyUserData(true, false, "fin")); - this.body_parts.push(fin); - this.fins.push(fin); - - // Revolute joint between HULL and FIN - rjd = new b2.RevoluteJointDef(); - rjd.Initialize(hull, fin, new b2.Vec2(fin_pos[0], fin_pos[1])); - rjd.enableMotor = true; - rjd.enableLimit = true; - rjd.maxMotorTorque = this.MOTORS_TORQUE; - rjd.motorSpeed = 1; - rjd.lowerAngle = -0.3 * Math.PI; 
- rjd.upperAngle = 0.2 * Math.PI; - joint_motor = world.CreateJoint(rjd); - joint_motor.SetUserData(new CustomMotorUserData("shoulder", SPEED, true, 0.0, fin)); - this.motors.push(joint_motor); - } - } -} \ No newline at end of file diff --git a/spaces/fuckyoudeki/AutoGPT/autogpt/memory/no_memory.py b/spaces/fuckyoudeki/AutoGPT/autogpt/memory/no_memory.py deleted file mode 100644 index 0371e96ae89f5eb88dae019a66351a229596ed7a..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/autogpt/memory/no_memory.py +++ /dev/null @@ -1,73 +0,0 @@ -"""A class that does not store any data. This is the default memory provider.""" -from __future__ import annotations - -from typing import Any - -from autogpt.memory.base import MemoryProviderSingleton - - -class NoMemory(MemoryProviderSingleton): - """ - A class that does not store any data. This is the default memory provider. - """ - - def __init__(self, cfg): - """ - Initializes the NoMemory provider. - - Args: - cfg: The config object. - - Returns: None - """ - pass - - def add(self, data: str) -> str: - """ - Adds a data point to the memory. No action is taken in NoMemory. - - Args: - data: The data to add. - - Returns: An empty string. - """ - return "" - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - NoMemory always returns None. - - Args: - data: The data to compare to. - - Returns: None - """ - return None - - def clear(self) -> str: - """ - Clears the memory. No action is taken in NoMemory. - - Returns: An empty string. - """ - return "" - - def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None: - """ - Returns all the data in the memory that is relevant to the given data. - NoMemory always returns None. - - Args: - data: The data to compare to. - num_relevant: The number of relevant data to return. - - Returns: None - """ - return None - - def get_stats(self): - """ - Returns: An empty dictionary as there are no stats in NoMemory. - """ - return {} diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Advanced SystemCare Pro 7.4.0.474 [ChingLiu] .rar The Ultimate Solution for PC Optimization and Maintenance.md b/spaces/gotiQspiryo/whisper-ui/examples/Advanced SystemCare Pro 7.4.0.474 [ChingLiu] .rar The Ultimate Solution for PC Optimization and Maintenance.md deleted file mode 100644 index 57c80e09d77ba0155b75931254f32815c4a378ac..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Advanced SystemCare Pro 7.4.0.474 [ChingLiu] .rar The Ultimate Solution for PC Optimization and Maintenance.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Advanced SystemCare Pro 7.4.0.474 [ChingLiu] .rar


        Download File ✒ ✒ ✒ https://urlgoal.com/2uyND8




        diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Lego The Incredibles PC Game Highly Compressed Repacked [MULTi13] Free Download-CODEX.md b/spaces/gotiQspiryo/whisper-ui/examples/Lego The Incredibles PC Game Highly Compressed Repacked [MULTi13] Free Download-CODEX.md deleted file mode 100644 index a2eb83242227eb471f1509e4b5b3f8f811fbbee4..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Lego The Incredibles PC Game Highly Compressed Repacked [MULTi13] Free Download-CODEX.md +++ /dev/null @@ -1,7 +0,0 @@ - -


        Lego The Incredibles PC Game Highly Compressed Repacked [MULTi13] Free Download-CODEX


        DOWNLOADhttps://urlgoal.com/2uyNyN



        \ No newline at end of file diff --git a/spaces/gradio/HuBERT/examples/pointer_generator/README.md b/spaces/gradio/HuBERT/examples/pointer_generator/README.md deleted file mode 100644 index 60965708254aae2174812ea6686a9807825b7fb6..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/pointer_generator/README.md +++ /dev/null @@ -1,82 +0,0 @@ -# Transformer with Pointer-Generator Network - -This page describes the `transformer_pointer_generator` model that incorporates -a pointing mechanism in the Transformer model that facilitates copying of input -words to the output. This architecture is described in [Enarvi et al. (2020)](https://www.aclweb.org/anthology/2020.nlpmc-1.4/). - -## Background - -The pointer-generator network was introduced in [See et al. (2017)](https://arxiv.org/abs/1704.04368) -for RNN encoder-decoder attention models. A similar mechanism can be -incorporated in a Transformer model by reusing one of the many attention -distributions for pointing. The attention distribution over the input words is -interpolated with the normal output distribution over the vocabulary words. This -allows the model to generate words that appear in the input, even if they don't -appear in the vocabulary, helping especially with small vocabularies. - -## Implementation - -The mechanism for copying out-of-vocabulary words from the input has been -implemented differently to See et al. In their [implementation](https://github.com/abisee/pointer-generator) -they convey the word identities through the model in order to be able to produce -words that appear in the input sequence but not in the vocabulary. A different -approach was taken in the Fairseq implementation to keep it self-contained in -the model file, avoiding any changes to the rest of the code base. Copying -out-of-vocabulary words is possible by pre-processing the input and -post-processing the output. This is described in detail in the next section. - -## Usage - -The training and evaluation procedure is outlined below. You can also find a -more detailed example for the XSum dataset on [this page](README.xsum.md). - -##### 1. Create a vocabulary and extend it with source position markers - -The pointing mechanism is especially helpful with small vocabularies, if we are -able to recover the identities of any out-of-vocabulary words that are copied -from the input. For this purpose, the model allows extending the vocabulary with -special tokens that can be used in place of `` tokens to identify different -input positions. For example, the user may add ``, ``, ``, -etc. to the end of the vocabulary, after the normal words. Below is an example -of how to create a vocabulary of 10000 most common words and add 1000 input -position markers. - -```bash -vocab_size=10000 -position_markers=1000 -export LC_ALL=C -cat train.src train.tgt | - tr -s '[:space:]' '\n' | - sort | - uniq -c | - sort -k1,1bnr -k2 | - head -n "$((vocab_size - 4))" | - awk '{ print $2 " " $1 }' >dict.pg.txt -python3 -c "[print(' 0'.format(n)) for n in range($position_markers)]" >>dict.pg.txt -``` - -##### 2. Preprocess the text data - -The idea is that any `` tokens in the text are replaced with `` if -it appears in the first input position, `` if it appears in the second -input position, and so on. This can be achieved using the `preprocess.py` script -that is provided in this directory. - -##### 3. 
Train a model - -The number of these special tokens is given to the model with the -`--source-position-markers` argument—the model simply maps all of these to the -same word embedding as ``. - -The attention distribution that is used for pointing is selected using the -`--alignment-heads` and `--alignment-layer` command-line arguments in the same -way as with the `transformer_align` model. - -##### 4. Generate text and postprocess it - -When using the model to generate text, you want to preprocess the input text in -the same way that training data was processed, replacing out-of-vocabulary words -with `` tokens. If any of these tokens are copied to the output, the -actual words can be retrieved from the unprocessed input text. Any `` -token should be replaced with the word at position N in the original input -sequence. This can be achieved using the `postprocess.py` script. diff --git a/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode.sh b/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode.sh deleted file mode 100644 index b34c5b6e0688914a53515162f817a93617b609e5..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode.sh +++ /dev/null @@ -1,37 +0,0 @@ -#!/bin/bash - -split="dev_other" -ref_txt="" # ground truth transcript path -psd_txt="" # pseudo transcript path -get_best_wer=true -dec_name="decode" -graph_name="graph" -kenlm_path=/checkpoint/abaevski/data/speech/libri/librispeech_lm_novox.phnc_o6.bin - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -exp_root=$1 -unsup_args="" -if [ $# -ge 2 ]; then - unsup_args=$2 -fi - -set -eu - -if [ ! -z $ref_txt ] && $get_best_wer; then - echo "==== WER w.r.t. real transcript (select based on unsupervised metric)" - for x in $exp_root/*/${dec_name}_${split}*; do - lang=$(dirname $x)/$graph_name - - ( - for tra in $x/scoring/*.tra; do - cat $tra | utils/int2sym.pl -f 2- $lang/words.txt | sed 's:::g' | sed 's:::g' > $tra.txt - python local/unsup_select.py $psd_txt $tra.txt --kenlm_path $kenlm_path --gt_tra $ref_txt $unsup_args - done 2>/dev/null | grep "score=" | sed 's/=/ /g' | sed 's/;//g' | sort -k3n | head -n1 - ) & - done -fi -wait - diff --git a/spaces/gradio/HuBERT/fairseq/clib/libbleu/libbleu.cpp b/spaces/gradio/HuBERT/fairseq/clib/libbleu/libbleu.cpp deleted file mode 100644 index 3cf2d65b6d16e19ea299ebe43c9c25e3481d4524..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/clib/libbleu/libbleu.cpp +++ /dev/null @@ -1,141 +0,0 @@ -/** - * Copyright 2017-present, Facebook, Inc. - * All rights reserved. - * - * This source code is licensed under the license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -#include -#include -#include -#include - -typedef struct -{ - size_t reflen; - size_t predlen; - size_t match1; - size_t count1; - size_t match2; - size_t count2; - size_t match3; - size_t count3; - size_t match4; - size_t count4; -} bleu_stat; - -// left trim (remove pad) -void bleu_ltrim(size_t* len, int** sent, int pad) { - size_t start = 0; - while(start < *len) { - if (*(*sent + start) != pad) { break; } - start++; - } - *sent += start; - *len -= start; -} - -// right trim remove (eos) -void bleu_rtrim(size_t* len, int** sent, int pad, int eos) { - size_t end = *len - 1; - while (end > 0) { - if (*(*sent + end) != eos && *(*sent + end) != pad) { break; } - end--; - } - *len = end + 1; -} - -// left and right trim -void bleu_trim(size_t* len, int** sent, int pad, int eos) { - bleu_ltrim(len, sent, pad); - bleu_rtrim(len, sent, pad, eos); -} - -size_t bleu_hash(int len, int* data) { - size_t h = 14695981039346656037ul; - size_t prime = 0x100000001b3; - char* b = (char*) data; - size_t blen = sizeof(int) * len; - - while (blen-- > 0) { - h ^= *b++; - h *= prime; - } - - return h; -} - -void bleu_addngram( - size_t *ntotal, size_t *nmatch, size_t n, - size_t reflen, int* ref, size_t predlen, int* pred) { - - if (predlen < n) { return; } - - predlen = predlen - n + 1; - (*ntotal) += predlen; - - if (reflen < n) { return; } - - reflen = reflen - n + 1; - - std::map count; - while (predlen > 0) { - size_t w = bleu_hash(n, pred++); - count[w]++; - predlen--; - } - - while (reflen > 0) { - size_t w = bleu_hash(n, ref++); - if (count[w] > 0) { - (*nmatch)++; - count[w] -=1; - } - reflen--; - } -} - -extern "C" { - -#ifdef _WIN64 -__declspec(dllexport) -#endif -void bleu_zero_init(bleu_stat* stat) { - std::memset(stat, 0, sizeof(bleu_stat)); -} - -#ifdef _WIN64 -__declspec(dllexport) -#endif -void bleu_one_init(bleu_stat* stat) { - bleu_zero_init(stat); - stat->count1 = 0; - stat->count2 = 1; - stat->count3 = 1; - stat->count4 = 1; - stat->match1 = 0; - stat->match2 = 1; - stat->match3 = 1; - stat->match4 = 1; -} - -#ifdef _WIN64 -__declspec(dllexport) -#endif -void bleu_add( - bleu_stat* stat, - size_t reflen, int* ref, size_t predlen, int* pred, int pad, int eos) { - - bleu_trim(&reflen, &ref, pad, eos); - bleu_trim(&predlen, &pred, pad, eos); - stat->reflen += reflen; - stat->predlen += predlen; - - bleu_addngram(&stat->count1, &stat->match1, 1, reflen, ref, predlen, pred); - bleu_addngram(&stat->count2, &stat->match2, 2, reflen, ref, predlen, pred); - bleu_addngram(&stat->count3, &stat->match3, 3, reflen, ref, predlen, pred); - bleu_addngram(&stat->count4, &stat->match4, 4, reflen, ref, predlen, pred); -} - -} diff --git a/spaces/gradio/HuBERT/fairseq/optim/sgd.py b/spaces/gradio/HuBERT/fairseq/optim/sgd.py deleted file mode 100644 index 8e34fb99a18fff12ab76be5894a84cbbb2f48176..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/optim/sgd.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.optim - -from . 
import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("sgd") -class SGD(LegacyFairseqOptimizer): - def __init__(self, args, params): - super().__init__(args) - self._optimizer = torch.optim.SGD(params, **self.optimizer_config) - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--momentum', default=0.0, type=float, metavar='M', - help='momentum factor') - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - """ - return { - "lr": self.args.lr[0], - "momentum": self.args.momentum, - "weight_decay": self.args.weight_decay, - } - - @property - def supports_flat_params(self): - return True diff --git a/spaces/gradio/sepia_filter/README.md b/spaces/gradio/sepia_filter/README.md deleted file mode 100644 index e17ec9f56e2d4f7da119334085869c5eaa4c5887..0000000000000000000000000000000000000000 --- a/spaces/gradio/sepia_filter/README.md +++ /dev/null @@ -1,12 +0,0 @@ - ---- -title: sepia_filter -emoji: 🔥 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 4.1.2 -app_file: run.py -pinned: false -hf_oauth: true ---- diff --git a/spaces/h2oai/wave-tour/examples/table_filter.py b/spaces/h2oai/wave-tour/examples/table_filter.py deleted file mode 100644 index 817dacdaaf356e2207335ed38566d8fa4e30173a..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/table_filter.py +++ /dev/null @@ -1,59 +0,0 @@ -# Table / Filter -# Enable filtering values for specific columns. -# #table -# --- -import random -from faker import Faker -from h2o_wave import main, app, Q, ui - -fake = Faker() - -_id = 0 - - -class Issue: - def __init__(self, text: str, status: str, progress: float, icon: str, notifications: str): - global _id - _id += 1 - self.id = f'I{_id}' - self.text = text - self.status = status - self.views = 0 - self.progress = progress - self.icon = icon - self.notifications = notifications - - -# Create some issues -issues = [ - Issue( - text=fake.sentence(), - status=('Closed' if i % 2 == 0 else 'Open'), - progress=random.random(), - icon=('BoxCheckmarkSolid' if random.random() > 0.5 else 'BoxMultiplySolid'), - notifications=('Off' if random.random() > 0.5 else 'On')) for i in range(100) -] - -# Create columns for our issue table. 
-columns = [ - ui.table_column(name='text', label='Issue'), - ui.table_column(name='status', label='Status', filterable=True), - ui.table_column(name='notifications', label='Notifications', filterable=True), - ui.table_column(name='done', label='Done', cell_type=ui.icon_table_cell_type()), - ui.table_column(name='views', label='Views'), - ui.table_column(name='progress', label='Progress', cell_type=ui.progress_table_cell_type()), -] - - -@app('/demo') -async def serve(q: Q): - q.page['form'] = ui.form_card(box='1 1 -1 7', items=[ - ui.table( - name='issues', - columns=columns, - rows=[ui.table_row( - name=issue.id, - cells=[issue.text, issue.status, issue.notifications, issue.icon, str(issue.views), str(issue.progress)]) for issue in issues] - ) - ]) - await q.page.save() diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/setup.py b/spaces/hamacojr/SAM-CAT-Seg/open_clip/setup.py deleted file mode 100644 index 00ab400a6679904cc5009ee595738f2e21dfaa14..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/setup.py +++ /dev/null @@ -1,61 +0,0 @@ -""" Setup -""" -from setuptools import setup, find_packages -from codecs import open -from os import path - -here = path.abspath(path.dirname(__file__)) - -# Get the long description from the README file -with open(path.join(here, 'README.md'), encoding='utf-8') as f: - long_description = f.read() - -def _read_reqs(relpath): - fullpath = path.join(path.dirname(__file__), relpath) - with open(fullpath) as f: - return [s.strip() for s in f.readlines() if (s.strip() and not s.startswith("#"))] - -REQUIREMENTS = _read_reqs("requirements.txt") -TRAINING_REQUIREMENTS = _read_reqs("requirements-training.txt") - -exec(open('src/open_clip/version.py').read()) -setup( - name='open_clip_torch', - version=__version__, - description='OpenCLIP', - long_description=long_description, - long_description_content_type='text/markdown', - url='https://github.com/mlfoundations/open_clip', - author='', - author_email='', - classifiers=[ - # How mature is this project? Common values are - # 3 - Alpha - # 4 - Beta - # 5 - Production/Stable - 'Development Status :: 3 - Alpha', - 'Intended Audience :: Education', - 'Intended Audience :: Science/Research', - 'License :: OSI Approved :: Apache Software License', - 'Programming Language :: Python :: 3.7', - 'Programming Language :: Python :: 3.8', - 'Programming Language :: Python :: 3.9', - 'Programming Language :: Python :: 3.10', - 'Topic :: Scientific/Engineering', - 'Topic :: Scientific/Engineering :: Artificial Intelligence', - 'Topic :: Software Development', - 'Topic :: Software Development :: Libraries', - 'Topic :: Software Development :: Libraries :: Python Modules', - ], - - # Note that this is a string of words separated by whitespace, not a list. 
- keywords='CLIP pretrained', - package_dir={'': 'src'}, - packages=find_packages(where='src'), - include_package_data=True, - install_requires=REQUIREMENTS, - extras_require={ - "training": TRAINING_REQUIREMENTS, - }, - python_requires='>=3.7', -) diff --git a/spaces/hank1996/yolopv2/lib/dataset/bdd.py b/spaces/hank1996/yolopv2/lib/dataset/bdd.py deleted file mode 100644 index 88f659028cc058285408bbbd26f9ed38083eb3ea..0000000000000000000000000000000000000000 --- a/spaces/hank1996/yolopv2/lib/dataset/bdd.py +++ /dev/null @@ -1,85 +0,0 @@ - - -import numpy as np -import json - -from .AutoDriveDataset import AutoDriveDataset -from .convert import convert, id_dict, id_dict_single -from tqdm import tqdm - -single_cls = True # just detect vehicle - -class BddDataset(AutoDriveDataset): - def __init__(self, cfg, is_train, inputsize, transform=None): - super().__init__(cfg, is_train, inputsize, transform) - self.db = self._get_db() - self.cfg = cfg - - def _get_db(self): - """ - get database from the annotation file - Inputs: - Returns: - gt_db: (list)database [a,b,c,...] - a: (dictionary){'image':, 'information':, ......} - image: image path - mask: path of the segmetation label - label: [cls_id, center_x//256, center_y//256, w//256, h//256] 256=IMAGE_SIZE - """ - print('building database...') - gt_db = [] - height, width = self.shapes - for mask in tqdm(list(self.mask_list)): - mask_path = str(mask) - label_path = mask_path.replace(str(self.mask_root), str(self.label_root)).replace(".png", ".json") - image_path = mask_path.replace(str(self.mask_root), str(self.img_root)).replace(".png", ".jpg") - lane_path = mask_path.replace(str(self.mask_root), str(self.lane_root)) - with open(label_path, 'r') as f: - label = json.load(f) - data = label['frames'][0]['objects'] - data = self.filter_data(data) - gt = np.zeros((len(data), 5)) - for idx, obj in enumerate(data): - category = obj['category'] - if category == "traffic light": - color = obj['attributes']['trafficLightColor'] - category = "tl_" + color - if category in id_dict.keys(): - x1 = float(obj['box2d']['x1']) - y1 = float(obj['box2d']['y1']) - x2 = float(obj['box2d']['x2']) - y2 = float(obj['box2d']['y2']) - cls_id = id_dict[category] - if single_cls: - cls_id=0 - gt[idx][0] = cls_id - box = convert((width, height), (x1, x2, y1, y2)) - gt[idx][1:] = list(box) - - - rec = [{ - 'image': image_path, - 'label': gt, - 'mask': mask_path, - 'lane': lane_path - }] - - gt_db += rec - print('database build finish') - return gt_db - - def filter_data(self, data): - remain = [] - for obj in data: - if 'box2d' in obj.keys(): # obj.has_key('box2d'): - if single_cls: - if obj['category'] in id_dict_single.keys(): - remain.append(obj) - else: - remain.append(obj) - return remain - - def evaluate(self, cfg, preds, output_dir, *args, **kwargs): - """ - """ - pass diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/imports.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/imports.py deleted file mode 100644 index 1cddab30a505c03998665c8be9d84e614da43f77..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/imports.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-import torch -import sys -# if torch._six.PY37: -if sys.version_info[0] >= 3: - import importlib - import importlib.util - import sys - - - # from https://stackoverflow.com/questions/67631/how-to-import-a-module-given-the-full-path?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa - def import_file(module_name, file_path, make_importable=False): - spec = importlib.util.spec_from_file_location(module_name, file_path) - module = importlib.util.module_from_spec(spec) - spec.loader.exec_module(module) - if make_importable: - sys.modules[module_name] = module - return module -else: - import imp - - def import_file(module_name, file_path, make_importable=None): - module = imp.load_source(module_name, file_path) - return module diff --git a/spaces/hdhzk/bingo/src/lib/hooks/use-at-bottom.tsx b/spaces/hdhzk/bingo/src/lib/hooks/use-at-bottom.tsx deleted file mode 100644 index d37c8cf4162adcb0064e08ecec24eb731416b045..0000000000000000000000000000000000000000 --- a/spaces/hdhzk/bingo/src/lib/hooks/use-at-bottom.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import * as React from 'react' - -export function useAtBottom(offset = 0) { - const [isAtBottom, setIsAtBottom] = React.useState(false) - - React.useEffect(() => { - const handleScroll = () => { - setIsAtBottom( - window.innerHeight + window.scrollY >= - document.body.offsetHeight - offset - ) - } - - window.addEventListener('scroll', handleScroll, { passive: true }) - handleScroll() - - return () => { - window.removeEventListener('scroll', handleScroll) - } - }, [offset]) - - return isAtBottom -} diff --git a/spaces/heiyubili/bingo/tailwind.config.js b/spaces/heiyubili/bingo/tailwind.config.js deleted file mode 100644 index 03da3c3c45be6983b9f5ffa6df5f1fd0870e9636..0000000000000000000000000000000000000000 --- a/spaces/heiyubili/bingo/tailwind.config.js +++ /dev/null @@ -1,48 +0,0 @@ -/** @type {import('tailwindcss').Config} */ -module.exports = { - content: [ - './src/pages/**/*.{js,ts,jsx,tsx,mdx}', - './src/components/**/*.{js,ts,jsx,tsx,mdx}', - './src/app/**/*.{js,ts,jsx,tsx,mdx}', - './src/ui/**/*.{js,ts,jsx,tsx,mdx}', - ], - "darkMode": "class", - theme: { - extend: { - colors: { - 'primary-blue': 'rgb(var(--color-primary-blue) / )', - secondary: 'rgb(var(--color-secondary) / )', - 'primary-background': 'rgb(var(--primary-background) / )', - 'primary-text': 'rgb(var(--primary-text) / )', - 'secondary-text': 'rgb(var(--secondary-text) / )', - 'light-text': 'rgb(var(--light-text) / )', - 'primary-border': 'rgb(var(--primary-border) / )', - }, - keyframes: { - slideDownAndFade: { - from: { opacity: 0, transform: 'translateY(-2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideLeftAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - slideUpAndFade: { - from: { opacity: 0, transform: 'translateY(2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideRightAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - }, - animation: { - slideDownAndFade: 'slideDownAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideLeftAndFade: 'slideLeftAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideUpAndFade: 'slideUpAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideRightAndFade: 'slideRightAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - }, - }, - }, - plugins: [require('@headlessui/tailwindcss'), require('tailwind-scrollbar')], -} diff --git 
a/spaces/hhhwmws/ChatHaruhi-GLMPro/README.md b/spaces/hhhwmws/ChatHaruhi-GLMPro/README.md deleted file mode 100644 index c6c8fec7d61f7fc5900b7c0ec6a52da0b27a3cde..0000000000000000000000000000000000000000 --- a/spaces/hhhwmws/ChatHaruhi-GLMPro/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatHaruhi GLMPro -emoji: 👀 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hlydecker/ImageBind_zeroshot_demo/data.py b/spaces/hlydecker/ImageBind_zeroshot_demo/data.py deleted file mode 100644 index 80c7aca83970707204355221217918a4b2337379..0000000000000000000000000000000000000000 --- a/spaces/hlydecker/ImageBind_zeroshot_demo/data.py +++ /dev/null @@ -1,350 +0,0 @@ -#!/usr/bin/env python3 -# Portions Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn as nn -import torchaudio -import logging - -from models.multimodal_preprocessors import SimpleTokenizer -from PIL import Image -from pytorchvideo import transforms as pv_transforms -from pytorchvideo.data.clip_sampling import ConstantClipsPerVideoSampler -from pytorchvideo.data.encoded_video import EncodedVideo - -from torchvision import transforms -from torchvision.transforms._transforms_video import NormalizeVideo - -DEFAULT_AUDIO_FRAME_SHIFT_MS = 10 # in milliseconds - -BPE_PATH = "bpe/bpe_simple_vocab_16e6.txt.gz" - - -def waveform2melspec(waveform, sample_rate, num_mel_bins, target_length): - # Based on https://github.com/YuanGongND/ast/blob/d7d8b4b8e06cdaeb6c843cdb38794c1c7692234c/src/dataloader.py#L102 - waveform -= waveform.mean() - fbank = torchaudio.compliance.kaldi.fbank( - waveform, - htk_compat=True, - sample_frequency=sample_rate, - use_energy=False, - window_type="hanning", - num_mel_bins=num_mel_bins, - dither=0.0, - frame_length=25, - frame_shift=DEFAULT_AUDIO_FRAME_SHIFT_MS, - ) - # Convert to [mel_bins, num_frames] shape - fbank = fbank.transpose(0, 1) - # Pad to target_length - n_frames = fbank.size(1) - p = target_length - n_frames - # if p is too large (say >20%), flash a warning - if abs(p) / n_frames > 0.2: - logging.warning( - "Large gap between audio n_frames(%d) and " - "target_length (%d). 
Is the audio_target_length " - "setting correct?", - n_frames, - target_length, - ) - # cut and pad - if p > 0: - fbank = torch.nn.functional.pad(fbank, (0, p), mode="constant", value=0) - elif p < 0: - fbank = fbank[:, 0:target_length] - # Convert to [1, mel_bins, num_frames] shape, essentially like a 1 - # channel image - fbank = fbank.unsqueeze(0) - return fbank - - -def get_clip_timepoints(clip_sampler, duration): - # Read out all clips in this video - all_clips_timepoints = [] - is_last_clip = False - end = 0.0 - while not is_last_clip: - start, end, _, _, is_last_clip = clip_sampler(end, duration, annotation=None) - all_clips_timepoints.append((start, end)) - return all_clips_timepoints - - -def load_and_transform_vision_data(image_paths, device): - if image_paths is None: - return None - - image_ouputs = [] - for image_path in image_paths: - data_transform = transforms.Compose( - [ - transforms.Resize( - 224, interpolation=transforms.InterpolationMode.BICUBIC - ), - transforms.CenterCrop(224), - transforms.ToTensor(), - transforms.Normalize( - mean=(0.48145466, 0.4578275, 0.40821073), - std=(0.26862954, 0.26130258, 0.27577711), - ), - ] - ) - with open(image_path, "rb") as fopen: - image = Image.open(fopen).convert("RGB") - - image = data_transform(image).to(device) - image_ouputs.append(image) - return torch.stack(image_ouputs, dim=0) - - -def load_and_transform_text(text, device): - if text is None: - return None - tokenizer = SimpleTokenizer(bpe_path=BPE_PATH) - tokens = [tokenizer(t).unsqueeze(0).to(device) for t in text] - tokens = torch.cat(tokens, dim=0) - return tokens - - -def load_and_transform_audio_data( - audio_paths, - device, - num_mel_bins=128, - target_length=204, - sample_rate=16000, - clip_duration=2, - clips_per_video=3, - mean=-4.268, - std=9.138, -): - if audio_paths is None: - return None - - audio_outputs = [] - clip_sampler = ConstantClipsPerVideoSampler( - clip_duration=clip_duration, clips_per_video=clips_per_video - ) - - for audio_path in audio_paths: - waveform, sr = torchaudio.load(audio_path) - if sample_rate != sr: - waveform = torchaudio.functional.resample( - waveform, orig_freq=sr, new_freq=sample_rate - ) - all_clips_timepoints = get_clip_timepoints( - clip_sampler, waveform.size(1) / sample_rate - ) - all_clips = [] - for clip_timepoints in all_clips_timepoints: - waveform_clip = waveform[ - :, - int(clip_timepoints[0] * sample_rate) : int( - clip_timepoints[1] * sample_rate - ), - ] - waveform_melspec = waveform2melspec( - waveform_clip, sample_rate, num_mel_bins, target_length - ) - all_clips.append(waveform_melspec) - - normalize = transforms.Normalize(mean=mean, std=std) - all_clips = [normalize(ac).to(device) for ac in all_clips] - - all_clips = torch.stack(all_clips, dim=0) - audio_outputs.append(all_clips) - - return torch.stack(audio_outputs, dim=0) - - -def get_clip_timepoints(clip_sampler, duration): - # Read out all clips in this video - all_clips_timepoints = [] - is_last_clip = False - end = 0.0 - while not is_last_clip: - start, end, _, _, is_last_clip = clip_sampler(end, duration, annotation=None) - all_clips_timepoints.append((start, end)) - return all_clips_timepoints - - -def crop_boxes(boxes, x_offset, y_offset): - """ - Peform crop on the bounding boxes given the offsets. - Args: - boxes (ndarray or None): bounding boxes to peform crop. The dimension - is `num boxes` x 4. - x_offset (int): cropping offset in the x axis. - y_offset (int): cropping offset in the y axis. 
- Returns: - cropped_boxes (ndarray or None): the cropped boxes with dimension of - `num boxes` x 4. - """ - cropped_boxes = boxes.copy() - cropped_boxes[:, [0, 2]] = boxes[:, [0, 2]] - x_offset - cropped_boxes[:, [1, 3]] = boxes[:, [1, 3]] - y_offset - - return cropped_boxes - - -def uniform_crop(images, size, spatial_idx, boxes=None, scale_size=None): - """ - Perform uniform spatial sampling on the images and corresponding boxes. - Args: - images (tensor): images to perform uniform crop. The dimension is - `num frames` x `channel` x `height` x `width`. - size (int): size of height and weight to crop the images. - spatial_idx (int): 0, 1, or 2 for left, center, and right crop if width - is larger than height. Or 0, 1, or 2 for top, center, and bottom - crop if height is larger than width. - boxes (ndarray or None): optional. Corresponding boxes to images. - Dimension is `num boxes` x 4. - scale_size (int): optinal. If not None, resize the images to scale_size before - performing any crop. - Returns: - cropped (tensor): images with dimension of - `num frames` x `channel` x `size` x `size`. - cropped_boxes (ndarray or None): the cropped boxes with dimension of - `num boxes` x 4. - """ - assert spatial_idx in [0, 1, 2] - ndim = len(images.shape) - if ndim == 3: - images = images.unsqueeze(0) - height = images.shape[2] - width = images.shape[3] - - if scale_size is not None: - if width <= height: - width, height = scale_size, int(height / width * scale_size) - else: - width, height = int(width / height * scale_size), scale_size - images = torch.nn.functional.interpolate( - images, - size=(height, width), - mode="bilinear", - align_corners=False, - ) - - y_offset = int(math.ceil((height - size) / 2)) - x_offset = int(math.ceil((width - size) / 2)) - - if height > width: - if spatial_idx == 0: - y_offset = 0 - elif spatial_idx == 2: - y_offset = height - size - else: - if spatial_idx == 0: - x_offset = 0 - elif spatial_idx == 2: - x_offset = width - size - cropped = images[:, :, y_offset : y_offset + size, x_offset : x_offset + size] - cropped_boxes = crop_boxes(boxes, x_offset, y_offset) if boxes is not None else None - if ndim == 3: - cropped = cropped.squeeze(0) - return cropped, cropped_boxes - - -class SpatialCrop(nn.Module): - """ - Convert the video into 3 smaller clips spatially. Must be used after the - temporal crops to get spatial crops, and should be used with - -2 in the spatial crop at the slowfast augmentation stage (so full - frames are passed in here). Will return a larger list with the - 3x spatial crops as well. - """ - - def __init__(self, crop_size: int = 224, num_crops: int = 3): - super().__init__() - self.crop_size = crop_size - if num_crops == 3: - self.crops_to_ext = [0, 1, 2] - self.flipped_crops_to_ext = [] - elif num_crops == 1: - self.crops_to_ext = [1] - self.flipped_crops_to_ext = [] - else: - raise NotImplementedError("Nothing else supported yet") - - def forward(self, videos): - """ - Args: - videos: A list of C, T, H, W videos. - Returns: - videos: A list with 3x the number of elements. Each video converted - to C, T, H', W' by spatial cropping. 
- """ - assert isinstance(videos, list), "Must be a list of videos after temporal crops" - assert all([video.ndim == 4 for video in videos]), "Must be (C,T,H,W)" - res = [] - for video in videos: - for spatial_idx in self.crops_to_ext: - res.append(uniform_crop(video, self.crop_size, spatial_idx)[0]) - if not self.flipped_crops_to_ext: - continue - flipped_video = transforms.functional.hflip(video) - for spatial_idx in self.flipped_crops_to_ext: - res.append(uniform_crop(flipped_video, self.crop_size, spatial_idx)[0]) - return res - - -def load_and_transform_video_data( - video_paths, - device, - clip_duration=2, - clips_per_video=5, - sample_rate=16000, -): - if video_paths is None: - return None - - video_outputs = [] - video_transform = transforms.Compose( - [ - pv_transforms.ShortSideScale(224), - NormalizeVideo( - mean=(0.48145466, 0.4578275, 0.40821073), - std=(0.26862954, 0.26130258, 0.27577711), - ), - ] - ) - - clip_sampler = ConstantClipsPerVideoSampler( - clip_duration=clip_duration, clips_per_video=clips_per_video - ) - frame_sampler = pv_transforms.UniformTemporalSubsample(num_samples=clip_duration) - - for video_path in video_paths: - video = EncodedVideo.from_path( - video_path, - decoder="decord", - decode_audio=False, - **{"sample_rate": sample_rate}, - ) - - all_clips_timepoints = get_clip_timepoints(clip_sampler, video.duration) - - all_video = [] - for clip_timepoints in all_clips_timepoints: - # Read the clip, get frames - clip = video.get_clip(clip_timepoints[0], clip_timepoints[1]) - if clip is None: - raise ValueError("No clip found") - video_clip = frame_sampler(clip["video"]) - video_clip = video_clip / 255.0 # since this is float, need 0-1 - - all_video.append(video_clip) - - all_video = [video_transform(clip) for clip in all_video] - all_video = SpatialCrop(224, num_crops=3)(all_video) - - all_video = torch.stack(all_video, dim=0) - video_outputs.append(all_video) - - return torch.stack(video_outputs, dim=0).to(device) diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/architectural_variants/nnUNetTrainerV2_GN.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/architectural_variants/nnUNetTrainerV2_GN.py deleted file mode 100644 index 27cfe29b59d7357b2fdca0edf0e1dd2bfc871be1..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/architectural_variants/nnUNetTrainerV2_GN.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-import torch -from nnunet.network_architecture.generic_UNet import Generic_UNet -from nnunet.network_architecture.initialization import InitWeights_He -from nnunet.training.network_training.nnUNetTrainerV2 import nnUNetTrainerV2 -from nnunet.network_architecture.custom_modules.helperModules import MyGroupNorm -from nnunet.utilities.nd_softmax import softmax_helper -from torch import nn - - -class nnUNetTrainerV2_GN(nnUNetTrainerV2): - def initialize_network(self): - """ - changed deep supervision to False - :return: - """ - if self.threeD: - conv_op = nn.Conv3d - dropout_op = nn.Dropout3d - norm_op = MyGroupNorm - - else: - conv_op = nn.Conv2d - dropout_op = nn.Dropout2d - norm_op = MyGroupNorm - - norm_op_kwargs = {'eps': 1e-5, 'affine': True, 'num_groups': 8} - dropout_op_kwargs = {'p': 0, 'inplace': True} - net_nonlin = nn.LeakyReLU - net_nonlin_kwargs = {'negative_slope': 1e-2, 'inplace': True} - self.network = Generic_UNet(self.num_input_channels, self.base_num_features, self.num_classes, - len(self.net_num_pool_op_kernel_sizes), - self.conv_per_stage, 2, conv_op, norm_op, norm_op_kwargs, dropout_op, dropout_op_kwargs, - net_nonlin, net_nonlin_kwargs, True, False, lambda x: x, InitWeights_He(1e-2), - self.net_num_pool_op_kernel_sizes, self.net_conv_kernel_sizes, False, True, True) - if torch.cuda.is_available(): - self.network.cuda() - self.network.inference_apply_nonlin = softmax_helper diff --git a/spaces/hsdhgds/htyjuietryt/Dockerfile b/spaces/hsdhgds/htyjuietryt/Dockerfile deleted file mode 100644 index c9409e2a1656a1e6331c97f285bde00967ce6c84..0000000000000000000000000000000000000000 --- a/spaces/hsdhgds/htyjuietryt/Dockerfile +++ /dev/null @@ -1,16 +0,0 @@ -# Use the official Node.js image as the base image -FROM node:lts-alpine3.18 - -# Set the working directory -WORKDIR /app - -# Copy the application files into the container -COPY . .
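# NOTE (added, not part of the original Dockerfile): copying the whole build context
# before `npm install` invalidates Docker's dependency layer cache on any source
# change. A common refactor (a sketch, not the author's file) caches dependencies
# separately:
#   COPY package*.json ./
#   RUN npm install
#   COPY . .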
- -# EXPOSE 3000 - -# 安装应用程序的依赖 -RUN npm install - -# 设置默认的命令,即启动应用程序 -CMD ["npm", "start"] diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/app.py b/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/app.py deleted file mode 100644 index ff6bb0be284af41833f9804e0a94c49224b42bfb..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/app.py +++ /dev/null @@ -1,437 +0,0 @@ -import io -import os - -from huggingface_hub import Repository - -from pathlib import Path -import uvicorn -from fastapi import FastAPI, HTTPException, UploadFile, Depends, status, Request -from fastapi.staticfiles import StaticFiles -from fastapi.middleware.cors import CORSMiddleware -from fastapi_utils.tasks import repeat_every - -import numpy as np -import torch -from torch import autocast -from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler -from diffusers.models import AutoencoderKL - -from PIL import Image -import gradio as gr -import skimage -import skimage.measure -from utils import * -import boto3 -import magic -import sqlite3 -import requests -import shortuuid -import re -import time -import subprocess - -AWS_ACCESS_KEY_ID = os.getenv('AWS_ACCESS_KEY_ID') -AWS_SECRET_KEY = os.getenv('AWS_SECRET_KEY') -AWS_S3_BUCKET_NAME = os.getenv('AWS_S3_BUCKET_NAME') -LIVEBLOCKS_SECRET = os.environ.get("LIVEBLOCKS_SECRET") -HF_TOKEN = os.environ.get("API_TOKEN") or True - -FILE_TYPES = { - 'image/png': 'png', - 'image/jpeg': 'jpg', - 'imager/webp': 'webp', -} -S3_DATA_FOLDER = Path("sd-multiplayer-data") -ROOMS_DATA_DB = S3_DATA_FOLDER / "rooms_data.db" -ROOM_DB = Path("rooms.db") - -app = FastAPI() - -repo = Repository( - local_dir=S3_DATA_FOLDER, - repo_type="dataset", - clone_from="huggingface-projects/sd-multiplayer-data", - use_auth_token=True, -) - -if not ROOM_DB.exists(): - print("Creating database") - print("ROOM_DB", ROOM_DB) - db = sqlite3.connect(ROOM_DB) - with open(Path("schema.sql"), "r") as f: - db.executescript(f.read()) - db.commit() - db.close() - - -def get_room_db(): - db = sqlite3.connect(ROOM_DB, check_same_thread=False) - db.row_factory = sqlite3.Row - try: - yield db - except Exception: - db.rollback() - finally: - db.close() - - -def get_room_data_db(): - db = sqlite3.connect(ROOMS_DATA_DB, check_same_thread=False) - db.row_factory = sqlite3.Row - try: - yield db - except Exception: - db.rollback() - finally: - db.close() - - -s3 = boto3.client(service_name='s3', - aws_access_key_id=AWS_ACCESS_KEY_ID, - aws_secret_access_key=AWS_SECRET_KEY) -try: - SAMPLING_MODE = Image.Resampling.LANCZOS -except Exception as e: - SAMPLING_MODE = Image.LANCZOS - - -blocks = gr.Blocks().queue() -model = {} - -STATIC_MASK = Image.open("mask.png") - - -def sync_rooms_data_repo(): - subprocess.Popen("git fetch && git reset --hard origin/main", - cwd=S3_DATA_FOLDER, shell=True) - - -def get_model(): - if "inpaint" not in model: - scheduler = EulerAncestralDiscreteScheduler.from_pretrained( - "stabilityai/stable-diffusion-2-base", subfolder="scheduler") - inpaint = DiffusionPipeline.from_pretrained( - "radames/stable-diffusion-v2-inpainting", torch_dtype=torch.float16) - inpaint.scheduler = scheduler - inpaint = inpaint.to("cuda") - model["inpaint"] = inpaint - - return model["inpaint"] - - -# init model on startup -get_model() - - -async def run_outpaint( - input_image, - prompt_text, - strength, - guidance, - step, - fill_mode, - room_id, - image_key 
-): - inpaint = get_model() - sel_buffer = np.array(input_image) - img = sel_buffer[:, :, 0:3] - mask = sel_buffer[:, :, -1] - nmask = 255 - mask - process_size = 512 - negative_syntax = r'\<(.*?)\>' - prompt = re.sub(negative_syntax, ' ', prompt_text) - negative_prompt = ' '.join(re.findall(negative_syntax, prompt_text)) - print("prompt", prompt) - print("negative_prompt", negative_prompt) - if nmask.sum() < 1: - print("inpaiting with fixed Mask") - mask = np.array(STATIC_MASK)[:, :, 0] - img, mask = functbl[fill_mode](img, mask) - init_image = Image.fromarray(img) - mask = 255 - mask - mask = skimage.measure.block_reduce(mask, (8, 8), np.max) - mask = mask.repeat(8, axis=0).repeat(8, axis=1) - mask_image = Image.fromarray(mask) - elif mask.sum() > 0: - print("inpainting") - img, mask = functbl[fill_mode](img, mask) - init_image = Image.fromarray(img) - mask = 255 - mask - mask = skimage.measure.block_reduce(mask, (8, 8), np.max) - mask = mask.repeat(8, axis=0).repeat(8, axis=1) - mask_image = Image.fromarray(mask) - - # mask_image=mask_image.filter(ImageFilter.GaussianBlur(radius = 8)) - else: - print("text2image") - print("inpainting") - img, mask = functbl[fill_mode](img, mask) - init_image = Image.fromarray(img) - mask = 255 - mask - mask = skimage.measure.block_reduce(mask, (8, 8), np.max) - mask = mask.repeat(8, axis=0).repeat(8, axis=1) - mask_image = Image.fromarray(mask) - - # mask_image=mask_image.filter(ImageFilter.GaussianBlur(radius = 8)) - with autocast("cuda"): - output = inpaint( - prompt=prompt, - negative_prompt=negative_prompt, - image=init_image.resize( - (process_size, process_size), resample=SAMPLING_MODE - ), - mask_image=mask_image.resize((process_size, process_size)), - strength=strength, - num_inference_steps=step, - guidance_scale=guidance, - ) - print(output) - image = output["images"][0] - is_nsfw = False - if "nsfw_content_detected" in output: - is_nsfw = output["nsfw_content_detected"][0] - image_url = {} - - if not is_nsfw: - # print("not nsfw, uploading") - image_url = await upload_file(image, prompt + "NNOTN" + negative_prompt, room_id, image_key) - - params = { - "is_nsfw": is_nsfw, - "image": image_url - } - return params - - -with blocks as demo: - - with gr.Row(): - - with gr.Column(scale=3, min_width=270): - sd_prompt = gr.Textbox( - label="Prompt", placeholder="input your prompt here", lines=4 - ) - with gr.Column(scale=2, min_width=150): - sd_strength = gr.Slider( - label="Strength", minimum=0.0, maximum=1.0, value=0.75, step=0.01 - ) - with gr.Column(scale=1, min_width=150): - sd_step = gr.Number(label="Step", value=50, precision=0) - sd_guidance = gr.Number(label="Guidance", value=7.5) - with gr.Row(): - with gr.Column(scale=4, min_width=600): - init_mode = gr.Radio( - label="Init mode", - choices=[ - "patchmatch", - "edge_pad", - "cv2_ns", - "cv2_telea", - "gaussian", - "perlin", - ], - value="patchmatch", - type="value", - ) - - model_input = gr.Image(label="Input", type="pil", image_mode="RGBA") - room_id = gr.Textbox(label="Room ID") - image_key = gr.Textbox(label="image_key") - proceed_button = gr.Button("Proceed", elem_id="proceed") - params = gr.JSON() - - proceed_button.click( - fn=run_outpaint, - inputs=[ - model_input, - sd_prompt, - sd_strength, - sd_guidance, - sd_step, - init_mode, - room_id, - image_key - ], - outputs=[params], - ) - - -blocks.config['dev_mode'] = False - -app = gr.mount_gradio_app(app, blocks, "/gradio", - gradio_api_url="http://0.0.0.0:7860/gradio/") - - -def generateAuthToken(): - response = 
requests.get(f"https://liveblocks.io/api/authorize", - headers={"Authorization": f"Bearer {LIVEBLOCKS_SECRET}"}) - if response.status_code == 200: - data = response.json() - return data["token"] - else: - raise Exception(response.status_code, response.text) - - -def get_room_count(room_id: str): - response = requests.get( - f"https://api.liveblocks.io/v2/rooms/{room_id}/active_users", - headers={"Authorization": f"Bearer {LIVEBLOCKS_SECRET}", "Content-Type": "application/json"}) - if response.status_code == 200: - res = response.json() - if "data" in res: - return len(res["data"]) - else: - return 0 - raise Exception("Error getting room count") - - -@ app.on_event("startup") -@ repeat_every(seconds=100) -def sync_rooms(): - print("Syncing rooms active users") - try: - for db in get_room_db(): - rooms = db.execute("SELECT * FROM rooms").fetchall() - for row in rooms: - room_id = row["room_id"] - users_count = get_room_count(room_id) - cursor = db.cursor() - cursor.execute( - "UPDATE rooms SET users_count = ? WHERE room_id = ?", (users_count, room_id)) - db.commit() - except Exception as e: - print(e) - print("Rooms update failed") - - -@ app.on_event("startup") -@ repeat_every(seconds=300) -def sync_room_datq(): - print("Sync rooms data") - sync_rooms_data_repo() - - -@ app.get('/api/room_data/{room_id}') -async def get_rooms_data(room_id: str, start: str = None, end: str = None, db: sqlite3.Connection = Depends(get_room_data_db)): - print("Getting rooms data", room_id, start, end) - - if start is None and end is None: - rooms_rows = db.execute( - "SELECT key, prompt, time, x, y FROM rooms_data WHERE room_id = ? ORDER BY time", (room_id,)).fetchall() - elif end is None: - rooms_rows = db.execute("SELECT key, prompt, time, x, y FROM rooms_data WHERE room_id = ? AND time >= ? ORDER BY time", - (room_id, start)).fetchall() - elif start is None: - rooms_rows = db.execute("SELECT key, prompt, time, x, y FROM rooms_data WHERE room_id = ? AND time <= ? ORDER BY time", - (room_id, end)).fetchall() - else: - rooms_rows = db.execute("SELECT key, prompt, time, x, y FROM rooms_data WHERE room_id = ? AND time >= ? AND time <= ? 
ORDER BY time", - (room_id, start, end)).fetchall() - return rooms_rows - - -@ app.get('/api/rooms') -async def get_rooms(db: sqlite3.Connection = Depends(get_room_db)): - print("Getting rooms") - rooms = db.execute("SELECT * FROM rooms").fetchall() - return rooms - - -@ app.post('/api/auth') -async def autorize(request: Request): - data = await request.json() - room = data["room"] - payload = { - "userId": str(shortuuid.uuid()), - "userInfo": { - "name": "Anon" - }} - - response = requests.post(f"https://api.liveblocks.io/v2/rooms/{room}/authorize", - headers={"Authorization": f"Bearer {LIVEBLOCKS_SECRET}"}, json=payload) - if response.status_code == 200: - # user in, incremente room count - # cursor = db.cursor() - # cursor.execute( - # "UPDATE rooms SET users_count = users_count + 1 WHERE room_id = ?", (room,)) - # db.commit() - sync_rooms() - return response.json() - else: - raise Exception(response.status_code, response.text) - - -def slugify(value): - value = re.sub(r'[^\w\s-]', '', value).strip().lower() - out = re.sub(r'[-\s]+', '-', value) - return out[:400] - - -async def upload_file(image: Image.Image, prompt: str, room_id: str, image_key: str): - room_id = room_id.strip() or "uploads" - image_key = image_key.strip() or "" - image = image.convert('RGB') - # print("Uploading file from predict") - temp_file = io.BytesIO() - image.save(temp_file, format="WEBP") - temp_file.seek(0) - id = shortuuid.uuid() - date = int(time.time()) - prompt_slug = slugify(prompt) - filename = f"{date}-{id}-{image_key}-{prompt_slug}.webp" - timelapse_name = f"{id}.webp" - key_name = f"{room_id}/{filename}" - s3.upload_fileobj(Fileobj=temp_file, Bucket=AWS_S3_BUCKET_NAME, Key=key_name, ExtraArgs={ - "ContentType": "image/webp", "CacheControl": "max-age=31536000"}) - s3.copy_object(Bucket=AWS_S3_BUCKET_NAME, - CopySource=f"{AWS_S3_BUCKET_NAME}/{key_name}", Key=f"timelapse/{room_id}/{timelapse_name}") - - temp_file.close() - - out = {"url": f'https://d26smi9133w0oo.cloudfront.net/{room_id}/{filename}', - "filename": filename} - return out - - -@ app.post('/api/uploadfile') -async def create_upload_file(file: UploadFile): - contents = await file.read() - file_size = len(contents) - if not 0 < file_size < 100E+06: - raise HTTPException( - status_code=status.HTTP_400_BAD_REQUEST, - detail='Supported file size is less than 2 MB' - ) - file_type = magic.from_buffer(contents, mime=True) - if file_type.lower() not in FILE_TYPES: - raise HTTPException( - status_code=status.HTTP_400_BAD_REQUEST, - detail=f'Unsupported file type {file_type}. 
Supported types are {FILE_TYPES}' - ) - temp_file = io.BytesIO() - temp_file.write(contents) - temp_file.seek(0) - s3.upload_fileobj(Fileobj=temp_file, Bucket=AWS_S3_BUCKET_NAME, Key="community/" + - file.filename, ExtraArgs={"ContentType": file.content_type, "CacheControl": "max-age=31536000"}) - temp_file.close() - - return {"url": f'https://d26smi9133w0oo.cloudfront.net/community/{file.filename}', "filename": file.filename} - - -app.mount("/", StaticFiles(directory="../static", html=True), name="static") - -origins = ["*"] - -app.add_middleware( - CORSMiddleware, - allow_origins=origins, - allow_credentials=True, - allow_methods=["*"], - allow_headers=["*"], -) - - -if __name__ == "__main__": - uvicorn.run(app, host="0.0.0.0", port=7860, - log_level="debug", reload=False) diff --git a/spaces/huhlim/cg2all/app.py b/spaces/huhlim/cg2all/app.py deleted file mode 100644 index 6b12c5ba106fffac71e0879c36a4abb5a30d8266..0000000000000000000000000000000000000000 --- a/spaces/huhlim/cg2all/app.py +++ /dev/null @@ -1,174 +0,0 @@ -import gradio as gr -import cg2all -import os - - -def read_mol(molpath): - with open(molpath, "r") as fp: - lines = fp.readlines() - mol = "" - for l in lines: - mol += l - # - mol = mol.replace("OT1", "O ") - mol = mol.replace("OT2", "OXT") - return mol - - -def molecule(input_pdb): - mol = read_mol(input_pdb) - x = ( - """ - - - - - - - - - -
        - - - """ - ) - - return f"""""" - - -def runner(in_pdb, model_type): - out_fn = in_pdb.name[:-4] + "-all.pdb" - ckpt_fn = f"model/{model_type}.ckpt" - cg2all.convert_cg2all(in_pdb.name, out_fn, model_type=model_type, ckpt_fn=ckpt_fn) - view = molecule(out_fn) - return out_fn, view - - -with gr.Blocks() as app: - gr.Markdown( - "# cg2all: conversion of coarse-grained protein structure model to all-atom structure" - ) - with gr.Row(): - with gr.Column(): - input_pdb = gr.File( - file_count="single", - label="Input CG structure", - file_types=[".pdb", ".PDB", ".txt", ".TXT"], - ) - model_type = gr.Radio( - [ - "CalphaBasedModel", - "ResidueBasedModel", - "SidechainModel", - "CalphaCMModel", - "CalphaSCModel", - "BackboneModel", - "MainchainModel", - "Martini", - "Martini3", - "PRIMO", - ], - label="Input CG model type", - ) - # - button = gr.Button("Run") - # - gr.Examples( - [ - ["inputs/1ab1_A.calpha.pdb", "CalphaBasedModel"], - ["inputs/1ab1_A.residue.pdb", "ResidueBasedModel"], - ["inputs/1ab1_A.sc.pdb", "SidechainModel"], - ["inputs/1ab1_A.cacm.pdb", "CalphaCMModel"], - ["inputs/1ab1_A.casc.pdb", "CalphaSCModel"], - ["inputs/1ab1_A.bb.pdb", "BackboneModel"], - ["inputs/1ab1_A.mc.pdb", "MainchainModel"], - ["inputs/1ab1_A.martini.pdb", "Martini"], - ["inputs/1ab1_A.martini3.pdb", "Martini3"], - ["inputs/1ab1_A.primo.pdb", "PRIMO"], - ], - [input_pdb, model_type], - label="Monomeric coarse-grained structure", - ) - gr.Examples( - [ - ["inputs/Q9EP54.sample.pdb", "CalphaBasedModel"], - ], - [input_pdb, model_type], - label="ML(idpGAN)-generated IDP structure", - ) - gr.Examples( - [ - ["inputs/3iyg.pdb", "CalphaBasedModel"], - ], - [input_pdb, model_type], - label="Multimeric medium-resolution cryo-EM structure", - ) - gr.Examples( - [ - ["inputs/LAF1rgg.sample.pdb", "CalphaBasedModel"], - ], - [input_pdb, model_type], - label="Snapshot of COCOMO simulation of LLPS", - ) - - with gr.Column(): - output_pdb = gr.File(file_count="single", label="Output structure") - viewer = gr.HTML() - - button.click(fn=runner, inputs=[input_pdb, model_type], outputs=[output_pdb, viewer]) - # - gr.Markdown("---") - gr.Markdown( - "### GitHub repository: [https://github.com/huhlim/cg2all](https://github.com/huhlim/cg2all)" - ) - gr.Markdown("### Local installation: `pip install git+http://github.com/huhlim/cg2all`") - gr.Markdown("### Supported coarse-grained models") - gr.Markdown("- CalphaBasedModel: CA-trace") - gr.Markdown("- ResidueBasedModel: Residue center-of-mass") - gr.Markdown("- SidechainModel: Sidechain center-of-mass") - gr.Markdown("- CalphaCMModel: CA-trace + Residue center-of-mass") - gr.Markdown("- CalphaSCModel: CA-trace + Sidechain center-of-mass") - gr.Markdown("- BackboneModel: Backbone N, CA, and C atoms") - gr.Markdown("- MainchainModel: Backbone N, CA, C, and O atoms") - gr.Markdown("- Martini: [Martini model](http://cgmartini.nl)") - gr.Markdown("- Martini3: [Martini3 model](http://www.cgmartini.nl/index.php/martini-3-0)") - gr.Markdown("- PRIMO: [PRIMO model](https://dx.doi.org/10.1002/prot.22645)") - - gr.Markdown("### Cite: TODO") - -app.launch() diff --git a/spaces/hysts/Shap-E/settings.py b/spaces/hysts/Shap-E/settings.py deleted file mode 100644 index 256832c72502270fabde0214695d945f8767dec5..0000000000000000000000000000000000000000 --- a/spaces/hysts/Shap-E/settings.py +++ /dev/null @@ -1,7 +0,0 @@ -import os - -import numpy as np - -CACHE_EXAMPLES = os.getenv("CACHE_EXAMPLES") == "1" - -MAX_SEED = np.iinfo(np.int32).max diff --git 
a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/util/nvdiffrast.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/util/nvdiffrast.py deleted file mode 100644 index 1db5799ef4e979b8a91281f527ae040c5c35e299..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/util/nvdiffrast.py +++ /dev/null @@ -1,91 +0,0 @@ -"""This script is the differentiable renderer for Deep3DFaceRecon_pytorch - Attention, antialiasing step is missing in current version. -""" -from typing import List - -import kornia -import numpy as np -import torch -import torch.nn.functional as F -from kornia.geometry.camera import pixel2cam -from scipy.io import loadmat -from torch import nn - -import nvdiffrast.torch as dr - - -def ndc_projection(x=0.1, n=1.0, f=50.0): - return np.array( - [[n / x, 0, 0, 0], [0, n / -x, 0, 0], [0, 0, -(f + n) / (f - n), -(2 * f * n) / (f - n)], [0, 0, -1, 0]] - ).astype(np.float32) - - -class MeshRenderer(nn.Module): - def __init__(self, rasterize_fov, znear=0.1, zfar=10, rasterize_size=224, use_opengl=True): - super(MeshRenderer, self).__init__() - - x = np.tan(np.deg2rad(rasterize_fov * 0.5)) * znear - self.ndc_proj = torch.tensor(ndc_projection(x=x, n=znear, f=zfar)).matmul( - torch.diag(torch.tensor([1.0, -1, -1, 1])) - ) - self.rasterize_size = rasterize_size - self.use_opengl = use_opengl - self.ctx = None - - def forward(self, vertex, tri, feat=None): - """ - Return: - mask -- torch.tensor, size (B, 1, H, W) - depth -- torch.tensor, size (B, 1, H, W) - features(optional) -- torch.tensor, size (B, C, H, W) if feat is not None - - Parameters: - vertex -- torch.tensor, size (B, N, 3) - tri -- torch.tensor, size (B, M, 3) or (M, 3), triangles - feat(optional) -- torch.tensor, size (B, C), features - """ - device = vertex.device - rsize = int(self.rasterize_size) - ndc_proj = self.ndc_proj.to(device) - # trans to homogeneous coordinates of 3d vertices, the direction of y is the same as v - if vertex.shape[-1] == 3: - vertex = torch.cat([vertex, torch.ones([*vertex.shape[:2], 1]).to(device)], dim=-1) - vertex[..., 1] = -vertex[..., 1] - - vertex_ndc = vertex @ ndc_proj.t() - if self.ctx is None: - if self.use_opengl: - self.ctx = dr.RasterizeGLContext(device=device) - ctx_str = "opengl" - else: - self.ctx = dr.RasterizeCudaContext(device=device) - ctx_str = "cuda" - print("create %s ctx on device cuda:%d" % (ctx_str, device.index)) - - ranges = None - if isinstance(tri, List) or len(tri.shape) == 3: - vum = vertex_ndc.shape[1] - fnum = torch.tensor([f.shape[0] for f in tri]).unsqueeze(1).to(device) - fstartidx = torch.cumsum(fnum, dim=0) - fnum - ranges = torch.cat([fstartidx, fnum], axis=1).type(torch.int32).cpu() - for i in range(tri.shape[0]): - tri[i] = tri[i] + i * vum - vertex_ndc = torch.cat(vertex_ndc, dim=0) - tri = torch.cat(tri, dim=0) - - # for range_mode vetex: [B*N, 4], tri: [B*M, 3], for instance_mode vetex: [B, N, 4], tri: [M, 3] - tri = tri.type(torch.int32).contiguous() - rast_out, _ = dr.rasterize(self.ctx, vertex_ndc.contiguous(), tri, resolution=[rsize, rsize], ranges=ranges) - - depth, _ = dr.interpolate(vertex.reshape([-1, 4])[..., 2].unsqueeze(1).contiguous(), rast_out, tri) - depth = depth.permute(0, 3, 1, 2) - mask = (rast_out[..., 3] > 0).float().unsqueeze(1) - depth = mask * depth - - image = None - if feat is not None: - image, _ = dr.interpolate(feat, rast_out, tri) - image = image.permute(0, 3, 1, 2) - image = mask * image - - return mask, depth, image diff --git 
a/spaces/hzy123/bingo/src/components/chat-message.tsx b/spaces/hzy123/bingo/src/components/chat-message.tsx deleted file mode 100644 index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000 --- a/spaces/hzy123/bingo/src/components/chat-message.tsx +++ /dev/null @@ -1,93 +0,0 @@ -import remarkGfm from 'remark-gfm' -import remarkMath from 'remark-math' -import supersub from 'remark-supersub' -import remarkBreaks from 'remark-breaks' -import { cn } from '@/lib/utils' -import { CodeBlock } from '@/components/ui/codeblock' -import { MemoizedReactMarkdown } from '@/components/markdown' -import { LearnMore } from './learn-more' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { useEffect } from 'react' -import { TurnCounter } from './turn-counter' - -export interface ChatMessageProps { - message: ChatMessageModel -} - -export function ChatMessage({ message, ...props }: ChatMessageProps) { - useEffect(() => { - if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) { - window.scrollBy(0, 200) - } - }, [message.text]) - - return message.text ? ( -
-    <div {...props}>
-      <MemoizedReactMarkdown
-        remarkPlugins={[remarkGfm, remarkMath, supersub, remarkBreaks]}
-        components={{
-          img(obj) {
-            try {
-              if (obj.src) {
-                return <img src={obj.src} alt={obj.alt} />
-              }
-            } catch (e) {
-            }
-            return <img src={obj.src} alt={obj.alt} />
-          },
-          p({ children }) {
-            return <p>{children}</p>
-          },
-          code({ node, inline, className, children, ...props }) {
-            if (children.length) {
-              if (children[0] == '▍') {
-                return (
-                  <span className="mt-1 animate-pulse">▍</span>
-                )
-              }
-
-              children[0] = (children[0] as string).replace('`▍`', '▍')
-            }
-
-            const match = /language-(\w+)/.exec(className || '')
-
-            if (inline) {
-              return (
-                <code className={className} {...props}>
-                  {children}
-                </code>
-              )
-            }
-
-            return (
-              <CodeBlock
-                language={(match && match[1]) || ''}
-                value={String(children).replace(/\n$/, '')}
-                {...props}
-              />
-            )
-          }
-        }}
-      >
-        {message.text}
-      </MemoizedReactMarkdown>
-      <div>
-        {message.author === 'bot' && <LearnMore sourceAttributions={message.sourceAttributions} />}
-        {message.author === 'bot' && <TurnCounter throttling={message.throttling} />}
-      </div>
-    </div>
        - ) : null -} diff --git a/spaces/iamironman4279/SadTalker/src/face3d/models/facerecon_model.py b/spaces/iamironman4279/SadTalker/src/face3d/models/facerecon_model.py deleted file mode 100644 index 7de8ca6eebc50ff1ed52c5ba37d31b43f977b5e1..0000000000000000000000000000000000000000 --- a/spaces/iamironman4279/SadTalker/src/face3d/models/facerecon_model.py +++ /dev/null @@ -1,220 +0,0 @@ -"""This script defines the face reconstruction model for Deep3DFaceRecon_pytorch -""" - -import numpy as np -import torch -from src.face3d.models.base_model import BaseModel -from src.face3d.models import networks -from src.face3d.models.bfm import ParametricFaceModel -from src.face3d.models.losses import perceptual_loss, photo_loss, reg_loss, reflectance_loss, landmark_loss -from src.face3d.util import util -from src.face3d.util.nvdiffrast import MeshRenderer -# from src.face3d.util.preprocess import estimate_norm_torch - -import trimesh -from scipy.io import savemat - -class FaceReconModel(BaseModel): - - @staticmethod - def modify_commandline_options(parser, is_train=False): - """ Configures options specific for CUT model - """ - # net structure and parameters - parser.add_argument('--net_recon', type=str, default='resnet50', choices=['resnet18', 'resnet34', 'resnet50'], help='network structure') - parser.add_argument('--init_path', type=str, default='./checkpoints/init_model/resnet50-0676ba61.pth') - parser.add_argument('--use_last_fc', type=util.str2bool, nargs='?', const=True, default=False, help='zero initialize the last fc') - parser.add_argument('--bfm_folder', type=str, default='./checkpoints/BFM_Fitting/') - parser.add_argument('--bfm_model', type=str, default='BFM_model_front.mat', help='bfm model') - - # renderer parameters - parser.add_argument('--focal', type=float, default=1015.) - parser.add_argument('--center', type=float, default=112.) - parser.add_argument('--camera_d', type=float, default=10.) - parser.add_argument('--z_near', type=float, default=5.) - parser.add_argument('--z_far', type=float, default=15.) 
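        # NOTE (added): focal/center above define pinhole camera intrinsics; __init__
        # below derives the rasterizer's field of view as
        # fov = 2 * arctan(center / focal) * 180 / pi, which is about 12.6 degrees
        # for the defaults (center=112, focal=1015).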
- - if is_train: - # training parameters - parser.add_argument('--net_recog', type=str, default='r50', choices=['r18', 'r43', 'r50'], help='face recog network structure') - parser.add_argument('--net_recog_path', type=str, default='checkpoints/recog_model/ms1mv3_arcface_r50_fp16/backbone.pth') - parser.add_argument('--use_crop_face', type=util.str2bool, nargs='?', const=True, default=False, help='use crop mask for photo loss') - parser.add_argument('--use_predef_M', type=util.str2bool, nargs='?', const=True, default=False, help='use predefined M for predicted face') - - - # augmentation parameters - parser.add_argument('--shift_pixs', type=float, default=10., help='shift pixels') - parser.add_argument('--scale_delta', type=float, default=0.1, help='delta scale factor') - parser.add_argument('--rot_angle', type=float, default=10., help='rot angles, degree') - - # loss weights - parser.add_argument('--w_feat', type=float, default=0.2, help='weight for feat loss') - parser.add_argument('--w_color', type=float, default=1.92, help='weight for loss loss') - parser.add_argument('--w_reg', type=float, default=3.0e-4, help='weight for reg loss') - parser.add_argument('--w_id', type=float, default=1.0, help='weight for id_reg loss') - parser.add_argument('--w_exp', type=float, default=0.8, help='weight for exp_reg loss') - parser.add_argument('--w_tex', type=float, default=1.7e-2, help='weight for tex_reg loss') - parser.add_argument('--w_gamma', type=float, default=10.0, help='weight for gamma loss') - parser.add_argument('--w_lm', type=float, default=1.6e-3, help='weight for lm loss') - parser.add_argument('--w_reflc', type=float, default=5.0, help='weight for reflc loss') - - opt, _ = parser.parse_known_args() - parser.set_defaults( - focal=1015., center=112., camera_d=10., use_last_fc=False, z_near=5., z_far=15. - ) - if is_train: - parser.set_defaults( - use_crop_face=True, use_predef_M=False - ) - return parser - - def __init__(self, opt): - """Initialize this model class. - - Parameters: - opt -- training/test options - - A few things can be done here. 
- - (required) call the initialization function of BaseModel - - define loss function, visualization images, model names, and optimizers - """ - BaseModel.__init__(self, opt) # call the initialization method of BaseModel - - self.visual_names = ['output_vis'] - self.model_names = ['net_recon'] - self.parallel_names = self.model_names + ['renderer'] - - self.facemodel = ParametricFaceModel( - bfm_folder=opt.bfm_folder, camera_distance=opt.camera_d, focal=opt.focal, center=opt.center, - is_train=self.isTrain, default_name=opt.bfm_model - ) - - fov = 2 * np.arctan(opt.center / opt.focal) * 180 / np.pi - self.renderer = MeshRenderer( - rasterize_fov=fov, znear=opt.z_near, zfar=opt.z_far, rasterize_size=int(2 * opt.center) - ) - - if self.isTrain: - self.loss_names = ['all', 'feat', 'color', 'lm', 'reg', 'gamma', 'reflc'] - - self.net_recog = networks.define_net_recog( - net_recog=opt.net_recog, pretrained_path=opt.net_recog_path - ) - # loss func name: (compute_%s_loss) % loss_name - self.compute_feat_loss = perceptual_loss - self.comupte_color_loss = photo_loss - self.compute_lm_loss = landmark_loss - self.compute_reg_loss = reg_loss - self.compute_reflc_loss = reflectance_loss - - self.optimizer = torch.optim.Adam(self.net_recon.parameters(), lr=opt.lr) - self.optimizers = [self.optimizer] - self.parallel_names += ['net_recog'] - # Our program will automatically call to define schedulers, load networks, and print networks - - def set_input(self, input): - """Unpack input data from the dataloader and perform necessary pre-processing steps. - - Parameters: - input: a dictionary that contains the data itself and its metadata information. - """ - self.input_img = input['imgs'].to(self.device) - self.atten_mask = input['msks'].to(self.device) if 'msks' in input else None - self.gt_lm = input['lms'].to(self.device) if 'lms' in input else None - self.trans_m = input['M'].to(self.device) if 'M' in input else None - self.image_paths = input['im_paths'] if 'im_paths' in input else None - - def forward(self, output_coeff, device): - self.facemodel.to(device) - self.pred_vertex, self.pred_tex, self.pred_color, self.pred_lm = \ - self.facemodel.compute_for_render(output_coeff) - self.pred_mask, _, self.pred_face = self.renderer( - self.pred_vertex, self.facemodel.face_buf, feat=self.pred_color) - - self.pred_coeffs_dict = self.facemodel.split_coeff(output_coeff) - - - def compute_losses(self): - """Calculate losses, gradients, and update network weights; called in every training iteration""" - - assert self.net_recog.training == False - trans_m = self.trans_m - if not self.opt.use_predef_M: - trans_m = estimate_norm_torch(self.pred_lm, self.input_img.shape[-2]) - - pred_feat = self.net_recog(self.pred_face, trans_m) - gt_feat = self.net_recog(self.input_img, self.trans_m) - self.loss_feat = self.opt.w_feat * self.compute_feat_loss(pred_feat, gt_feat) - - face_mask = self.pred_mask - if self.opt.use_crop_face: - face_mask, _, _ = self.renderer(self.pred_vertex, self.facemodel.front_face_buf) - - face_mask = face_mask.detach() - self.loss_color = self.opt.w_color * self.comupte_color_loss( - self.pred_face, self.input_img, self.atten_mask * face_mask) - - loss_reg, loss_gamma = self.compute_reg_loss(self.pred_coeffs_dict, self.opt) - self.loss_reg = self.opt.w_reg * loss_reg - self.loss_gamma = self.opt.w_gamma * loss_gamma - - self.loss_lm = self.opt.w_lm * self.compute_lm_loss(self.pred_lm, self.gt_lm) - - self.loss_reflc = self.opt.w_reflc * self.compute_reflc_loss(self.pred_tex, 
self.facemodel.skin_mask) - - self.loss_all = self.loss_feat + self.loss_color + self.loss_reg + self.loss_gamma \ - + self.loss_lm + self.loss_reflc - - - def optimize_parameters(self, isTrain=True): - self.forward() - self.compute_losses() - """Update network weights; it will be called in every training iteration.""" - if isTrain: - self.optimizer.zero_grad() - self.loss_all.backward() - self.optimizer.step() - - def compute_visuals(self): - with torch.no_grad(): - input_img_numpy = 255. * self.input_img.detach().cpu().permute(0, 2, 3, 1).numpy() - output_vis = self.pred_face * self.pred_mask + (1 - self.pred_mask) * self.input_img - output_vis_numpy_raw = 255. * output_vis.detach().cpu().permute(0, 2, 3, 1).numpy() - - if self.gt_lm is not None: - gt_lm_numpy = self.gt_lm.cpu().numpy() - pred_lm_numpy = self.pred_lm.detach().cpu().numpy() - output_vis_numpy = util.draw_landmarks(output_vis_numpy_raw, gt_lm_numpy, 'b') - output_vis_numpy = util.draw_landmarks(output_vis_numpy, pred_lm_numpy, 'r') - - output_vis_numpy = np.concatenate((input_img_numpy, - output_vis_numpy_raw, output_vis_numpy), axis=-2) - else: - output_vis_numpy = np.concatenate((input_img_numpy, - output_vis_numpy_raw), axis=-2) - - self.output_vis = torch.tensor( - output_vis_numpy / 255., dtype=torch.float32 - ).permute(0, 3, 1, 2).to(self.device) - - def save_mesh(self, name): - - recon_shape = self.pred_vertex # get reconstructed shape - recon_shape[..., -1] = 10 - recon_shape[..., -1] # from camera space to world space - recon_shape = recon_shape.cpu().numpy()[0] - recon_color = self.pred_color - recon_color = recon_color.cpu().numpy()[0] - tri = self.facemodel.face_buf.cpu().numpy() - mesh = trimesh.Trimesh(vertices=recon_shape, faces=tri, vertex_colors=np.clip(255. * recon_color, 0, 255).astype(np.uint8)) - mesh.export(name) - - def save_coeff(self,name): - - pred_coeffs = {key:self.pred_coeffs_dict[key].cpu().numpy() for key in self.pred_coeffs_dict} - pred_lm = self.pred_lm.cpu().numpy() - pred_lm = np.stack([pred_lm[:,:,0],self.input_img.shape[2]-1-pred_lm[:,:,1]],axis=2) # transfer to image coordinate - pred_coeffs['lm68'] = pred_lm - savemat(name,pred_coeffs) - - - diff --git a/spaces/ifey/chatdemo/gradiodemo/Demo/cookie/t.py b/spaces/ifey/chatdemo/gradiodemo/Demo/cookie/t.py deleted file mode 100644 index 6f28d4649702c328f877179e653490014d4bb764..0000000000000000000000000000000000000000 --- a/spaces/ifey/chatdemo/gradiodemo/Demo/cookie/t.py +++ /dev/null @@ -1,13 +0,0 @@ -import gradio as gr -from flask import request - -# 创建一个函数,该函数设置 cookie 并返回消息 -def set_cookie(): - request.cookies.set("user_cookie", "HelloCookie") - return "Cookie has been set!" 
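# NOTE (added): the callback above is broken as written. Flask's `request.cookies`
# is an immutable mapping of *incoming* request cookies with no `set()` method, so
# the call raises at request time; cookies have to be set on a *response* object.
# A minimal sketch of the usual Flask pattern (illustrative, not the author's code):
#   from flask import make_response
#   resp = make_response("Cookie has been set!")
#   resp.set_cookie("user_cookie", "HelloCookie")
#   return resp
# A plain gradio Interface callback also has no Flask request context to read from.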
- -# 创建 Gradio 接口,指定输入为 None(因为这里不需要输入) -iface = gr.Interface(fn=set_cookie, inputs=None, outputs="text") - -# 启动 Gradio 应用 -iface.launch() diff --git a/spaces/innnky/soft-vits-vc/data_utils.py b/spaces/innnky/soft-vits-vc/data_utils.py deleted file mode 100644 index 721c651225d8032f9036338e4279aa65603b0972..0000000000000000000000000000000000000000 --- a/spaces/innnky/soft-vits-vc/data_utils.py +++ /dev/null @@ -1,391 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data -import numpy as np -import commons -from mel_processing import spectrogram_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import text_to_sequence, cleaned_text_to_sequence - - -class TextAudioLoader(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_and_text) - self._filter() - - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - audiopath, text = audiopath_and_text[0], audiopath_and_text[1] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - return (text, spec, wav) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - # if self.cleaned_text: - # text_norm = text - # else: - # text_norm = text_to_sequence(text, self.text_cleaners) - # if self.add_blank: - # text_norm = commons.intersperse(text_norm, 0) - # text_norm = torch.LongTensor(text_norm) - soft = np.load(text) - - text_norm = torch.FloatTensor(soft) - return text_norm - - def 
__getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - - text_padded = torch.FloatTensor(len(batch), max_text_len, 256) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0),:] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths - - -"""Multi speaker version""" -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - for audiopath, sid, text in self.audiopaths_sid_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_sid_text_new.append([audiopath, sid, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - sid = self.get_sid(sid) - return (text, spec, wav, sid) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - soft = np.load(text) - - text_norm = torch.FloatTensor(soft) - return text_norm - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - 
sid = torch.LongTensor(len(batch)) - - text_padded = torch.FloatTensor(len(batch), max_text_len, 256) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i+1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j*self.batch_size:(j+1)*self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), 
generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid+1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/innnky/vits-nyaru/vits.html b/spaces/innnky/vits-nyaru/vits.html deleted file mode 100644 index 68e3f8cabb27b7dbfc4173c788fe2c7f808a3a73..0000000000000000000000000000000000000000 --- a/spaces/innnky/vits-nyaru/vits.html +++ /dev/null @@ -1,12 +0,0 @@ - - - -Shortcut -

-[HTML shortcut stub titled "Shortcut" containing links labeled "vits"; the markup did not survive extraction]

        diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Harry Potter Goblet Of Fire Pc Game Free Download Full Version [HOT].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Harry Potter Goblet Of Fire Pc Game Free Download Full Version [HOT].md deleted file mode 100644 index 54076972bf8fcb0117b0a903df17d39b7cf327fc..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Harry Potter Goblet Of Fire Pc Game Free Download Full Version [HOT].md +++ /dev/null @@ -1,6 +0,0 @@ -

-harry potter goblet of fire pc game free download full version
-
-Download Zip ::: https://urlin.us/2uEwoS
-
-Download Harry Potter and The Goblet of Fire game full and safe version at ... be free to move around in the play area and castles as in previous version they ... 4d29de3e1b
-

        diff --git a/spaces/inreVtussa/clothingai/Examples/Babylon Pro 9.0.1.5 [Portable] 64 Bit.md b/spaces/inreVtussa/clothingai/Examples/Babylon Pro 9.0.1.5 [Portable] 64 Bit.md deleted file mode 100644 index c715e70d44d43067a20a131ef47618645ebb1753..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Babylon Pro 9.0.1.5 [Portable] 64 Bit.md +++ /dev/null @@ -1,27 +0,0 @@ -
        -```html -

        Babylon Pro 9.0.1.5 [Portable] 64 bit: A Powerful Translation Tool for Your PC

        -

        If you are looking for a fast and easy way to translate texts, documents, web pages and more, you might want to check out Babylon Pro 9.0.1.5 [Portable] 64 bit. This is a portable version of the popular Babylon software, which means you can run it from a USB drive or any other removable media without installing it on your computer.

        -

        Babylon Pro 9.0.1.5 [Portable] 64 bit


        Download Ziphttps://tiurll.com/2uCjWr



        -

        Babylon Pro 9.0.1.5 [Portable] 64 bit offers you a comprehensive set of features to help you with your translation needs. You can access over 75 dictionaries and glossaries in more than 30 languages, including English, Spanish, French, German, Chinese, Japanese and Arabic. You can also use the text-to-speech function to hear how words and phrases are pronounced in different languages.

        -

        One of the most convenient features of Babylon Pro 9.0.1.5 [Portable] 64 bit is the one-click translation feature. You can simply select any word or text on your screen and click the Babylon button to get an instant translation in a pop-up window. You can also customize the settings to choose your preferred language pair, dictionary and style.

        -

        Babylon Pro 9.0.1.5 [Portable] 64 bit also supports document translation, which allows you to translate entire files in various formats, such as PDF, DOC, TXT and HTML. You can either upload your file to the Babylon online server or use the offline mode to translate it locally on your PC.

        -

        Babylon Pro 9.0.1.5 [Portable] 64 bit is compatible with Windows XP, Vista, 7 and 8 (32-bit and 64-bit). It requires a minimum of 256 MB of RAM and 100 MB of free disk space.

        -

        If you want to try Babylon Pro 9.0.1.5 [Portable] 64 bit for yourself, you can download it from the link below:

        -Download Babylon Pro 9.0.1.5 [Portable] 64 bit -``` - -```html -

        Babylon Pro 9.0.1.5 [Portable] 64 bit is not only a translation tool, but also a learning tool. You can use it to improve your vocabulary and grammar skills in different languages. You can also access a range of online resources, such as Wikipedia, Britannica and Oxford dictionaries, to get more information and context on the words and topics you are translating.

        -

        -

        Babylon Pro 9.0.1.5 [Portable] 64 bit is also a user-friendly and customizable software. You can change the appearance and behavior of the Babylon interface according to your preferences. You can also add your own dictionaries and glossaries to the Babylon database, or download additional ones from the Babylon website.

        -

        Babylon Pro 9.0.1.5 [Portable] 64 bit is a reliable and accurate translation software that can help you with your personal and professional needs. Whether you need to translate a simple email, a complex document, a website or a chat conversation, Babylon Pro 9.0.1.5 [Portable] 64 bit can handle it with ease and speed.

        -``` - -```html -

        If you are wondering how Babylon Pro 9.0.1.5 [Portable] 64 bit works, here is a brief explanation. It uses natural language processing techniques to analyze the text you select, and a large database of linguistic data and terminology to ensure the accuracy and quality of the translation.
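To make the dictionary-lookup idea above concrete, here is a toy sketch — not Babylon's actual engine, and the glossary entries below are invented. A glossary-backed translator reduces to tokenization plus a lookup table, with unknown tokens passed through unchanged:

```python
# Toy glossary-based translation, purely to illustrate the dictionary-lookup
# idea described above. The entries are invented; Babylon's real linguistic
# database and algorithms are proprietary.
import re

GLOSSARY = {  # hypothetical English -> Spanish entries
    "hello": "hola",
    "world": "mundo",
}

def translate(text: str, glossary: dict = GLOSSARY) -> str:
    # Split into word and non-word runs so punctuation and spacing survive.
    tokens = re.findall(r"\w+|\W+", text)
    # Look each word up; tokens without an entry pass through unchanged.
    return "".join(glossary.get(token.lower(), token) for token in tokens)

print(translate("Hello, world"))  # -> "hola, mundo"
```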

        -

        Babylon Pro 9.0.1.5 [Portable] 64 bit is a trusted and widely used translation program that has been around since 1997. It has won several awards for its excellence and innovation and is used by millions of individuals, businesses, organizations and governments around the world.

        -

        Babylon Pro 9.0.1.5 [Portable] 64 bit is a must-have software for anyone who needs to communicate and understand different languages. It is a powerful, convenient and affordable solution that can make your life easier and more productive.

        -```

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Crysis 2 1.9 Crack Fix Skidrow Download Free [CRACKED].md b/spaces/inreVtussa/clothingai/Examples/Crysis 2 1.9 Crack Fix Skidrow Download Free [CRACKED].md deleted file mode 100644 index d0dbeb7e36b4f209a7573a3cc0fdf3159663aff2..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Crysis 2 1.9 Crack Fix Skidrow Download Free [CRACKED].md +++ /dev/null @@ -1,6 +0,0 @@ -

        Crysis 2 1.9 Crack Fix Skidrow Download Free


        Download File ○○○ https://tiurll.com/2uCjCg



        - -When it comes to learning crysis 2 resume game fix how to write better, ... Download for free. file type Game mod. file size 1640.5 MB. last update Monday, ... HomeFixesPCCrysis 2Crysis 2 v1.9 All No-DVD [SKiDROW]. Crysis.2. ... Sep 11, 2011 · Crysis 2 Crack v.1.9 fix crash game *REUPLOAD* BY BOYAN ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/inreVtussa/clothingai/Examples/Disg Modell Fragebogen Pdf FREE Download.md b/spaces/inreVtussa/clothingai/Examples/Disg Modell Fragebogen Pdf FREE Download.md deleted file mode 100644 index ca1d1f7d0442d7c2697fc2e1f5cf19bfd169647d..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Disg Modell Fragebogen Pdf FREE Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

        disg modell fragebogen pdf download


        Download ☆☆☆☆☆ https://tiurll.com/2uCjAY



        -
        -... in tamil free download p nuvve nuvve background music free download hit Soar Into the Sun BluRay p AC3 xCHD 1 disg modell fragebogen pdf download sivi ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/jackli888/stable-diffusion-webui/modules/modelloader.py b/spaces/jackli888/stable-diffusion-webui/modules/modelloader.py deleted file mode 100644 index fc3f6249f1ccb53c279f3e86d3ea95a4a7d03e50..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/modelloader.py +++ /dev/null @@ -1,172 +0,0 @@ -import glob -import os -import shutil -import importlib -from urllib.parse import urlparse - -from basicsr.utils.download_util import load_file_from_url -from modules import shared -from modules.upscaler import Upscaler -from modules.paths import script_path, models_path - - -def load_models(model_path: str, model_url: str = None, command_path: str = None, ext_filter=None, download_name=None, ext_blacklist=None) -> list: - """ - A one-and done loader to try finding the desired models in specified directories. - - @param download_name: Specify to download from model_url immediately. - @param model_url: If no other models are found, this will be downloaded on upscale. - @param model_path: The location to store/find models in. - @param command_path: A command-line argument to search for models in first. - @param ext_filter: An optional list of filename extensions to filter by - @return: A list of paths containing the desired model(s) - """ - output = [] - - if ext_filter is None: - ext_filter = [] - - try: - places = [] - - if command_path is not None and command_path != model_path: - pretrained_path = os.path.join(command_path, 'experiments/pretrained_models') - if os.path.exists(pretrained_path): - print(f"Appending path: {pretrained_path}") - places.append(pretrained_path) - elif os.path.exists(command_path): - places.append(command_path) - - places.append(model_path) - - for place in places: - if os.path.exists(place): - for file in glob.iglob(place + '**/**', recursive=True): - full_path = file - if os.path.isdir(full_path): - continue - if os.path.islink(full_path) and not os.path.exists(full_path): - print(f"Skipping broken symlink: {full_path}") - continue - if ext_blacklist is not None and any([full_path.endswith(x) for x in ext_blacklist]): - continue - if len(ext_filter) != 0: - model_name, extension = os.path.splitext(file) - if extension not in ext_filter: - continue - if file not in output: - output.append(full_path) - - if model_url is not None and len(output) == 0: - if download_name is not None: - dl = load_file_from_url(model_url, model_path, True, download_name) - output.append(dl) - else: - output.append(model_url) - - except Exception: - pass - - return output - - -def friendly_name(file: str): - if "http" in file: - file = urlparse(file).path - - file = os.path.basename(file) - model_name, extension = os.path.splitext(file) - return model_name - - -def cleanup_models(): - # This code could probably be more efficient if we used a tuple list or something to store the src/destinations - # and then enumerate that, but this works for now. In the future, it'd be nice to just have every "model" scaler - # somehow auto-register and just do these things... 
- root_path = script_path - src_path = models_path - dest_path = os.path.join(models_path, "Stable-diffusion") - move_files(src_path, dest_path, ".ckpt") - move_files(src_path, dest_path, ".safetensors") - src_path = os.path.join(root_path, "ESRGAN") - dest_path = os.path.join(models_path, "ESRGAN") - move_files(src_path, dest_path) - src_path = os.path.join(models_path, "BSRGAN") - dest_path = os.path.join(models_path, "ESRGAN") - move_files(src_path, dest_path, ".pth") - src_path = os.path.join(root_path, "gfpgan") - dest_path = os.path.join(models_path, "GFPGAN") - move_files(src_path, dest_path) - src_path = os.path.join(root_path, "SwinIR") - dest_path = os.path.join(models_path, "SwinIR") - move_files(src_path, dest_path) - src_path = os.path.join(root_path, "repositories/latent-diffusion/experiments/pretrained_models/") - dest_path = os.path.join(models_path, "LDSR") - move_files(src_path, dest_path) - - -def move_files(src_path: str, dest_path: str, ext_filter: str = None): - try: - if not os.path.exists(dest_path): - os.makedirs(dest_path) - if os.path.exists(src_path): - for file in os.listdir(src_path): - fullpath = os.path.join(src_path, file) - if os.path.isfile(fullpath): - if ext_filter is not None: - if ext_filter not in file: - continue - print(f"Moving {file} from {src_path} to {dest_path}.") - try: - shutil.move(fullpath, dest_path) - except: - pass - if len(os.listdir(src_path)) == 0: - print(f"Removing empty folder: {src_path}") - shutil.rmtree(src_path, True) - except: - pass - - -builtin_upscaler_classes = [] -forbidden_upscaler_classes = set() - - -def list_builtin_upscalers(): - load_upscalers() - - builtin_upscaler_classes.clear() - builtin_upscaler_classes.extend(Upscaler.__subclasses__()) - - -def forbid_loaded_nonbuiltin_upscalers(): - for cls in Upscaler.__subclasses__(): - if cls not in builtin_upscaler_classes: - forbidden_upscaler_classes.add(cls) - - -def load_upscalers(): - # We can only do this 'magic' method to dynamically load upscalers if they are referenced, - # so we'll try to import any _model.py files before looking in __subclasses__ - modules_dir = os.path.join(shared.script_path, "modules") - for file in os.listdir(modules_dir): - if "_model.py" in file: - model_name = file.replace("_model.py", "") - full_model = f"modules.{model_name}_model" - try: - importlib.import_module(full_model) - except: - pass - - datas = [] - commandline_options = vars(shared.cmd_opts) - for cls in Upscaler.__subclasses__(): - if cls in forbidden_upscaler_classes: - continue - - name = cls.__name__ - cmd_name = f"{name.lower().replace('upscaler', '')}_models_path" - scaler = cls(commandline_options.get(cmd_name, None)) - datas += scaler.scalers - - shared.sd_upscalers = datas diff --git a/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/dropdown-menu.tsx b/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/dropdown-menu.tsx deleted file mode 100644 index 5803489a1d197a9db5018e413e63abe84b2efb8e..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/dropdown-menu.tsx +++ /dev/null @@ -1,200 +0,0 @@ -"use client" - -import * as React from "react" -import * as DropdownMenuPrimitive from "@radix-ui/react-dropdown-menu" -import { Check, ChevronRight, Circle } from "lucide-react" - -import { cn } from "@/lib/utils" - -const DropdownMenu = DropdownMenuPrimitive.Root - -const DropdownMenuTrigger = DropdownMenuPrimitive.Trigger - -const DropdownMenuGroup = DropdownMenuPrimitive.Group - -const DropdownMenuPortal = 
DropdownMenuPrimitive.Portal - -const DropdownMenuSub = DropdownMenuPrimitive.Sub - -const DropdownMenuRadioGroup = DropdownMenuPrimitive.RadioGroup - -const DropdownMenuSubTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, children, ...props }, ref) => ( - - {children} - - -)) -DropdownMenuSubTrigger.displayName = - DropdownMenuPrimitive.SubTrigger.displayName - -const DropdownMenuSubContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSubContent.displayName = - DropdownMenuPrimitive.SubContent.displayName - -const DropdownMenuContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - - - -)) -DropdownMenuContent.displayName = DropdownMenuPrimitive.Content.displayName - -const DropdownMenuItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuItem.displayName = DropdownMenuPrimitive.Item.displayName - -const DropdownMenuCheckboxItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, checked, ...props }, ref) => ( - - - - - - - {children} - -)) -DropdownMenuCheckboxItem.displayName = - DropdownMenuPrimitive.CheckboxItem.displayName - -const DropdownMenuRadioItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -DropdownMenuRadioItem.displayName = DropdownMenuPrimitive.RadioItem.displayName - -const DropdownMenuLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuLabel.displayName = DropdownMenuPrimitive.Label.displayName - -const DropdownMenuSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSeparator.displayName = DropdownMenuPrimitive.Separator.displayName - -const DropdownMenuShortcut = ({ - className, - ...props -}: React.HTMLAttributes) => { - return ( - - ) -} -DropdownMenuShortcut.displayName = "DropdownMenuShortcut" - -export { - DropdownMenu, - DropdownMenuTrigger, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuCheckboxItem, - DropdownMenuRadioItem, - DropdownMenuLabel, - DropdownMenuSeparator, - DropdownMenuShortcut, - DropdownMenuGroup, - DropdownMenuPortal, - DropdownMenuSub, - DropdownMenuSubContent, - DropdownMenuSubTrigger, - DropdownMenuRadioGroup, -} diff --git a/spaces/jbilcke-hf/VideoQuest/src/app/interface/renderer/spherical-image.tsx b/spaces/jbilcke-hf/VideoQuest/src/app/interface/renderer/spherical-image.tsx deleted file mode 100644 index 4ac48ec5118b3e72952ca0e43b34a5f3943c107a..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoQuest/src/app/interface/renderer/spherical-image.tsx +++ /dev/null @@ -1,288 +0,0 @@ -import { useEffect, useRef, useState } from "react" -import { PanoramaPosition, PluginConstructor, Point, Position, SphericalPosition, Viewer } from "@photo-sphere-viewer/core" -import { LensflarePlugin, ReactPhotoSphereViewer } from "react-photo-sphere-viewer" - -import { RenderedScene } from "@/types" - -import { MouseEventHandler } from "./types" -import { useImageDimension } from "@/lib/useImageDimension" -import { 
lightSourceNames } from "@/lib/lightSourceNames" - -type PhotoSpherePlugin = (PluginConstructor | [PluginConstructor, any]) - -export function SphericalImage({ - rendered, - onEvent, - className, - debug, -}: { - rendered: RenderedScene - onEvent: MouseEventHandler - className?: string - debug?: boolean -}) { - - - const imageDimension = useImageDimension(rendered.assetUrl) - const maskDimension = useImageDimension(rendered.maskUrl) - - const sceneConfig = JSON.stringify({ rendered, debug, imageDimension, maskDimension }) - const [lastSceneConfig, setLastSceneConfig] = useState(sceneConfig) - const rootContainerRef = useRef(null) - const viewerContainerRef = useRef() - const viewerRef = useRef() - const [mouseMoved, setMouseMoved] = useState(false) - - const defaultZoomLvl = 1 // 0 = 180 fov, 100 = 1 fov - - const options = { - defaultZoomLvl, - fisheye: false, // ..no! - overlay: rendered.maskUrl || undefined, - overlayOpacity: debug ? 0.5 : 0, - /* - panoData: { - fullWidth: 2000, - fullHeight: 1200, - croppedWidth: 1024, - croppedHeight: 512, - croppedX: 0, - croppedY: 200, - // poseHeading: 0, // 0 to 360 - posePitch: 0, // -90 to 90 - // poseRoll: 0, // -180 to 180 - } - */ - } - - - const cacheRef = useRef("") - useEffect(() => { - const listener = (e: DragEvent) => { - if (!rootContainerRef.current) { return } - - // TODO: check if we are currently dragging an object - // if yes, then we should check if clientX and clientY are matching the - const boundingRect = rootContainerRef.current.getBoundingClientRect() - - // abort if we are not currently dragging over our display area - if (e.clientX < boundingRect.left) { return } - if (e.clientX > (boundingRect.left + boundingRect.width)) { return } - if (e.clientY < boundingRect.top) { return } - if (e.clientY > (boundingRect.top + boundingRect.height)) { return } - - const containerX = e.clientX - boundingRect.left - const containerY = e.clientY - boundingRect.top - - const relativeX = containerX / boundingRect.width - const relativeY = containerY / boundingRect.height - - const key = `${relativeX},${relativeY}` - - // to avoid use - if (cacheRef.current === key) { - return - } - // console.log(`DRAG: calling onEvent("hover", ${relativeX}, ${relativeY})`) - - cacheRef.current = key - onEvent("hover", relativeX, relativeY) - } - - document.addEventListener('drag', listener) - - return () => { - document.removeEventListener('drag', listener) - } - }, [onEvent]) - - useEffect(() => { - const task = async () => { - // console.log("SphericalImage: useEffect") - if (sceneConfig !== lastSceneConfig) { - // console.log("SphericalImage: scene config changed!") - - if (!viewerRef.current) { - // console.log("SphericalImage: no ref!") - setLastSceneConfig(sceneConfig) - return - } - const viewer = viewerRef.current - - const newOptions = { - ...options, - } - - const lensflares: { id: string; position: SphericalPosition; type: number }[] = [] - - if (maskDimension.width && imageDimension.width) { - - // console.log("rendered.segments:", rendered.segments) - - rendered.segments - .filter(segment => lightSourceNames.includes(segment.label)) - .forEach(light => { - // console.log("light detected", light) - const [x1, y1, x2, y2] = light.box - const [centerX, centerY] = [(x1 + x2) / 2, (y1 + y2) / 2] - // console.log("center:", { centerX, centerY }) - const [relativeX, relativeY] = [centerX / maskDimension.width, centerY/ maskDimension.height] - // console.log("relative:", { relativeX, relativeY}) - - const panoramaPosition: PanoramaPosition = { - 
textureX: relativeX * imageDimension.width, - textureY: relativeY * imageDimension.height - } - // console.log("panoramaPosition:", panoramaPosition) - - const position = viewer.dataHelper.textureCoordsToSphericalCoords(panoramaPosition) - // console.log("sphericalPosition:", position) - if ( // make sure coordinates are valid - !isNaN(position.pitch) && isFinite(position.pitch) && - !isNaN(position.yaw) && isFinite(position.yaw)) { - lensflares.push({ - id: `flare_${lensflares.length}`, - position, - type: 0, - }) - } - }) - } - - // console.log("lensflares:", lensflares) - const lensFlarePlugin = viewer.getPlugin("lensflare") - lensFlarePlugin.setLensflares(lensflares) - - // console.log("SphericalImage: calling setOptions") - // console.log("SphericalImage: changing the panorama to: " + rendered.assetUrl.slice(0, 120)) - - await viewer.setPanorama(rendered.assetUrl, { - ...newOptions, - showLoader: false, - }) - - // TODO we should separate all those updates, probaby - viewer.setOptions(newOptions) - // viewer.setOverlay(rendered.maskUrl || undefined) - - // console.log("SphericalImage: asking to re-render") - viewerRef.current.needsUpdate() - - setLastSceneConfig(sceneConfig) - } - } - task() - }, [sceneConfig, rendered.assetUrl, viewerRef.current, maskDimension.width, imageDimension]) - - const handleEvent = async (event: React.MouseEvent, isClick: boolean) => { - const rootContainer = rootContainerRef.current - const viewer = viewerRef.current - const viewerContainer = viewerContainerRef.current - - /* - if (isClick) console.log(`handleEvent(${isClick})`, { - " imageDimension.width": imageDimension.width, - "rendered.maskUrl": rendered.maskUrl - }) - */ - - if (!viewer || !rootContainer || !viewerContainer || !imageDimension.width || !rendered.maskUrl) { - return - } - - const containerRect = viewerContainer.getBoundingClientRect() - // if (isClick) console.log("containerRect:", containerRect) - - const containerY = event.clientY - containerRect.top - // console.log("containerY:", containerY) - - const position: Position = viewer.getPosition() - - const viewerPosition: Point = viewer.dataHelper.sphericalCoordsToViewerCoords(position) - // if (isClick) console.log("viewerPosition:", viewerPosition) - - // we want to ignore events that are happening in the toolbar - // note that we will probably hide this toolbar at some point, - // to implement our own UI - if (isClick && containerY > (containerRect.height - 40)) { - // console.log("we are in the toolbar.. ignoring the click") - return - } - - const panoramaPosition: PanoramaPosition = viewer.dataHelper.sphericalCoordsToTextureCoords(position) - - if (typeof panoramaPosition.textureX !== "number" || typeof panoramaPosition.textureY !== "number") { - return - } - - const relativeX = panoramaPosition.textureX / imageDimension.width - const relativeY = panoramaPosition.textureY / imageDimension.height - - onEvent(isClick ? "click" : "hover", relativeX, relativeY) - } - - if (!rendered.assetUrl) { - return null - } - - return ( -
        { - handleEvent(event, false) - setMouseMoved(true) - }} - onMouseUp={(event) => { - if (!mouseMoved) { - handleEvent(event, true) - } - setMouseMoved(false) - }} - onMouseDown={() => { - setMouseMoved(false) - }} - > - { - // nothing to do here - }} - - onReady={(instance) => { - viewerRef.current = instance - viewerContainerRef.current = instance.container - - /* - const markersPlugs = instance.getPlugin(MarkersPlugin); - if (!markersPlugs) - return; - markersPlugs.addMarker({ - id: "imageLayer2", - imageLayer: "drone.png", - size: { width: 220, height: 220 }, - position: { yaw: '130.5deg', pitch: '-0.1deg' }, - tooltip: "Image embedded in the scene" - }); - markersPlugs.addEventListener("select-marker", () => { - console.log("asd"); - }); - */ - }} - - /> -
        - ) -} \ No newline at end of file diff --git a/spaces/jbilcke-hf/VideoQuest/src/components/ui/dialog.tsx b/spaces/jbilcke-hf/VideoQuest/src/components/ui/dialog.tsx deleted file mode 100644 index c5621059f4149bbc1b008837dd68082c76a8a5c5..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoQuest/src/components/ui/dialog.tsx +++ /dev/null @@ -1,123 +0,0 @@ -"use client" - -import * as React from "react" -import * as DialogPrimitive from "@radix-ui/react-dialog" -import { X } from "lucide-react" - -import { cn } from "@/lib/utils" - -const Dialog = DialogPrimitive.Root - -const DialogTrigger = DialogPrimitive.Trigger - -const DialogPortal = ({ - className, - ...props -}: DialogPrimitive.DialogPortalProps) => ( - -) -DialogPortal.displayName = DialogPrimitive.Portal.displayName - -const DialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogOverlay.displayName = DialogPrimitive.Overlay.displayName - -const DialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - {children} - - - Close - - - -)) -DialogContent.displayName = DialogPrimitive.Content.displayName - -const DialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
        -) -DialogHeader.displayName = "DialogHeader" - -const DialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
        -) -DialogFooter.displayName = "DialogFooter" - -const DialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogTitle.displayName = DialogPrimitive.Title.displayName - -const DialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogDescription.displayName = DialogPrimitive.Description.displayName - -export { - Dialog, - DialogTrigger, - DialogContent, - DialogHeader, - DialogFooter, - DialogTitle, - DialogDescription, -} diff --git a/spaces/jitesh/storytelling/app.py b/spaces/jitesh/storytelling/app.py deleted file mode 100644 index 6de670f708434d3d3ede5b8f0a7cbcc0dc7caa66..0000000000000000000000000000000000000000 --- a/spaces/jitesh/storytelling/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import random - -import numpy as np -import plotly.express as px -import streamlit as st - -from src import (StoryGenerator, run_create_statistics, run_play_storytelling, - run_probability_emote, display_logs) - -st.set_page_config(page_title='Storytelling ' + - u'\U0001F5BC', page_icon=u'\U0001F5BC', layout="wide", - ) -gen = StoryGenerator() -container_mode = st.sidebar.container() -container_guide = st.container() -container_param = st.sidebar.container() -container_button = st.sidebar.container() - -mode = container_mode.radio( - "Select a mode", - ('Probability Emote', 'Check Logs', 'Create Statistics', 'Play Storytelling'), index=0) - - -if mode == 'Create Statistics': - run_create_statistics(gen, container_guide, - container_param, container_button) -elif mode == 'Check Logs': - display_logs(gen, container_guide, - container_param, container_button) -elif mode == 'Play Storytelling': - run_play_storytelling(gen, container_guide, - container_param, container_button) -elif mode == 'Probability Emote': - run_probability_emote(container_param) diff --git a/spaces/jitubutwal1441/multiple-pdfs-chat/app.py b/spaces/jitubutwal1441/multiple-pdfs-chat/app.py deleted file mode 100644 index cde7c6082e26003f256434682f32056421d1d1c6..0000000000000000000000000000000000000000 --- a/spaces/jitubutwal1441/multiple-pdfs-chat/app.py +++ /dev/null @@ -1,104 +0,0 @@ -import streamlit as st -from dotenv import load_dotenv -from PyPDF2 import PdfReader -from langchain.text_splitter import CharacterTextSplitter -from langchain.embeddings import OpenAIEmbeddings, HuggingFaceInstructEmbeddings -from langchain.vectorstores import FAISS -from langchain.chat_models import ChatOpenAI -from langchain.memory import ConversationBufferMemory -from langchain.chains import ConversationalRetrievalChain -from htmlTemplates import css, bot_template, user_template -from langchain.llms import HuggingFaceHub - -def get_pdf_text(pdf_docs): - text = "" - for pdf in pdf_docs: - pdf_reader = PdfReader(pdf) - for page in pdf_reader.pages: - text += page.extract_text() - return text - - -def get_text_chunks(text): - text_splitter = CharacterTextSplitter( - separator="\n", - chunk_size=1000, - chunk_overlap=200, - length_function=len - ) - chunks = text_splitter.split_text(text) - return chunks - - -def get_vectorstore(text_chunks): - embeddings = OpenAIEmbeddings() - # embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl") - vectorstore = FAISS.from_texts(texts=text_chunks, embedding=embeddings) - return vectorstore - - -def get_conversation_chain(vectorstore): - llm = ChatOpenAI() - # llm = HuggingFaceHub(repo_id="google/flan-t5-xxl", 
model_kwargs={"temperature":0.5, "max_length":512}) - - memory = ConversationBufferMemory( - memory_key='chat_history', return_messages=True) - conversation_chain = ConversationalRetrievalChain.from_llm( - llm=llm, - retriever=vectorstore.as_retriever(), - memory=memory - ) - return conversation_chain - - -def handle_userinput(user_question): - response = st.session_state.conversation({'question': user_question}) - st.session_state.chat_history = response['chat_history'] - - for i, message in enumerate(st.session_state.chat_history): - if i % 2 == 0: - st.write(user_template.replace( - "{{MSG}}", message.content), unsafe_allow_html=True) - else: - st.write(bot_template.replace( - "{{MSG}}", message.content), unsafe_allow_html=True) - - -def main(): - load_dotenv() - st.set_page_config(page_title="Chat with multiple PDFs", - page_icon=":books:") - st.write(css, unsafe_allow_html=True) - - if "conversation" not in st.session_state: - st.session_state.conversation = None - if "chat_history" not in st.session_state: - st.session_state.chat_history = None - - st.header("Chat with multiple PDFs :books:") - user_question = st.text_input("Ask a question about your documents:") - if user_question: - handle_userinput(user_question) - - with st.sidebar: - st.subheader("Your documents") - pdf_docs = st.file_uploader( - "Upload your PDFs here and click on 'Process'", accept_multiple_files=True) - if st.button("Process"): - with st.spinner("Processing"): - # get pdf text - raw_text = get_pdf_text(pdf_docs) - - # get the text chunks - text_chunks = get_text_chunks(raw_text) - - # create vector store - vectorstore = get_vectorstore(text_chunks) - - # create conversation chain - st.session_state.conversation = get_conversation_chain( - vectorstore) - - -if __name__ == '__main__': - main() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/attrs/filters.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/attrs/filters.py deleted file mode 100644 index 52959005b088f0e5116c8b6acdbcc5937bbaacc8..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/attrs/filters.py +++ /dev/null @@ -1,3 +0,0 @@ -# SPDX-License-Identifier: MIT - -from attr.filters import * # noqa diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/errors.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/errors.py deleted file mode 100644 index 7333b27b587dfc48df0b8248c914a97b54fe2cad..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/errors.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright 2009-present MongoDB, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Exceptions raised by the BSON package.""" - - -class BSONError(Exception): - """Base class for all BSON exceptions.""" - - -class InvalidBSON(BSONError): - """Raised when trying to create a BSON object from invalid data.""" - - -class InvalidStringData(BSONError): - """Raised when trying to encode a string containing non-UTF8 data.""" - - -class InvalidDocument(BSONError): - """Raised when trying to create a BSON object from an invalid document.""" - - -class InvalidId(BSONError): - """Raised when trying to create an ObjectId from invalid data.""" diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_B_D_T_.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_B_D_T_.py deleted file mode 100644 index e9e2d5fde9cc5a72a17105d40e5c1c95ff09d824..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_B_D_T_.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. -# -# Google Author(s): Matt Fontaine - - -from fontTools.misc.textTools import bytesjoin -from fontTools.misc import sstruct -from . import E_B_D_T_ -from .BitmapGlyphMetrics import ( - BigGlyphMetrics, - bigGlyphMetricsFormat, - SmallGlyphMetrics, - smallGlyphMetricsFormat, -) -from .E_B_D_T_ import ( - BitmapGlyph, - BitmapPlusSmallMetricsMixin, - BitmapPlusBigMetricsMixin, -) -import struct - - -class table_C_B_D_T_(E_B_D_T_.table_E_B_D_T_): - - # Change the data locator table being referenced. - locatorName = "CBLC" - - # Modify the format class accessor for color bitmap use. - def getImageFormatClass(self, imageFormat): - try: - return E_B_D_T_.table_E_B_D_T_.getImageFormatClass(self, imageFormat) - except KeyError: - return cbdt_bitmap_classes[imageFormat] - - -# Helper method for removing export features not supported by color bitmaps. -# Write data in the parent class will default to raw if an option is unsupported. -def _removeUnsupportedForColor(dataFunctions): - dataFunctions = dict(dataFunctions) - del dataFunctions["row"] - return dataFunctions - - -class ColorBitmapGlyph(BitmapGlyph): - - fileExtension = ".png" - xmlDataFunctions = _removeUnsupportedForColor(BitmapGlyph.xmlDataFunctions) - - -class cbdt_bitmap_format_17(BitmapPlusSmallMetricsMixin, ColorBitmapGlyph): - def decompile(self): - self.metrics = SmallGlyphMetrics() - dummy, data = sstruct.unpack2(smallGlyphMetricsFormat, self.data, self.metrics) - (dataLen,) = struct.unpack(">L", data[:4]) - data = data[4:] - - # For the image data cut it to the size specified by dataLen. - assert dataLen <= len(data), "Data overun in format 17" - self.imageData = data[:dataLen] - - def compile(self, ttFont): - dataList = [] - dataList.append(sstruct.pack(smallGlyphMetricsFormat, self.metrics)) - dataList.append(struct.pack(">L", len(self.imageData))) - dataList.append(self.imageData) - return bytesjoin(dataList) - - -class cbdt_bitmap_format_18(BitmapPlusBigMetricsMixin, ColorBitmapGlyph): - def decompile(self): - self.metrics = BigGlyphMetrics() - dummy, data = sstruct.unpack2(bigGlyphMetricsFormat, self.data, self.metrics) - (dataLen,) = struct.unpack(">L", data[:4]) - data = data[4:] - - # For the image data cut it to the size specified by dataLen. 
- assert dataLen <= len(data), "Data overun in format 18" - self.imageData = data[:dataLen] - - def compile(self, ttFont): - dataList = [] - dataList.append(sstruct.pack(bigGlyphMetricsFormat, self.metrics)) - dataList.append(struct.pack(">L", len(self.imageData))) - dataList.append(self.imageData) - return bytesjoin(dataList) - - -class cbdt_bitmap_format_19(ColorBitmapGlyph): - def decompile(self): - (dataLen,) = struct.unpack(">L", self.data[:4]) - data = self.data[4:] - - assert dataLen <= len(data), "Data overun in format 19" - self.imageData = data[:dataLen] - - def compile(self, ttFont): - return struct.pack(">L", len(self.imageData)) + self.imageData - - -# Dict for CBDT extended formats. -cbdt_bitmap_classes = { - 17: cbdt_bitmap_format_17, - 18: cbdt_bitmap_format_18, - 19: cbdt_bitmap_format_19, -} diff --git a/spaces/jone/Music_Source_Separation/app.py b/spaces/jone/Music_Source_Separation/app.py deleted file mode 100644 index 54d639ebf4b7aa19be35d909f6e21ac4d7d1a81e..0000000000000000000000000000000000000000 --- a/spaces/jone/Music_Source_Separation/app.py +++ /dev/null @@ -1,54 +0,0 @@ -import os -os.system('pip install gradio==2.3.0a0') -os.system('pip freeze') -import sys -sys.path.append('.') -import gradio as gr -os.system('pip install -U torchtext==0.8.0') -#os.system('python setup.py install --install-dir .') -from scipy.io import wavfile - -os.system('chmod a+x ./separate_scripts/*.sh') -os.system('chmod a+x ./scripts/*.sh') -os.system('chmod a+x ./scripts/*/*.sh') -os.system('./separate_scripts/download_checkpoints.sh') - -def inference(audio): - input_path = audio.name - print(f"The audio file name is: {audio.name}") - output_path = os.path.splitext(input_path)[0] + ".wav" - os.system(f"ffmpeg -y -loglevel panic -i {input_path} -acodec pcm_s16le -ar 44100 {output_path}") - - # read the file and get the sample rate and data - # rate, data = wavfile.read(output_path) - try: - # try to read the file and get the sample rate and data - rate, data = wavfile.read(output_path) - except: - # if an exception occurs, read the original file instead - rate, data = wavfile.read(input_path) - - # save the result - wavfile.write('foo_left.wav', rate, data) - os.system("""python bytesep/inference.py --config_yaml=./scripts/4_train/musdb18/configs/vocals-accompaniment,resunet_subbandtime.yaml --checkpoint_path=./downloaded_checkpoints/resunet143_subbtandtime_vocals_8.8dB_350k_steps.pth --audio_path=foo_left.wav --output_path=sep_vocals.mp3""") - #os.system('./separate_scripts/separate_vocals.sh ' + audio.name + ' "sep_vocals.mp3"') - os.system("""python bytesep/inference.py --config_yaml=./scripts/4_train/musdb18/configs/accompaniment-vocals,resunet_subbandtime.yaml --checkpoint_path=./downloaded_checkpoints/resunet143_subbtandtime_accompaniment_16.4dB_350k_steps.pth --audio_path=foo_left.wav --output_path=sep_accompaniment.mp3""") - #os.system('./separate_scripts/separate_accompaniment.sh ' + audio.name + ' "sep_accompaniment.mp3"') - #os.system('python separate_scripts/separate.py --audio_path=' +audio.name+' --source_type="accompaniment"') - #os.system('python separate_scripts/separate.py --audio_path=' +audio.name+' --source_type="vocals"') - return 'sep_vocals.mp3', 'sep_accompaniment.mp3' -title = "Music Source Separation" -description = "Gradio demo for Music Source Separation. To use it, simply add your audio, or click one of the examples to load them. Currently supports .wav files. Read more at the links below." -article = "

        Decoupling Magnitude and Phase Estimation with Deep ResUNet for Music Source Separation | Github Repo

        " - -examples = [['example.wav']] -gr.Interface( - inference, - gr.inputs.Audio(type="file", label="Input"), - [gr.outputs.Audio(type="file", label="Vocals"),gr.outputs.Audio(type="file", label="Accompaniment")], - title=title, - description=description, - article=article, - enable_queue=True, - examples=examples - ).launch(debug=True) \ No newline at end of file diff --git a/spaces/josedolot/HybridNet_Demo2/encoders/timm_gernet.py b/spaces/josedolot/HybridNet_Demo2/encoders/timm_gernet.py deleted file mode 100644 index f98c030af3e62c36c28a88c53a4d18765ba78482..0000000000000000000000000000000000000000 --- a/spaces/josedolot/HybridNet_Demo2/encoders/timm_gernet.py +++ /dev/null @@ -1,124 +0,0 @@ -from timm.models import ByoModelCfg, ByoBlockCfg, ByobNet - -from ._base import EncoderMixin -import torch.nn as nn - - -class GERNetEncoder(ByobNet, EncoderMixin): - def __init__(self, out_channels, depth=5, **kwargs): - super().__init__(**kwargs) - self._depth = depth - self._out_channels = out_channels - self._in_channels = 3 - - del self.head - - def get_stages(self): - return [ - nn.Identity(), - self.stem, - self.stages[0], - self.stages[1], - self.stages[2], - nn.Sequential(self.stages[3], self.stages[4], self.final_conv) - ] - - def forward(self, x): - stages = self.get_stages() - - features = [] - for i in range(self._depth + 1): - x = stages[i](x) - features.append(x) - - return features - - def load_state_dict(self, state_dict, **kwargs): - state_dict.pop("head.fc.weight", None) - state_dict.pop("head.fc.bias", None) - super().load_state_dict(state_dict, **kwargs) - - -regnet_weights = { - 'timm-gernet_s': { - 'imagenet': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-ger-weights/gernet_s-756b4751.pth', - }, - 'timm-gernet_m': { - 'imagenet': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-ger-weights/gernet_m-0873c53a.pth', - }, - 'timm-gernet_l': { - 'imagenet': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-ger-weights/gernet_l-f31e2e8d.pth', - }, -} - -pretrained_settings = {} -for model_name, sources in regnet_weights.items(): - pretrained_settings[model_name] = {} - for source_name, source_url in sources.items(): - pretrained_settings[model_name][source_name] = { - "url": source_url, - 'input_range': [0, 1], - 'mean': [0.485, 0.456, 0.406], - 'std': [0.229, 0.224, 0.225], - 'num_classes': 1000 - } - -timm_gernet_encoders = { - 'timm-gernet_s': { - 'encoder': GERNetEncoder, - "pretrained_settings": pretrained_settings["timm-gernet_s"], - 'params': { - 'out_channels': (3, 13, 48, 48, 384, 1920), - 'cfg': ByoModelCfg( - blocks=( - ByoBlockCfg(type='basic', d=1, c=48, s=2, gs=0, br=1.), - ByoBlockCfg(type='basic', d=3, c=48, s=2, gs=0, br=1.), - ByoBlockCfg(type='bottle', d=7, c=384, s=2, gs=0, br=1 / 4), - ByoBlockCfg(type='bottle', d=2, c=560, s=2, gs=1, br=3.), - ByoBlockCfg(type='bottle', d=1, c=256, s=1, gs=1, br=3.), - ), - stem_chs=13, - stem_pool=None, - num_features=1920, - ) - }, - }, - 'timm-gernet_m': { - 'encoder': GERNetEncoder, - "pretrained_settings": pretrained_settings["timm-gernet_m"], - 'params': { - 'out_channels': (3, 32, 128, 192, 640, 2560), - 'cfg': ByoModelCfg( - blocks=( - ByoBlockCfg(type='basic', d=1, c=128, s=2, gs=0, br=1.), - ByoBlockCfg(type='basic', d=2, c=192, s=2, gs=0, br=1.), - ByoBlockCfg(type='bottle', d=6, c=640, s=2, gs=0, br=1 / 4), - ByoBlockCfg(type='bottle', d=4, c=640, s=2, gs=1, br=3.), - ByoBlockCfg(type='bottle', d=1, c=640, s=1, gs=1, br=3.), - ), - 
stem_chs=32, - stem_pool=None, - num_features=2560, - ) - }, - }, - 'timm-gernet_l': { - 'encoder': GERNetEncoder, - "pretrained_settings": pretrained_settings["timm-gernet_l"], - 'params': { - 'out_channels': (3, 32, 128, 192, 640, 2560), - 'cfg': ByoModelCfg( - blocks=( - ByoBlockCfg(type='basic', d=1, c=128, s=2, gs=0, br=1.), - ByoBlockCfg(type='basic', d=2, c=192, s=2, gs=0, br=1.), - ByoBlockCfg(type='bottle', d=6, c=640, s=2, gs=0, br=1 / 4), - ByoBlockCfg(type='bottle', d=5, c=640, s=2, gs=1, br=3.), - ByoBlockCfg(type='bottle', d=4, c=640, s=1, gs=1, br=3.), - ), - stem_chs=32, - stem_pool=None, - num_features=2560, - ) - }, - }, -} diff --git a/spaces/juuxn/SimpleRVC/infer_pack/modules/F0Predictor/HarvestF0Predictor.py b/spaces/juuxn/SimpleRVC/infer_pack/modules/F0Predictor/HarvestF0Predictor.py deleted file mode 100644 index 98d4e98b353008f81bde2c37e7da818763a992c9..0000000000000000000000000000000000000000 --- a/spaces/juuxn/SimpleRVC/infer_pack/modules/F0Predictor/HarvestF0Predictor.py +++ /dev/null @@ -1,86 +0,0 @@ -from infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class HarvestF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.hop_length, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.fs) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/kael558/InPaintAPI/README.md b/spaces/kael558/InPaintAPI/README.md deleted file mode 100644 index f3b121aa2cd88e5765d48cb943259e9e5f9af255..0000000000000000000000000000000000000000 --- 
a/spaces/kael558/InPaintAPI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: InPaintAPI -emoji: 🐨 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kangvcar/RealChar/client/web/src/components/Characters/index.js b/spaces/kangvcar/RealChar/client/web/src/components/Characters/index.js deleted file mode 100644 index eca70cbc1c9f78013e0bca192f2f0f48a70c9ee6..0000000000000000000000000000000000000000 --- a/spaces/kangvcar/RealChar/client/web/src/components/Characters/index.js +++ /dev/null @@ -1,98 +0,0 @@ -/** - * src/components/Characters/index.jsx - * create and display characters - * - * created by Lynchee on 7/16/23 - */ - -// Characters -import React, { useEffect, useState } from 'react'; -import './style.css'; -import raiden from '../../assets/images/raiden.png'; -import loki from '../../assets/svgs/loki.svg'; -import aiHelper from '../../assets/images/ai_helper.png'; -import pi from '../../assets/images/pi.jpeg'; -import elon from '../../assets/images/elon.png'; -import bruce from '../../assets/images/bruce.png'; -import steve from '../../assets/images/jobs.png'; -import realchar from '../../assets/svgs/realchar.svg'; -import sam from '../../assets/images/sam.png'; - -// create character groups -const createCharacterGroups = (message) => { - const options = message.split('\n').slice(1); - - const imageMap = { - 'Raiden Shogun And Ei': raiden, - 'Loki': loki, - 'Ai Character Helper': aiHelper, - 'Reflection Pi': pi, - 'Elon Musk': elon, - 'Bruce Wayne': bruce, - 'Steve Jobs': steve, - 'Sam Altman': sam - }; - - const newCharacterGroups = []; - options.forEach(option => { - const match = option.match(/^(\d+)\s-\s(.+)$/); - if (match) { - let src = imageMap[match[2]]; - if (!src) { - src = {realchar}; - } - - newCharacterGroups.push({ - id: match[1], - name: match[2], - imageSrc: src - }); - } - }); - - return newCharacterGroups; -} - -const Characters = ({ characterGroups, selectedCharacter, setSelectedCharacter, isPlaying, characterConfirmed }) => { - const [pulseAnimation, setPulseAnimation] = useState(null); - - // when the character is talking, show animation - useEffect(() => { - if (isPlaying) { - setPulseAnimation(Math.random() > 0.5 ? "pulse-animation-1" : "pulse-animation-2"); - } else { - setPulseAnimation(null); - } - }, [isPlaying]); - - const handleCharacterSelection = (e) => { - setSelectedCharacter(e.target.value); - }; - - return ( -
        -
        - {characterGroups.map(group => ( - (!characterConfirmed || group.id === selectedCharacter) && ( - - ) - ))} -
        -
        - ) -} - -export { Characters, createCharacterGroups }; diff --git a/spaces/kboaten/MIDI-Audio-Extension/MIDI-song-extender/musicautobot/music_transformer/transform.py b/spaces/kboaten/MIDI-Audio-Extension/MIDI-song-extender/musicautobot/music_transformer/transform.py deleted file mode 100644 index 597ae2c7ca88ea2957d91e6872c43bee3d908160..0000000000000000000000000000000000000000 --- a/spaces/kboaten/MIDI-Audio-Extension/MIDI-song-extender/musicautobot/music_transformer/transform.py +++ /dev/null @@ -1,231 +0,0 @@ -from ..numpy_encode import * -import numpy as np -from enum import Enum -import torch -from ..vocab import * -from functools import partial - -SEQType = Enum('SEQType', 'Mask, Sentence, Melody, Chords, Empty') - -class MusicItem(): - def __init__(self, data, vocab, stream=None, position=None): - self.data = data - self.vocab = vocab - self._stream = stream - self._position = position - def __repr__(self): return '\n'.join([ - f'\n{self.__class__.__name__} - {self.data.shape}', - f'{self.vocab.textify(self.data[:10])}...']) - def __len__(self): return len(self.data) - - @classmethod - def from_file(cls, midi_file, vocab): - return cls.from_stream(file2stream(midi_file), vocab) - @classmethod - def from_stream(cls, stream, vocab): - if not isinstance(stream, music21.stream.Score): stream = stream.voicesToParts() - chordarr = stream2chordarr(stream) # 2. - npenc = chordarr2npenc(chordarr) # 3. - return cls.from_npenc(npenc, vocab, stream) - @classmethod - def from_npenc(cls, npenc, vocab, stream=None): return MusicItem(npenc2idxenc(npenc, vocab), vocab, stream) - - @classmethod - def from_idx(cls, item, vocab): - idx,pos = item - return MusicItem(idx, vocab=vocab, position=pos) - def to_idx(self): return self.data, self.position - - @classmethod - def empty(cls, vocab, seq_type=SEQType.Sentence): - return MusicItem(seq_prefix(seq_type, vocab), vocab) - - @property - def stream(self): - self._stream = self.to_stream() if self._stream is None else self._stream - return self._stream - - def to_stream(self, bpm=120): - return idxenc2stream(self.data, self.vocab, bpm=bpm) - - def to_tensor(self, device=None): - return to_tensor(self.data, device) - - def to_text(self, sep=' '): return self.vocab.textify(self.data, sep) - - @property - def position(self): - self._position = position_enc(self.data, self.vocab) if self._position is None else self._position - return self._position - - def get_pos_tensor(self, device=None): return to_tensor(self.position, device) - - def to_npenc(self): - return idxenc2npenc(self.data, self.vocab) - - def show(self, format:str=None): - return self.stream.show(format) - def play(self): self.stream.show('midi') - - @property - def new(self): - return partial(type(self), vocab=self.vocab) - - def trim_to_beat(self, beat, include_last_sep=False): - return self.new(trim_to_beat(self.data, self.position, self.vocab, beat, include_last_sep)) - - def transpose(self, interval): - return self.new(tfm_transpose(self.data, interval, self.vocab), position=self._position) - - def append(self, item): - return self.new(np.concatenate((self.data, item.data), axis=0)) - - def mask_pitch(self, section=None): - return self.new(self.mask(self.vocab.note_range, section), position=self.position) - - def mask_duration(self, section=None, keep_position_enc=True): - masked_data = self.mask(self.vocab.dur_range, section) - if keep_position_enc: return self.new(masked_data, position=self.position) - return self.new(masked_data) - - def mask(self, token_range, 
section_range=None): - return mask_section(self.data, self.position, token_range, self.vocab.mask_idx, section_range=section_range) - - def pad_to(self, bptt): - data = pad_seq(self.data, bptt, self.vocab.pad_idx) - pos = pad_seq(self.position, bptt, 0) - return self.new(data, stream=self._stream, position=pos) - - def split_stream_parts(self): - self._stream = separate_melody_chord(self.stream) - return self.stream - - def remove_eos(self): - if self.data[-1] == self.vocab.stoi[EOS]: return self.new(self.data, stream=self.stream) - return self - - def split_parts(self): - return self.new(self.data, stream=separate_melody_chord(self.stream), position=self.position) - -def pad_seq(seq, bptt, value): - pad_len = max(bptt-seq.shape[0], 0) - return np.pad(seq, (0, pad_len), 'constant', constant_values=value)[:bptt] - -def to_tensor(t, device=None): - t = t if isinstance(t, torch.Tensor) else torch.tensor(t) - if device is None and torch.cuda.is_available(): t = t.cuda() - else: t.to(device) - return t.long() - -def midi2idxenc(midi_file, vocab): - "Converts midi file to index encoding for training" - npenc = midi2npenc(midi_file) # 3. - return npenc2idxenc(npenc, vocab) - -def idxenc2stream(arr, vocab, bpm=120): - "Converts index encoding to music21 stream" - npenc = idxenc2npenc(arr, vocab) - return npenc2stream(npenc, bpm=bpm) - -# single stream instead of note,dur -def npenc2idxenc(t, vocab, seq_type=SEQType.Sentence, add_eos=False): - "Transforms numpy array from 2 column (note, duration) matrix to a single column" - "[[n1, d1], [n2, d2], ...] -> [n1, d1, n2, d2]" - if isinstance(t, (list, tuple)) and len(t) == 2: - return [npenc2idxenc(x, vocab, start_seq) for x in t] - t = t.copy() - - t[:, 0] = t[:, 0] + vocab.note_range[0] - t[:, 1] = t[:, 1] + vocab.dur_range[0] - - prefix = seq_prefix(seq_type, vocab) - suffix = np.array([vocab.stoi[EOS]]) if add_eos else np.empty(0, dtype=int) - return np.concatenate([prefix, t.reshape(-1), suffix]) - -def seq_prefix(seq_type, vocab): - if seq_type == SEQType.Empty: return np.empty(0, dtype=int) - start_token = vocab.bos_idx - if seq_type == SEQType.Chords: start_token = vocab.stoi[CSEQ] - if seq_type == SEQType.Melody: start_token = vocab.stoi[MSEQ] - return np.array([start_token, vocab.pad_idx]) - -def idxenc2npenc(t, vocab, validate=True): - if validate: t = to_valid_idxenc(t, vocab.npenc_range) - t = t.copy().reshape(-1, 2) - if t.shape[0] == 0: return t - - t[:, 0] = t[:, 0] - vocab.note_range[0] - t[:, 1] = t[:, 1] - vocab.dur_range[0] - - if validate: return to_valid_npenc(t) - return t - -def to_valid_idxenc(t, valid_range): - r = valid_range - t = t[np.where((t >= r[0]) & (t < r[1]))] - if t.shape[-1] % 2 == 1: t = t[..., :-1] - return t - -def to_valid_npenc(t): - is_note = (t[:, 0] < VALTSEP) | (t[:, 0] >= NOTE_SIZE) - invalid_note_idx = is_note.argmax() - invalid_dur_idx = (t[:, 1] < 0).argmax() - - invalid_idx = max(invalid_dur_idx, invalid_note_idx) - if invalid_idx > 0: - if invalid_note_idx > 0 and invalid_dur_idx > 0: invalid_idx = min(invalid_dur_idx, invalid_note_idx) - print('Non midi note detected. Only returning valid portion. Index, seed', invalid_idx, t.shape) - return t[:invalid_idx] - return t - -def position_enc(idxenc, vocab): - "Calculates positional beat encoding." 
- sep_idxs = (idxenc == vocab.sep_idx).nonzero()[0] - sep_idxs = sep_idxs[sep_idxs+2 < idxenc.shape[0]] # remove any indexes right before out of bounds (sep_idx+2) - dur_vals = idxenc[sep_idxs+1] - dur_vals[dur_vals == vocab.mask_idx] = vocab.dur_range[0] # make sure masked durations are 0 - dur_vals -= vocab.dur_range[0] - - posenc = np.zeros_like(idxenc) - posenc[sep_idxs+2] = dur_vals - return posenc.cumsum() - -def beat2index(idxenc, pos, vocab, beat, include_last_sep=False): - cutoff = find_beat(pos, beat) - if cutoff < 2: return 2 # always leave starter tokens - if len(idxenc) < 2 or include_last_sep: return cutoff - if idxenc[cutoff - 2] == vocab.sep_idx: return cutoff - 2 - return cutoff - -def find_beat(pos, beat, sample_freq=SAMPLE_FREQ, side='left'): - return np.searchsorted(pos, beat * sample_freq, side=side) - -# TRANSFORMS - -def tfm_transpose(x, value, vocab): - x = x.copy() - x[(x >= vocab.note_range[0]) & (x < vocab.note_range[1])] += value - return x - -def trim_to_beat(idxenc, pos, vocab, to_beat=None, include_last_sep=True): - if to_beat is None: return idxenc - cutoff = beat2index(idxenc, pos, vocab, to_beat, include_last_sep=include_last_sep) - return idxenc[:cutoff] - -def mask_input(xb, mask_range, replacement_idx): - xb = xb.copy() - xb[(xb >= mask_range[0]) & (xb < mask_range[1])] = replacement_idx - return xb - -def mask_section(xb, pos, token_range, replacement_idx, section_range=None): - xb = xb.copy() - token_mask = (xb >= token_range[0]) & (xb < token_range[1]) - - if section_range is None: section_range = (None, None) - section_mask = np.zeros_like(xb, dtype=bool) - start_idx = find_beat(pos, section_range[0]) if section_range[0] is not None else 0 - end_idx = find_beat(pos, section_range[1]) if section_range[1] is not None else xb.shape[0] - section_mask[start_idx:end_idx] = True - - xb[token_mask & section_mask] = replacement_idx - return xb diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/data_objects/__init__.py b/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/data_objects/__init__.py deleted file mode 100644 index 030317a1d9a328d452bf29bc7a802e29629b1a42..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/data_objects/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from speaker_encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataset -from speaker_encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataLoader diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/extract_kp_videos_safe.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/extract_kp_videos_safe.py deleted file mode 100644 index 5141ba3adfdd62b6205909dca519d66271c425ad..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/extract_kp_videos_safe.py +++ /dev/null @@ -1,151 +0,0 @@ -import os -import cv2 -import time -import glob -import argparse -import numpy as np -from PIL import Image -import torch -from tqdm import tqdm -from itertools import cycle -from torch.multiprocessing import Pool, Process, set_start_method - -from facexlib.alignment import landmark_98_to_68 -from facexlib.detection import init_detection_model - -from facexlib.utils import load_file_from_url -from src.face3d.util.my_awing_arch import FAN - -def init_alignment_model(model_name, half=False, device='cuda', model_rootpath=None): - if model_name == 'awing_fan': - model = FAN(num_modules=4, num_landmarks=98, device=device) - model_url = 
'https://github.com/xinntao/facexlib/releases/download/v0.1.0/alignment_WFLW_4HG.pth' - else: - raise NotImplementedError(f'{model_name} is not implemented.') - - model_path = load_file_from_url( - url=model_url, model_dir='facexlib/weights', progress=True, file_name=None, save_dir=model_rootpath) - model.load_state_dict(torch.load(model_path, map_location=device)['state_dict'], strict=True) - model.eval() - model = model.to(device) - return model - - -class KeypointExtractor(): - def __init__(self, device='cuda'): - - ### gfpgan/weights - try: - import webui # in webui - root_path = 'extensions/SadTalker/gfpgan/weights' - - except: - root_path = 'gfpgan/weights' - - self.detector = init_alignment_model('awing_fan',device=device, model_rootpath=root_path) - self.det_net = init_detection_model('retinaface_resnet50', half=False,device=device, model_rootpath=root_path) - - def extract_keypoint(self, images, name=None, info=True): - if isinstance(images, list): - keypoints = [] - if info: - i_range = tqdm(images,desc='landmark Det:') - else: - i_range = images - - for image in i_range: - current_kp = self.extract_keypoint(image) - # current_kp = self.detector.get_landmarks(np.array(image)) - if np.mean(current_kp) == -1 and keypoints: - keypoints.append(keypoints[-1]) - else: - keypoints.append(current_kp[None]) - - keypoints = np.concatenate(keypoints, 0) - np.savetxt(os.path.splitext(name)[0]+'.txt', keypoints.reshape(-1)) - return keypoints - else: - while True: - try: - with torch.no_grad(): - # face detection -> face alignment. - img = np.array(images) - bboxes = self.det_net.detect_faces(images, 0.97) - - bboxes = bboxes[0] - img = img[int(bboxes[1]):int(bboxes[3]), int(bboxes[0]):int(bboxes[2]), :] - - keypoints = landmark_98_to_68(self.detector.get_landmarks(img)) # [0] - - #### keypoints to the original location - keypoints[:,0] += int(bboxes[0]) - keypoints[:,1] += int(bboxes[1]) - - break - except RuntimeError as e: - if str(e).startswith('CUDA'): - print("Warning: out of memory, sleep for 1s") - time.sleep(1) - else: - print(e) - break - except TypeError: - print('No face detected in this image') - shape = [68, 2] - keypoints = -1. 
* np.ones(shape) - break - if name is not None: - np.savetxt(os.path.splitext(name)[0]+'.txt', keypoints.reshape(-1)) - return keypoints - -def read_video(filename): - frames = [] - cap = cv2.VideoCapture(filename) - while cap.isOpened(): - ret, frame = cap.read() - if ret: - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - frame = Image.fromarray(frame) - frames.append(frame) - else: - break - cap.release() - return frames - -def run(data): - filename, opt, device = data - os.environ['CUDA_VISIBLE_DEVICES'] = device - kp_extractor = KeypointExtractor() - images = read_video(filename) - name = filename.split('/')[-2:] - os.makedirs(os.path.join(opt.output_dir, name[-2]), exist_ok=True) - kp_extractor.extract_keypoint( - images, - name=os.path.join(opt.output_dir, name[-2], name[-1]) - ) - -if __name__ == '__main__': - set_start_method('spawn') - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser.add_argument('--input_dir', type=str, help='the folder of the input files') - parser.add_argument('--output_dir', type=str, help='the folder of the output files') - parser.add_argument('--device_ids', type=str, default='0,1') - parser.add_argument('--workers', type=int, default=4) - - opt = parser.parse_args() - filenames = list() - VIDEO_EXTENSIONS_LOWERCASE = {'mp4'} - VIDEO_EXTENSIONS = VIDEO_EXTENSIONS_LOWERCASE.union({f.upper() for f in VIDEO_EXTENSIONS_LOWERCASE}) - extensions = VIDEO_EXTENSIONS - - for ext in extensions: - os.listdir(f'{opt.input_dir}') - print(f'{opt.input_dir}/*.{ext}') - filenames = sorted(glob.glob(f'{opt.input_dir}/*.{ext}')) - print('Total number of videos:', len(filenames)) - pool = Pool(opt.workers) - args_list = cycle([opt]) - device_ids = opt.device_ids.split(",") - device_ids = cycle(device_ids) - for data in tqdm(pool.imap_unordered(run, zip(filenames, args_list, device_ids))): - None diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/facerender/sync_batchnorm/unittest.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/facerender/sync_batchnorm/unittest.py deleted file mode 100644 index 0675c022e4ba85d38d1f813490f6740150909524..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/facerender/sync_batchnorm/unittest.py +++ /dev/null @@ -1,29 +0,0 @@ -# -*- coding: utf-8 -*- -# File : unittest.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. 
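-# Typical usage (a minimal sketch, not from the original test suite):
-# subclass TorchTestCase and compare a module's output against a reference
-# tensor within a tolerance, e.g.
-#
-#     import torch
-#     class MyTest(TorchTestCase):
-#         def test_identity(self):
-#             x = torch.randn(4, 3)
-#             self.assertTensorClose(x, x.clone())
-#
-# assertTensorClose converts both operands to numpy via as_numpy and
-# delegates to np.allclose; note that only the absolute tolerance `atol`
-# is forwarded to the comparison, so `rtol` is accepted but unused.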
- -import unittest - -import numpy as np -from torch.autograd import Variable - - -def as_numpy(v): - if isinstance(v, Variable): - v = v.data - return v.cpu().numpy() - - -class TorchTestCase(unittest.TestCase): - def assertTensorClose(self, a, b, atol=1e-3, rtol=1e-3): - npa, npb = as_numpy(a), as_numpy(b) - self.assertTrue( - np.allclose(npa, npb, atol=atol), - 'Tensor close check failed\n{}\n{}\nadiff={}, rdiff={}'.format(a, b, np.abs(npa - npb).max(), np.abs((npa - npb) / np.fmax(npa, 1e-5)).max()) - ) diff --git a/spaces/kevinwang676/VITS2-Mandarin/monotonic_align/setup.py b/spaces/kevinwang676/VITS2-Mandarin/monotonic_align/setup.py deleted file mode 100644 index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VITS2-Mandarin/monotonic_align/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize -import numpy - -setup( - name = 'monotonic_align', - ext_modules = cythonize("core.pyx"), - include_dirs=[numpy.get_include()] -) diff --git a/spaces/khan994/sketch/app.py b/spaces/khan994/sketch/app.py deleted file mode 100644 index c547417dbd7864e4107c162ff92f34d7a0872c6d..0000000000000000000000000000000000000000 --- a/spaces/khan994/sketch/app.py +++ /dev/null @@ -1,48 +0,0 @@ -from fastai.vision.all import * -import cv2 -import gradio as gr -import glob - -class Hook(): - def hook_func(self, m, i, o): self.stored = o.detach().clone() - - -#@title DataLoader -path = "drawings2" -dblock = DataBlock(blocks = (ImageBlock, CategoryBlock), - get_items = get_image_files, - get_y=parent_label, - splitter = RandomSplitter(valid_pct=0.2), - item_tfms=RandomResizedCrop(128, min_scale=0.7), - batch_tfms=[*aug_transforms(max_rotate=0, max_warp=0), - Normalize.from_stats(*imagenet_stats)]) -dls_augmented = dblock.dataloaders(path, shuffle=True) - -learn=vision_learner(dls_augmented, resnet152) -learn.load("rn152_sketch_9label_mixup_0_3") - -class Hook(): - def hook_func(self, m, i, o): self.stored = o.detach().clone() - -def gradcam(img_create): - - pred,idx,probs=learn.predict(img_create) - return dict(zip(categories, map(float, probs))) - -categories = ('balkanlar_osmanli', 'bursa', 'cankirievi', 'diyarbakir', 'kayseri', 'kula', 'ordu', 'ormana_antalya', 'pazaryeri') -#def classify_img(img): -# pred,idx,probs=learn.predict(img) -# return dict(zip(categories, map(float, probs))) - -image=gr.inputs.Image(shape=(128,128)) -label=gr.outputs.Label() -#examples_=[] -#for i in glob.glob("valid/**/*.jpg", recursive=True): -# examples_.append(i) - -examples=["sf107.jpg", "sf27_example3.png", "diyarbakir-1.jpg", "sf108.jpg", "sf135.png"] - - -demo = gr.Interface(fn=gradcam, inputs=image, outputs=[label], examples=examples) - -demo.launch(inline=False) \ No newline at end of file diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/utils/load_yaml.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/utils/load_yaml.py deleted file mode 100644 index 5792ff471dc63bacc8c27a7bcc2d4bd6f1e35da8..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/utils/load_yaml.py +++ /dev/null @@ -1,58 +0,0 @@ -import yaml - - -def load_hparams(filename): - stream = open(filename, 'r') - docs = yaml.safe_load_all(stream) - hparams_dict = dict() - for doc in docs: - for k, v in doc.items(): - hparams_dict[k] = v - return hparams_dict - -def merge_dict(user, default): - if isinstance(user, dict) and isinstance(default, dict): - for k, v in default.items(): - if k not in 
user: - user[k] = v - else: - user[k] = merge_dict(user[k], v) - return user - -class Dotdict(dict): - """ - a dictionary that supports dot notation - as well as dictionary access notation - usage: d = DotDict() or d = DotDict({'val1':'first'}) - set attributes: d.val2 = 'second' or d['val2'] = 'second' - get attributes: d.val2 or d['val2'] - """ - __getattr__ = dict.__getitem__ - __setattr__ = dict.__setitem__ - __delattr__ = dict.__delitem__ - - def __init__(self, dct=None): - dct = dict() if not dct else dct - for key, value in dct.items(): - if hasattr(value, 'keys'): - value = Dotdict(value) - self[key] = value - -class HpsYaml(Dotdict): - def __init__(self, yaml_file): - super(Dotdict, self).__init__() - hps = load_hparams(yaml_file) - hp_dict = Dotdict(hps) - for k, v in hp_dict.items(): - setattr(self, k, v) - - __getattr__ = Dotdict.__getitem__ - __setattr__ = Dotdict.__setitem__ - __delattr__ = Dotdict.__delitem__ - - - - - - - diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/transformer.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/transformer.py deleted file mode 100644 index e61ae0dd941a7be00b3e41a3de833ec50470a45f..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/transformer.py +++ /dev/null @@ -1,595 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -import torch -import torch.nn as nn - -from annotator.uniformer.mmcv import ConfigDict, deprecated_api_warning -from annotator.uniformer.mmcv.cnn import Linear, build_activation_layer, build_norm_layer -from annotator.uniformer.mmcv.runner.base_module import BaseModule, ModuleList, Sequential -from annotator.uniformer.mmcv.utils import build_from_cfg -from .drop import build_dropout -from .registry import (ATTENTION, FEEDFORWARD_NETWORK, POSITIONAL_ENCODING, - TRANSFORMER_LAYER, TRANSFORMER_LAYER_SEQUENCE) - -# Avoid BC-breaking of importing MultiScaleDeformableAttention from this file -try: - from annotator.uniformer.mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention # noqa F401 - warnings.warn( - ImportWarning( - '``MultiScaleDeformableAttention`` has been moved to ' - '``mmcv.ops.multi_scale_deform_attn``, please change original path ' # noqa E501 - '``from annotator.uniformer.mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention`` ' # noqa E501 - 'to ``from annotator.uniformer.mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention`` ' # noqa E501 - )) - -except ImportError: - warnings.warn('Fail to import ``MultiScaleDeformableAttention`` from ' - '``mmcv.ops.multi_scale_deform_attn``, ' - 'You should install ``mmcv-full`` if you need this module. 
') - - -def build_positional_encoding(cfg, default_args=None): - """Builder for Position Encoding.""" - return build_from_cfg(cfg, POSITIONAL_ENCODING, default_args) - - -def build_attention(cfg, default_args=None): - """Builder for attention.""" - return build_from_cfg(cfg, ATTENTION, default_args) - - -def build_feedforward_network(cfg, default_args=None): - """Builder for feed-forward network (FFN).""" - return build_from_cfg(cfg, FEEDFORWARD_NETWORK, default_args) - - -def build_transformer_layer(cfg, default_args=None): - """Builder for transformer layer.""" - return build_from_cfg(cfg, TRANSFORMER_LAYER, default_args) - - -def build_transformer_layer_sequence(cfg, default_args=None): - """Builder for transformer encoder and transformer decoder.""" - return build_from_cfg(cfg, TRANSFORMER_LAYER_SEQUENCE, default_args) - - -@ATTENTION.register_module() -class MultiheadAttention(BaseModule): - """A wrapper for ``torch.nn.MultiheadAttention``. - - This module implements MultiheadAttention with identity connection, - and positional encoding is also passed as input. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. - attn_drop (float): A Dropout layer on attn_output_weights. - Default: 0.0. - proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. - Default: 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): When it is True, Key, Query and Value are shape of - (batch, n, embed_dim), otherwise (n, batch, embed_dim). - Default to False. - """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=dict(type='Dropout', drop_prob=0.), - init_cfg=None, - batch_first=False, - **kwargs): - super(MultiheadAttention, self).__init__(init_cfg) - if 'dropout' in kwargs: - warnings.warn('The arguments `dropout` in MultiheadAttention ' - 'has been deprecated, now you can separately ' - 'set `attn_drop`(float), proj_drop(float), ' - 'and `dropout_layer`(dict) ') - attn_drop = kwargs['dropout'] - dropout_layer['drop_prob'] = kwargs.pop('dropout') - - self.embed_dims = embed_dims - self.num_heads = num_heads - self.batch_first = batch_first - - self.attn = nn.MultiheadAttention(embed_dims, num_heads, attn_drop, - **kwargs) - - self.proj_drop = nn.Dropout(proj_drop) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else nn.Identity() - - @deprecated_api_warning({'residual': 'identity'}, - cls_name='MultiheadAttention') - def forward(self, - query, - key=None, - value=None, - identity=None, - query_pos=None, - key_pos=None, - attn_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `MultiheadAttention`. - - **kwargs allow passing a more general data flow when combining - with other operations in `transformerlayer`. - - Args: - query (Tensor): The input query with shape [num_queries, bs, - embed_dims] if self.batch_first is False, else - [bs, num_queries embed_dims]. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims] if self.batch_first is False, else - [bs, num_keys, embed_dims] . - If None, the ``query`` will be used. Defaults to None. - value (Tensor): The value tensor with same shape as `key`. - Same in `nn.MultiheadAttention.forward`. Defaults to None. - If None, the `key` will be used. - identity (Tensor): This tensor, with the same shape as x, - will be used for the identity link. 
- If None, `x` will be used. Defaults to None. - query_pos (Tensor): The positional encoding for query, with - the same shape as `x`. If not None, it will - be added to `x` before forward function. Defaults to None. - key_pos (Tensor): The positional encoding for `key`, with the - same shape as `key`. Defaults to None. If not None, it will - be added to `key` before forward function. If None, and - `query_pos` has the same shape as `key`, then `query_pos` - will be used for `key_pos`. Defaults to None. - attn_mask (Tensor): ByteTensor mask with shape [num_queries, - num_keys]. Same in `nn.MultiheadAttention.forward`. - Defaults to None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_keys]. - Defaults to None. - - Returns: - Tensor: forwarded results with shape - [num_queries, bs, embed_dims] - if self.batch_first is False, else - [bs, num_queries embed_dims]. - """ - - if key is None: - key = query - if value is None: - value = key - if identity is None: - identity = query - if key_pos is None: - if query_pos is not None: - # use query_pos if key_pos is not available - if query_pos.shape == key.shape: - key_pos = query_pos - else: - warnings.warn(f'position encoding of key is' - f'missing in {self.__class__.__name__}.') - if query_pos is not None: - query = query + query_pos - if key_pos is not None: - key = key + key_pos - - # Because the dataflow('key', 'query', 'value') of - # ``torch.nn.MultiheadAttention`` is (num_query, batch, - # embed_dims), We should adjust the shape of dataflow from - # batch_first (batch, num_query, embed_dims) to num_query_first - # (num_query ,batch, embed_dims), and recover ``attn_output`` - # from num_query_first to batch_first. - if self.batch_first: - query = query.transpose(0, 1) - key = key.transpose(0, 1) - value = value.transpose(0, 1) - - out = self.attn( - query=query, - key=key, - value=value, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask)[0] - - if self.batch_first: - out = out.transpose(0, 1) - - return identity + self.dropout_layer(self.proj_drop(out)) - - -@FEEDFORWARD_NETWORK.register_module() -class FFN(BaseModule): - """Implements feed-forward networks (FFNs) with identity connection. - - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. Defaults: 256. - feedforward_channels (int): The hidden dimension of FFNs. - Defaults: 1024. - num_fcs (int, optional): The number of fully-connected layers in - FFNs. Default: 2. - act_cfg (dict, optional): The activation config for FFNs. - Default: dict(type='ReLU') - ffn_drop (float, optional): Probability of an element to be - zeroed in FFN. Default 0.0. - add_identity (bool, optional): Whether to add the - identity connection. Default: `True`. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - @deprecated_api_warning( - { - 'dropout': 'ffn_drop', - 'add_residual': 'add_identity' - }, - cls_name='FFN') - def __init__(self, - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - act_cfg=dict(type='ReLU', inplace=True), - ffn_drop=0., - dropout_layer=None, - add_identity=True, - init_cfg=None, - **kwargs): - super(FFN, self).__init__(init_cfg) - assert num_fcs >= 2, 'num_fcs should be no less ' \ - f'than 2. got {num_fcs}.' 
- self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.num_fcs = num_fcs - self.act_cfg = act_cfg - self.activate = build_activation_layer(act_cfg) - - layers = [] - in_channels = embed_dims - for _ in range(num_fcs - 1): - layers.append( - Sequential( - Linear(in_channels, feedforward_channels), self.activate, - nn.Dropout(ffn_drop))) - in_channels = feedforward_channels - layers.append(Linear(feedforward_channels, embed_dims)) - layers.append(nn.Dropout(ffn_drop)) - self.layers = Sequential(*layers) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else torch.nn.Identity() - self.add_identity = add_identity - - @deprecated_api_warning({'residual': 'identity'}, cls_name='FFN') - def forward(self, x, identity=None): - """Forward function for `FFN`. - - The function would add x to the output tensor if residue is None. - """ - out = self.layers(x) - if not self.add_identity: - return self.dropout_layer(out) - if identity is None: - identity = x - return identity + self.dropout_layer(out) - - -@TRANSFORMER_LAYER.register_module() -class BaseTransformerLayer(BaseModule): - """Base `TransformerLayer` for vision transformer. - - It can be built from `mmcv.ConfigDict` and support more flexible - customization, for example, using any number of `FFN or LN ` and - use different kinds of `attention` by specifying a list of `ConfigDict` - named `attn_cfgs`. It is worth mentioning that it supports `prenorm` - when you specifying `norm` as the first element of `operation_order`. - More details about the `prenorm`: `On Layer Normalization in the - Transformer Architecture `_ . - - Args: - attn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )): - Configs for `self_attention` or `cross_attention` modules, - The order of the configs in the list should be consistent with - corresponding attentions in operation_order. - If it is a dict, all of the attention modules in operation_order - will be built with this config. Default: None. - ffn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )): - Configs for FFN, The order of the configs in the list should be - consistent with corresponding ffn in operation_order. - If it is a dict, all of the attention modules in operation_order - will be built with this config. - operation_order (tuple[str]): The execution order of operation - in transformer. Such as ('self_attn', 'norm', 'ffn', 'norm'). - Support `prenorm` when you specifying first element as `norm`. - Default:None. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): Key, Query and Value are shape - of (batch, n, embed_dim) - or (n, batch, embed_dim). Default to False. - """ - - def __init__(self, - attn_cfgs=None, - ffn_cfgs=dict( - type='FFN', - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - ffn_drop=0., - act_cfg=dict(type='ReLU', inplace=True), - ), - operation_order=None, - norm_cfg=dict(type='LN'), - init_cfg=None, - batch_first=False, - **kwargs): - - deprecated_args = dict( - feedforward_channels='feedforward_channels', - ffn_dropout='ffn_drop', - ffn_num_fcs='num_fcs') - for ori_name, new_name in deprecated_args.items(): - if ori_name in kwargs: - warnings.warn( - f'The arguments `{ori_name}` in BaseTransformerLayer ' - f'has been deprecated, now you should set `{new_name}` ' - f'and other FFN related arguments ' - f'to a dict named `ffn_cfgs`. 
') - ffn_cfgs[new_name] = kwargs[ori_name] - - super(BaseTransformerLayer, self).__init__(init_cfg) - - self.batch_first = batch_first - - assert set(operation_order) & set( - ['self_attn', 'norm', 'ffn', 'cross_attn']) == \ - set(operation_order), f'The operation_order of' \ - f' {self.__class__.__name__} should ' \ - f'contains all four operation type ' \ - f"{['self_attn', 'norm', 'ffn', 'cross_attn']}" - - num_attn = operation_order.count('self_attn') + operation_order.count( - 'cross_attn') - if isinstance(attn_cfgs, dict): - attn_cfgs = [copy.deepcopy(attn_cfgs) for _ in range(num_attn)] - else: - assert num_attn == len(attn_cfgs), f'The length ' \ - f'of attn_cfg {num_attn} is ' \ - f'not consistent with the number of attention' \ - f'in operation_order {operation_order}.' - - self.num_attn = num_attn - self.operation_order = operation_order - self.norm_cfg = norm_cfg - self.pre_norm = operation_order[0] == 'norm' - self.attentions = ModuleList() - - index = 0 - for operation_name in operation_order: - if operation_name in ['self_attn', 'cross_attn']: - if 'batch_first' in attn_cfgs[index]: - assert self.batch_first == attn_cfgs[index]['batch_first'] - else: - attn_cfgs[index]['batch_first'] = self.batch_first - attention = build_attention(attn_cfgs[index]) - # Some custom attentions used as `self_attn` - # or `cross_attn` can have different behavior. - attention.operation_name = operation_name - self.attentions.append(attention) - index += 1 - - self.embed_dims = self.attentions[0].embed_dims - - self.ffns = ModuleList() - num_ffns = operation_order.count('ffn') - if isinstance(ffn_cfgs, dict): - ffn_cfgs = ConfigDict(ffn_cfgs) - if isinstance(ffn_cfgs, dict): - ffn_cfgs = [copy.deepcopy(ffn_cfgs) for _ in range(num_ffns)] - assert len(ffn_cfgs) == num_ffns - for ffn_index in range(num_ffns): - if 'embed_dims' not in ffn_cfgs[ffn_index]: - ffn_cfgs['embed_dims'] = self.embed_dims - else: - assert ffn_cfgs[ffn_index]['embed_dims'] == self.embed_dims - self.ffns.append( - build_feedforward_network(ffn_cfgs[ffn_index], - dict(type='FFN'))) - - self.norms = ModuleList() - num_norms = operation_order.count('norm') - for _ in range(num_norms): - self.norms.append(build_norm_layer(norm_cfg, self.embed_dims)[1]) - - def forward(self, - query, - key=None, - value=None, - query_pos=None, - key_pos=None, - attn_masks=None, - query_key_padding_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `TransformerDecoderLayer`. - - **kwargs contains some specific arguments of attentions. - - Args: - query (Tensor): The input query with shape - [num_queries, bs, embed_dims] if - self.batch_first is False, else - [bs, num_queries embed_dims]. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims] if self.batch_first is False, else - [bs, num_keys, embed_dims] . - value (Tensor): The value tensor with same shape as `key`. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. - Default: None. - attn_masks (List[Tensor] | None): 2D Tensor used in - calculation of corresponding attention. The length of - it should equal to the number of `attention` in - `operation_order`. Default: None. - query_key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_queries]. Only used in `self_attn` layer. - Defaults to None. - key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_keys]. Default: None. 
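-
-        Example (a minimal sketch, not from the original docstring)::
-
-            >>> layer = BaseTransformerLayer(
-            ...     attn_cfgs=dict(type='MultiheadAttention',
-            ...                    embed_dims=256, num_heads=8),
-            ...     operation_order=('self_attn', 'norm', 'ffn', 'norm'))
-            >>> query = torch.randn(100, 2, 256)  # (num_queries, bs, embed_dims)
-            >>> out = layer(query)  # shape preserved: (100, 2, 256)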
- - Returns: - Tensor: forwarded results with shape [num_queries, bs, embed_dims]. - """ - - norm_index = 0 - attn_index = 0 - ffn_index = 0 - identity = query - if attn_masks is None: - attn_masks = [None for _ in range(self.num_attn)] - elif isinstance(attn_masks, torch.Tensor): - attn_masks = [ - copy.deepcopy(attn_masks) for _ in range(self.num_attn) - ] - warnings.warn(f'Use same attn_mask in all attentions in ' - f'{self.__class__.__name__} ') - else: - assert len(attn_masks) == self.num_attn, f'The length of ' \ - f'attn_masks {len(attn_masks)} must be equal ' \ - f'to the number of attention in ' \ - f'operation_order {self.num_attn}' - - for layer in self.operation_order: - if layer == 'self_attn': - temp_key = temp_value = query - query = self.attentions[attn_index]( - query, - temp_key, - temp_value, - identity if self.pre_norm else None, - query_pos=query_pos, - key_pos=query_pos, - attn_mask=attn_masks[attn_index], - key_padding_mask=query_key_padding_mask, - **kwargs) - attn_index += 1 - identity = query - - elif layer == 'norm': - query = self.norms[norm_index](query) - norm_index += 1 - - elif layer == 'cross_attn': - query = self.attentions[attn_index]( - query, - key, - value, - identity if self.pre_norm else None, - query_pos=query_pos, - key_pos=key_pos, - attn_mask=attn_masks[attn_index], - key_padding_mask=key_padding_mask, - **kwargs) - attn_index += 1 - identity = query - - elif layer == 'ffn': - query = self.ffns[ffn_index]( - query, identity if self.pre_norm else None) - ffn_index += 1 - - return query - - -@TRANSFORMER_LAYER_SEQUENCE.register_module() -class TransformerLayerSequence(BaseModule): - """Base class for TransformerEncoder and TransformerDecoder in vision - transformer. - - As base-class of Encoder and Decoder in vision transformer. - Support customization such as specifying different kind - of `transformer_layer` in `transformer_coder`. - - Args: - transformerlayer (list[obj:`mmcv.ConfigDict`] | - obj:`mmcv.ConfigDict`): Config of transformerlayer - in TransformerCoder. If it is obj:`mmcv.ConfigDict`, - it would be repeated `num_layer` times to a - list[`mmcv.ConfigDict`]. Default: None. - num_layers (int): The number of `TransformerLayer`. Default: None. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - def __init__(self, transformerlayers=None, num_layers=None, init_cfg=None): - super(TransformerLayerSequence, self).__init__(init_cfg) - if isinstance(transformerlayers, dict): - transformerlayers = [ - copy.deepcopy(transformerlayers) for _ in range(num_layers) - ] - else: - assert isinstance(transformerlayers, list) and \ - len(transformerlayers) == num_layers - self.num_layers = num_layers - self.layers = ModuleList() - for i in range(num_layers): - self.layers.append(build_transformer_layer(transformerlayers[i])) - self.embed_dims = self.layers[0].embed_dims - self.pre_norm = self.layers[0].pre_norm - - def forward(self, - query, - key, - value, - query_pos=None, - key_pos=None, - attn_masks=None, - query_key_padding_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `TransformerCoder`. - - Args: - query (Tensor): Input query with shape - `(num_queries, bs, embed_dims)`. - key (Tensor): The key tensor with shape - `(num_keys, bs, embed_dims)`. - value (Tensor): The value tensor with shape - `(num_keys, bs, embed_dims)`. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. - Default: None. 
- attn_masks (List[Tensor], optional): Each element is 2D Tensor - which is used in calculation of corresponding attention in - operation_order. Default: None. - query_key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_queries]. Only used in self-attention - Default: None. - key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_keys]. Default: None. - - Returns: - Tensor: results with shape [num_queries, bs, embed_dims]. - """ - for layer in self.layers: - query = layer( - query, - key, - value, - query_pos=query_pos, - key_pos=key_pos, - attn_masks=attn_masks, - query_key_padding_mask=query_key_padding_mask, - key_padding_mask=key_padding_mask, - **kwargs) - return query diff --git a/spaces/knkarthick/Meeting-Demo/README.md b/spaces/knkarthick/Meeting-Demo/README.md deleted file mode 100644 index 8d3b3ea4507203eca2d13bad3ae701bfe75e3121..0000000000000000000000000000000000000000 --- a/spaces/knkarthick/Meeting-Demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Meeting Demo -emoji: 🏃 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/knotdgaf/gradiotest/theme_dropdown.py b/spaces/knotdgaf/gradiotest/theme_dropdown.py deleted file mode 100644 index 6235388fd00549553df44028f3ccf03e946994ea..0000000000000000000000000000000000000000 --- a/spaces/knotdgaf/gradiotest/theme_dropdown.py +++ /dev/null @@ -1,57 +0,0 @@ -import os -import pathlib - -from gradio.themes.utils import ThemeAsset - - -def create_theme_dropdown(): - import gradio as gr - - asset_path = pathlib.Path(__file__).parent / "themes" - themes = [] - for theme_asset in os.listdir(str(asset_path)): - themes.append( - (ThemeAsset(theme_asset), gr.Theme.load(str(asset_path / theme_asset))) - ) - - def make_else_if(theme_asset): - return f""" - else if (theme == '{str(theme_asset[0].version)}') {{ - var theme_css = `{theme_asset[1]._get_theme_css()}` - }}""" - - head, tail = themes[0], themes[1:] - if_statement = f""" - if (theme == "{str(head[0].version)}") {{ - var theme_css = `{head[1]._get_theme_css()}` - }} {" ".join(make_else_if(t) for t in tail)} - """ - - latest_to_oldest = sorted([t[0] for t in themes], key=lambda asset: asset.version)[ - ::-1 - ] - latest_to_oldest = [str(t.version) for t in latest_to_oldest] - - component = gr.Dropdown( - choices=latest_to_oldest, - value=latest_to_oldest[0], - render=False, - label="Select Version", - ).style(container=False) - - return ( - component, - f""" - (theme) => {{ - if (!document.querySelector('.theme-css')) {{ - var theme_elem = document.createElement('style'); - theme_elem.classList.add('theme-css'); - document.head.appendChild(theme_elem); - }} else {{ - var theme_elem = document.querySelector('.theme-css'); - }} - {if_statement} - theme_elem.innerHTML = theme_css; - }} - """, - ) diff --git a/spaces/koushik-org/Trading_QA_Bot/README.md b/spaces/koushik-org/Trading_QA_Bot/README.md deleted file mode 100644 index eb179dbd1a4a64a402689c1d4154a0d674a93ac2..0000000000000000000000000000000000000000 --- a/spaces/koushik-org/Trading_QA_Bot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Trading QA Bot -emoji: 💻 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false -python_version: 3.9.13 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff 
--git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/utils.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/utils.py deleted file mode 100644 index 51f8cc8d19fcd7e78c20eb95c45392d25b2649e5..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/utils.py +++ /dev/null @@ -1,556 +0,0 @@ -import logging -import math -import os -import pathlib -import re -import sys -from contextlib import contextmanager -from functools import partial -from hashlib import md5 -from importlib.metadata import version -from urllib.parse import urlsplit - -DEFAULT_BLOCK_SIZE = 5 * 2**20 - - -def infer_storage_options(urlpath, inherit_storage_options=None): - """Infer storage options from URL path and merge it with existing storage - options. - - Parameters - ---------- - urlpath: str or unicode - Either local absolute file path or URL (hdfs://namenode:8020/file.csv) - inherit_storage_options: dict (optional) - Its contents will get merged with the inferred information from the - given path - - Returns - ------- - Storage options dict. - - Examples - -------- - >>> infer_storage_options('/mnt/datasets/test.csv') # doctest: +SKIP - {"protocol": "file", "path", "/mnt/datasets/test.csv"} - >>> infer_storage_options( - ... 'hdfs://username:pwd@node:123/mnt/datasets/test.csv?q=1', - ... inherit_storage_options={'extra': 'value'}, - ... ) # doctest: +SKIP - {"protocol": "hdfs", "username": "username", "password": "pwd", - "host": "node", "port": 123, "path": "/mnt/datasets/test.csv", - "url_query": "q=1", "extra": "value"} - """ - # Handle Windows paths including disk name in this special case - if ( - re.match(r"^[a-zA-Z]:[\\/]", urlpath) - or re.match(r"^[a-zA-Z0-9]+://", urlpath) is None - ): - return {"protocol": "file", "path": urlpath} - - parsed_path = urlsplit(urlpath) - protocol = parsed_path.scheme or "file" - if parsed_path.fragment: - path = "#".join([parsed_path.path, parsed_path.fragment]) - else: - path = parsed_path.path - if protocol == "file": - # Special case parsing file protocol URL on Windows according to: - # https://msdn.microsoft.com/en-us/library/jj710207.aspx - windows_path = re.match(r"^/([a-zA-Z])[:|]([\\/].*)$", path) - if windows_path: - path = "%s:%s" % windows_path.groups() - - if protocol in ["http", "https"]: - # for HTTP, we don't want to parse, as requests will anyway - return {"protocol": protocol, "path": urlpath} - - options = {"protocol": protocol, "path": path} - - if parsed_path.netloc: - # Parse `hostname` from netloc manually because `parsed_path.hostname` - # lowercases the hostname which is not always desirable (e.g. 
in S3): - # https://github.com/dask/dask/issues/1417 - options["host"] = parsed_path.netloc.rsplit("@", 1)[-1].rsplit(":", 1)[0] - - if protocol in ("s3", "s3a", "gcs", "gs"): - options["path"] = options["host"] + options["path"] - else: - options["host"] = options["host"] - if parsed_path.port: - options["port"] = parsed_path.port - if parsed_path.username: - options["username"] = parsed_path.username - if parsed_path.password: - options["password"] = parsed_path.password - - if parsed_path.query: - options["url_query"] = parsed_path.query - if parsed_path.fragment: - options["url_fragment"] = parsed_path.fragment - - if inherit_storage_options: - update_storage_options(options, inherit_storage_options) - - return options - - -def update_storage_options(options, inherited=None): - if not inherited: - inherited = {} - collisions = set(options) & set(inherited) - if collisions: - for collision in collisions: - if options.get(collision) != inherited.get(collision): - raise KeyError( - "Collision between inferred and specified storage " - "option:\n%s" % collision - ) - options.update(inherited) - - -# Compression extensions registered via fsspec.compression.register_compression -compressions = {} - - -def infer_compression(filename): - """Infer compression, if available, from filename. - - Infer a named compression type, if registered and available, from filename - extension. This includes builtin (gz, bz2, zip) compressions, as well as - optional compressions. See fsspec.compression.register_compression. - """ - extension = os.path.splitext(filename)[-1].strip(".").lower() - if extension in compressions: - return compressions[extension] - - -def build_name_function(max_int): - """Returns a function that receives a single integer - and returns it as a string padded by enough zero characters - to align with maximum possible integer - - >>> name_f = build_name_function(57) - - >>> name_f(7) - '07' - >>> name_f(31) - '31' - >>> build_name_function(1000)(42) - '0042' - >>> build_name_function(999)(42) - '042' - >>> build_name_function(0)(0) - '0' - """ - # handle corner cases max_int is 0 or exact power of 10 - max_int += 1e-8 - - pad_length = int(math.ceil(math.log10(max_int))) - - def name_function(i): - return str(i).zfill(pad_length) - - return name_function - - -def seek_delimiter(file, delimiter, blocksize): - r"""Seek current file to file start, file end, or byte after delimiter seq. - - Seeks file to next chunk delimiter, where chunks are defined on file start, - a delimiting sequence, and file end. Use file.tell() to see location afterwards. - Note that file start is a valid split, so must be at offset > 0 to seek for - delimiter. - - Parameters - ---------- - file: a file - delimiter: bytes - a delimiter like ``b'\n'`` or message sentinel, matching file .read() type - blocksize: int - Number of bytes to read from the file at once. - - - Returns - ------- - Returns True if a delimiter was found, False if at file start or end. - - """ - - if file.tell() == 0: - # beginning-of-file, return without seek - return False - - # Interface is for binary IO, with delimiter as bytes, but initialize last - # with result of file.read to preserve compatibility with text IO. 
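-    # Carry the trailing len(delimiter) bytes of each block into the next
-    # iteration, so a delimiter that straddles two consecutive reads is
-    # still found once the blocks are joined below.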
- last = None - while True: - current = file.read(blocksize) - if not current: - # end-of-file without delimiter - return False - full = last + current if last else current - try: - if delimiter in full: - i = full.index(delimiter) - file.seek(file.tell() - (len(full) - i) + len(delimiter)) - return True - elif len(current) < blocksize: - # end-of-file without delimiter - return False - except (OSError, ValueError): - pass - last = full[-len(delimiter) :] - - -def read_block(f, offset, length, delimiter=None, split_before=False): - """Read a block of bytes from a file - - Parameters - ---------- - f: File - Open file - offset: int - Byte offset to start read - length: int - Number of bytes to read, read through end of file if None - delimiter: bytes (optional) - Ensure reading starts and stops at delimiter bytestring - split_before: bool (optional) - Start/stop read *before* delimiter bytestring. - - - If using the ``delimiter=`` keyword argument we ensure that the read - starts and stops at delimiter boundaries that follow the locations - ``offset`` and ``offset + length``. If ``offset`` is zero then we - start at zero, regardless of delimiter. The bytestring returned WILL - include the terminating delimiter string. - - Examples - -------- - - >>> from io import BytesIO # doctest: +SKIP - >>> f = BytesIO(b'Alice, 100\\nBob, 200\\nCharlie, 300') # doctest: +SKIP - >>> read_block(f, 0, 13) # doctest: +SKIP - b'Alice, 100\\nBo' - - >>> read_block(f, 0, 13, delimiter=b'\\n') # doctest: +SKIP - b'Alice, 100\\nBob, 200\\n' - - >>> read_block(f, 10, 10, delimiter=b'\\n') # doctest: +SKIP - b'Bob, 200\\nCharlie, 300' - """ - if delimiter: - f.seek(offset) - found_start_delim = seek_delimiter(f, delimiter, 2**16) - if length is None: - return f.read() - start = f.tell() - length -= start - offset - - f.seek(start + length) - found_end_delim = seek_delimiter(f, delimiter, 2**16) - end = f.tell() - - # Adjust split location to before delimiter iff seek found the - # delimiter sequence, not start or end of file. - if found_start_delim and split_before: - start -= len(delimiter) - - if found_end_delim and split_before: - end -= len(delimiter) - - offset = start - length = end - start - - f.seek(offset) - b = f.read(length) - return b - - -def tokenize(*args, **kwargs): - """Deterministic token - - (modified from dask.base) - - >>> tokenize([1, 2, '3']) - '9d71491b50023b06fc76928e6eddb952' - - >>> tokenize('Hello') == tokenize('Hello') - True - """ - if kwargs: - args += (kwargs,) - try: - return md5(str(args).encode()).hexdigest() - except ValueError: - # FIPS systems: https://github.com/fsspec/filesystem_spec/issues/380 - return md5(str(args).encode(), usedforsecurity=False).hexdigest() - - -def stringify_path(filepath): - """Attempt to convert a path-like object to a string. - - Parameters - ---------- - filepath: object to be converted - - Returns - ------- - filepath_str: maybe a string version of the object - - Notes - ----- - Objects supporting the fspath protocol are coerced according to its - __fspath__ method. - - For backwards compatibility with older Python version, pathlib.Path - objects are specially coerced. - - Any other object is passed through unchanged, which includes bytes, - strings, buffers, or anything else that's not even path-like. 
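-
-    Example (illustrative)::
-
-        >>> import pathlib
-        >>> stringify_path(pathlib.Path("/tmp/data.csv"))
-        '/tmp/data.csv'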
- """ - if isinstance(filepath, str): - return filepath - elif hasattr(filepath, "__fspath__"): - return filepath.__fspath__() - elif isinstance(filepath, pathlib.Path): - return str(filepath) - elif hasattr(filepath, "path"): - return filepath.path - else: - return filepath - - -def make_instance(cls, args, kwargs): - inst = cls(*args, **kwargs) - inst._determine_worker() - return inst - - -def common_prefix(paths): - """For a list of paths, find the shortest prefix common to all""" - parts = [p.split("/") for p in paths] - lmax = min(len(p) for p in parts) - end = 0 - for i in range(lmax): - end = all(p[i] == parts[0][i] for p in parts) - if not end: - break - i += end - return "/".join(parts[0][:i]) - - -def other_paths(paths, path2, is_dir=None, exists=False, flatten=False): - """In bulk file operations, construct a new file tree from a list of files - - Parameters - ---------- - paths: list of str - The input file tree - path2: str or list of str - Root to construct the new list in. If this is already a list of str, we just - assert it has the right number of elements. - is_dir: bool (optional) - For the special case where the input in one element, whether to regard the value - as the target path, or as a directory to put a file path within. If None, a - directory is inferred if the path ends in '/' - exists: bool (optional) - For a str destination, it is already exists (and is a dir), files should - end up inside. - flatten: bool (optional) - Whether to flatten the input directory tree structure so that the output files - are in the same directory. - - Returns - ------- - list of str - """ - - if isinstance(path2, str): - is_dir = is_dir or path2.endswith("/") - path2 = path2.rstrip("/") - - if flatten: - path2 = ["/".join((path2, p.split("/")[-1])) for p in paths] - else: - cp = common_prefix(paths) - if exists: - cp = cp.rsplit("/", 1)[0] - if not cp and all(not s.startswith("/") for s in paths): - path2 = ["/".join([path2, p]) for p in paths] - else: - path2 = [p.replace(cp, path2, 1) for p in paths] - else: - assert len(paths) == len(path2) - return path2 - - -def is_exception(obj): - return isinstance(obj, BaseException) - - -def isfilelike(f): - for attr in ["read", "close", "tell"]: - if not hasattr(f, attr): - return False - return True - - -def get_protocol(url): - parts = re.split(r"(\:\:|\://)", url, 1) - if len(parts) > 1: - return parts[0] - return "file" - - -def can_be_local(path): - """Can the given URL be used with open_local?""" - from fsspec import get_filesystem_class - - try: - return getattr(get_filesystem_class(get_protocol(path)), "local_file", False) - except (ValueError, ImportError): - # not in registry or import failed - return False - - -def get_package_version_without_import(name): - """For given package name, try to find the version without importing it - - Import and package.__version__ is still the backup here, so an import - *might* happen. - - Returns either the version string, or None if the package - or the version was not readily found. 
- """ - if name in sys.modules: - mod = sys.modules[name] - if hasattr(mod, "__version__"): - return mod.__version__ - try: - return version(name) - except: # noqa: E722 - pass - try: - import importlib - - mod = importlib.import_module(name) - return mod.__version__ - except (ImportError, AttributeError): - return None - - -def setup_logging(logger=None, logger_name=None, level="DEBUG", clear=True): - if logger is None and logger_name is None: - raise ValueError("Provide either logger object or logger name") - logger = logger or logging.getLogger(logger_name) - handle = logging.StreamHandler() - formatter = logging.Formatter( - "%(asctime)s - %(name)s - %(levelname)s - %(funcName)s -- %(message)s" - ) - handle.setFormatter(formatter) - if clear: - logger.handlers.clear() - logger.addHandler(handle) - logger.setLevel(level) - return logger - - -def _unstrip_protocol(name, fs): - return fs.unstrip_protocol(name) - - -def mirror_from(origin_name, methods): - """Mirror attributes and methods from the given - origin_name attribute of the instance to the - decorated class""" - - def origin_getter(method, self): - origin = getattr(self, origin_name) - return getattr(origin, method) - - def wrapper(cls): - for method in methods: - wrapped_method = partial(origin_getter, method) - setattr(cls, method, property(wrapped_method)) - return cls - - return wrapper - - -@contextmanager -def nullcontext(obj): - yield obj - - -def merge_offset_ranges(paths, starts, ends, max_gap=0, max_block=None, sort=True): - """Merge adjacent byte-offset ranges when the inter-range - gap is <= `max_gap`, and when the merged byte range does not - exceed `max_block` (if specified). By default, this function - will re-order the input paths and byte ranges to ensure sorted - order. If the user can guarantee that the inputs are already - sorted, passing `sort=False` will skip the re-ordering. - """ - # Check input - if not isinstance(paths, list): - raise TypeError - if not isinstance(starts, list): - starts = [starts] * len(paths) - if not isinstance(ends, list): - ends = [starts] * len(paths) - if len(starts) != len(paths) or len(ends) != len(paths): - raise ValueError - - # Early Return - if len(starts) <= 1: - return paths, starts, ends - - starts = [s or 0 for s in starts] - # Sort by paths and then ranges if `sort=True` - if sort: - paths, starts, ends = [ - list(v) - for v in zip( - *sorted( - zip(paths, starts, ends), - ) - ) - ] - - if paths: - # Loop through the coupled `paths`, `starts`, and - # `ends`, and merge adjacent blocks when appropriate - new_paths = paths[:1] - new_starts = starts[:1] - new_ends = ends[:1] - for i in range(1, len(paths)): - if paths[i] == paths[i - 1] and new_ends[-1] is None: - continue - elif ( - paths[i] != paths[i - 1] - or ((starts[i] - new_ends[-1]) > max_gap) - or ((max_block is not None and (ends[i] - new_starts[-1]) > max_block)) - ): - # Cannot merge with previous block. - # Add new `paths`, `starts`, and `ends` elements - new_paths.append(paths[i]) - new_starts.append(starts[i]) - new_ends.append(ends[i]) - else: - # Merge with previous block by updating the - # last element of `ends` - new_ends[-1] = ends[i] - return new_paths, new_starts, new_ends - - # `paths` is empty. 
Just return input lists - return paths, starts, ends - - -def file_size(filelike): - """Find length of any open read-mode file-like""" - pos = filelike.tell() - try: - return filelike.seek(0, 2) - finally: - filelike.seek(pos) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-fddd01ad.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-fddd01ad.js deleted file mode 100644 index f04f5dcd26f51e768383c90392a594ca0061e60c..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-fddd01ad.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as z,i as F,s as L,e as y,H as G,G as q,C as k,m as S,g as w,p as B,t as O,n as J,q as C,N as R,r as V,a8 as W,I as P,K as Q,M as j,E as v,J as D,a0 as X,x as Y,$ as Z,b as H,a as I,h as p,j as x,k as K,y as A}from"./index-7c0e54a6.js";/* empty css */import{g as $,B as ee}from"./Button-661a0701.js";import{B as le}from"./BlockTitle-900cfd93.js";/* empty css */import"./Info-3b2d34d7.js";function M(l,e,t){const s=l.slice();return s[15]=e[t],s}function te(l){let e;return{c(){e=P(l[3])},m(t,s){w(t,e,s)},p(t,s){s&8&&Q(e,t[3])},d(t){t&&C(e)}}}function U(l){let e,t,s,a,f,u=l[15]+"",n,h,d,b;function m(){return l[12](l[15])}function i(...c){return l[13](l[15],...c)}return{c(){e=q("label"),t=q("input"),a=G(),f=q("span"),n=P(u),h=G(),t.disabled=l[2],t.checked=s=l[0].includes(l[15]),k(t,"type","checkbox"),k(t,"name","test"),k(t,"class","svelte-1qxcj04"),k(f,"class","ml-2 svelte-1qxcj04"),k(e,"style",l[6]),k(e,"class","svelte-1qxcj04"),j(e,"disabled",l[2]),j(e,"selected",l[0].includes(l[15]))},m(c,r){w(c,e,r),v(e,t),v(e,a),v(e,f),v(f,n),v(e,h),d||(b=[D(t,"change",m),D(t,"input",i)],d=!0)},p(c,r){l=c,r&4&&(t.disabled=l[2]),r&3&&s!==(s=l[0].includes(l[15]))&&(t.checked=s),r&2&&u!==(u=l[15]+"")&&Q(n,u),r&64&&k(e,"style",l[6]),r&4&&j(e,"disabled",l[2]),r&3&&j(e,"selected",l[0].includes(l[15]))},d(c){c&&C(e),d=!1,X(b)}}}function se(l){let e,t,s,a;e=new le({props:{show_label:l[5],info:l[4],$$slots:{default:[te]},$$scope:{ctx:l}}});let f=l[1],u=[];for(let n=0;n{a.includes(_)?a.splice(a.indexOf(_),1):a.push(_),t(0,a)};function g(){c("change",a),u||c("input")}W(()=>{t(9,u=!1)});const N=_=>r(_),T=(_,E)=>c("select",{index:h.indexOf(_),value:_,selected:E.currentTarget.checked});return l.$$set=_=>{"value"in _&&t(0,a=_.value),"value_is_output"in _&&t(9,u=_.value_is_output),"style"in _&&t(10,n=_.style),"choices"in _&&t(1,h=_.choices),"disabled"in _&&t(2,d=_.disabled),"label"in _&&t(3,b=_.label),"info"in _&&t(4,m=_.info),"show_label"in _&&t(5,i=_.show_label)},l.$$.update=()=>{l.$$.dirty&2049&&JSON.stringify(a)!==JSON.stringify(f)&&(t(11,f=a.slice()),g()),l.$$.dirty&1024&&t(6,{item_container:s}=$(n,["item_container"]),s)},[a,h,d,b,m,i,s,c,r,u,n,f,N,T]}class ie extends z{constructor(e){super(),F(this,e,ne,se,L,{value:0,value_is_output:9,style:10,choices:1,disabled:2,label:3,info:4,show_label:5})}}function ae(l){let e,t,s,a,f,u;const n=[l[11]];let h={};for(let i=0;iI(s,"value",d)),H.push(()=>I(s,"value_is_output",b)),s.$on("select",l[14]),s.$on("change",l[15]),s.$on("input",l[16]),{c(){y(e.$$.fragment),t=G(),y(s.$$.fragment)},m(i,c){S(e,i,c),w(i,t,c),S(s,i,c),u=!0},p(i,c){const r=c&2048?p(n,[x(i[11])]):{};e.$set(r);const 
g={};c&32&&(g.choices=i[5]),c&256&&(g.label=i[8]),c&512&&(g.info=i[9]),c&64&&(g.style=i[6]),c&1024&&(g.show_label=i[10]),c&128&&(g.disabled=i[7]==="static"),!a&&c&1&&(a=!0,g.value=i[0],K(()=>a=!1)),!f&&c&2&&(f=!0,g.value_is_output=i[1],K(()=>f=!1)),s.$set(g)},i(i){u||(B(e.$$.fragment,i),B(s.$$.fragment,i),u=!0)},o(i){O(e.$$.fragment,i),O(s.$$.fragment,i),u=!1},d(i){J(e,i),i&&C(t),J(s,i)}}}function ue(l){let e,t;return e=new ee({props:{visible:l[4],elem_id:l[2],elem_classes:l[3],type:"fieldset",disable:typeof l[6].container=="boolean"&&!l[6].container,$$slots:{default:[ae]},$$scope:{ctx:l}}}),{c(){y(e.$$.fragment)},m(s,a){S(e,s,a),t=!0},p(s,[a]){const f={};a&16&&(f.visible=s[4]),a&4&&(f.elem_id=s[2]),a&8&&(f.elem_classes=s[3]),a&64&&(f.disable=typeof s[6].container=="boolean"&&!s[6].container),a&135139&&(f.$$scope={dirty:a,ctx:s}),e.$set(f)},i(s){t||(B(e.$$.fragment,s),t=!0)},o(s){O(e.$$.fragment,s),t=!1},d(s){J(e,s)}}}function oe(l,e,t){let{elem_id:s=""}=e,{elem_classes:a=[]}=e,{visible:f=!0}=e,{value:u=[]}=e,{value_is_output:n=!1}=e,{choices:h}=e,{style:d={}}=e,{mode:b}=e,{label:m="Checkbox Group"}=e,{info:i=void 0}=e,{show_label:c}=e,{loading_status:r}=e;function g(o){u=o,t(0,u)}function N(o){n=o,t(1,n)}function T(o){A.call(this,l,o)}function _(o){A.call(this,l,o)}function E(o){A.call(this,l,o)}return l.$$set=o=>{"elem_id"in o&&t(2,s=o.elem_id),"elem_classes"in o&&t(3,a=o.elem_classes),"visible"in o&&t(4,f=o.visible),"value"in o&&t(0,u=o.value),"value_is_output"in o&&t(1,n=o.value_is_output),"choices"in o&&t(5,h=o.choices),"style"in o&&t(6,d=o.style),"mode"in o&&t(7,b=o.mode),"label"in o&&t(8,m=o.label),"info"in o&&t(9,i=o.info),"show_label"in o&&t(10,c=o.show_label),"loading_status"in o&&t(11,r=o.loading_status)},[u,n,s,a,f,h,d,b,m,i,c,r,g,N,T,_,E]}class fe extends z{constructor(e){super(),F(this,e,oe,ue,L,{elem_id:2,elem_classes:3,visible:4,value:0,value_is_output:1,choices:5,style:6,mode:7,label:8,info:9,show_label:10,loading_status:11})}}const me=fe,ge=["static","dynamic"],ke=l=>({type:{payload:"Array"},description:{payload:"list of selected choices"},example_data:l.choices.length?[l.choices[0]]:[]});export{me as Component,ke as document,ge as modes}; -//# sourceMappingURL=index-fddd01ad.js.map diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/r-3ca97919.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/r-3ca97919.js deleted file mode 100644 index e460c951763f569906751f34aed4265f5d719d36..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/r-3ca97919.js +++ /dev/null @@ -1,2 +0,0 @@ -function f(e){for(var n={},r=0;r=!&|~$:]/,t;function p(e,n){t=null;var r=e.next();if(r=="#")return e.skipToEnd(),"comment";if(r=="0"&&e.eat("x"))return e.eatWhile(/[\da-f]/i),"number";if(r=="."&&e.eat(/\d/))return e.match(/\d*(?:e[+\-]?\d+)?/),"number";if(/\d/.test(r))return e.match(/\d*(?:\.\d+)?(?:e[+\-]\d+)?L?/),"number";if(r=="'"||r=='"')return n.tokenize=E(r),"string";if(r=="`")return e.match(/[^`]+`/),"string.special";if(r=="."&&e.match(/.(?:[.]|\d+)/))return"keyword";if(/[a-zA-Z\.]/.test(r)){e.eatWhile(/[\w\.]/);var i=e.current();return h.propertyIsEnumerable(i)?"atom":N.propertyIsEnumerable(i)?(A.propertyIsEnumerable(i)&&!e.match(/\s*if(\s+|$)/,!1)&&(t="block"),"keyword"):m.propertyIsEnumerable(i)?"builtin":"variable"}else return 
r=="%"?(e.skipTo("%")&&e.next(),"variableName.special"):r=="<"&&e.eat("-")||r=="<"&&e.match("<-")||r=="-"&&e.match(/>>?/)||r=="="&&n.ctx.argList?"operator":k.test(r)?(r=="$"||e.eatWhile(k),"operator"):/[\(\){}\[\];]/.test(r)?(t=r,r==";"?"punctuation":null):null}function E(e){return function(n,r){if(n.eat("\\")){var i=n.next();return i=="x"?n.match(/^[a-f0-9]{2}/i):(i=="u"||i=="U")&&n.eat("{")&&n.skipTo("}")?n.next():i=="u"?n.match(/^[a-f0-9]{4}/i):i=="U"?n.match(/^[a-f0-9]{8}/i):/[0-7]/.test(i)&&n.match(/^[0-7]{1,2}/),"string.special"}else{for(var l;(l=n.next())!=null;){if(l==e){r.tokenize=p;break}if(l=="\\"){n.backUp(1);break}}return"string"}}}var v=1,u=2,c=4;function o(e,n,r){e.ctx={type:n,indent:e.indent,flags:0,column:r.column(),prev:e.ctx}}function x(e,n){var r=e.ctx;e.ctx={type:r.type,indent:r.indent,flags:r.flags|n,column:r.column,prev:r.prev}}function a(e){e.indent=e.ctx.indent,e.ctx=e.ctx.prev}const I={name:"r",startState:function(e){return{tokenize:p,ctx:{type:"top",indent:-e,flags:u},indent:0,afterIdent:!1}},token:function(e,n){if(e.sol()&&(n.ctx.flags&3||(n.ctx.flags|=u),n.ctx.flags&c&&a(n),n.indent=e.indentation()),e.eatSpace())return null;var r=n.tokenize(e,n);return r!="comment"&&!(n.ctx.flags&u)&&x(n,v),(t==";"||t=="{"||t=="}")&&n.ctx.type=="block"&&a(n),t=="{"?o(n,"}",e):t=="("?(o(n,")",e),n.afterIdent&&(n.ctx.argList=!0)):t=="["?o(n,"]",e):t=="block"?o(n,"block",e):t==n.ctx.type?a(n):n.ctx.type=="block"&&r!="comment"&&x(n,c),n.afterIdent=r=="variable"||r=="keyword",r},indent:function(e,n,r){if(e.tokenize!=p)return 0;var i=n&&n.charAt(0),l=e.ctx,d=i==l.type;return l.flags&c&&(l=l.prev),l.type=="block"?l.indent+(i=="{"?0:r.unit):l.flags&v?l.column+(d?0:1):l.indent+(d?0:r.unit)},languageData:{wordChars:".",commentTokens:{line:"#"},autocomplete:b.concat(g,s)}};export{I as r}; -//# sourceMappingURL=r-3ca97919.js.map diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-3fb2ee4c.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-3fb2ee4c.js deleted file mode 100644 index 15a30b37186d2c1e617b67110d323c26a665214e..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-3fb2ee4c.js +++ /dev/null @@ -1,3 +0,0 @@ -import{S as R,i as V,s as W,B as S,C as u,g as L,E as w,F as I,q as E,G as O,H as X,I as K,D as F,aa as ye,ai as we,K as P,f as G,N as Y,r as de,u as je,aj as Z,a6 as Le,a1 as Ee,e as N,m as H,p as z,t as B,n as q,x as Ae,$ as Ce,h as Me,j as Te,l as pe,o as ve}from"./index-8c3da1d9.js";import{U as ze}from"./Upload-5d35e059.js";import{M as Be}from"./ModifyUpload-00319b5e.js";import{B as Se}from"./Button-62634b34.js";import{B as Ue}from"./BlockLabel-98ef75ee.js";import{E as Fe}from"./Empty-5d52e655.js";import{g as Ne}from"./color-75f3ed8f.js";import{a as He}from"./csv-b0b7514a.js";import{Z as J,_ as Q,l as $}from"./linear-58a44b5e.js";import{U as qe}from"./UploadText-4b161758.js";import"./Blocks-6ad6f005.js";/* empty css */import"./ModifyUpload.svelte_svelte_type_style_lang-ba6baa96.js";import"./dsv-576afacd.js";function De(l){let e,n,t;return{c(){e=S("svg"),n=S("path"),t=S("path"),u(n,"d","M28.828 3.172a4.094 4.094 0 0 0-5.656 0L4.05 22.292A6.954 6.954 0 0 0 2 27.242V30h2.756a6.952 6.952 0 0 0 4.95-2.05L28.828 8.829a3.999 3.999 0 0 0 0-5.657zM10.91 18.26l2.829 2.829l-2.122 2.121l-2.828-2.828zm-2.619 8.276A4.966 4.966 0 0 1 4.756 28H4v-.759a4.967 
4.967 0 0 1 1.464-3.535l1.91-1.91l2.829 2.828zM27.415 7.414l-12.261 12.26l-2.829-2.828l12.262-12.26a2.047 2.047 0 0 1 2.828 0a2 2 0 0 1 0 2.828z"),u(n,"fill","currentColor"),u(t,"d","M6.5 15a3.5 3.5 0 0 1-2.475-5.974l3.5-3.5a1.502 1.502 0 0 0 0-2.121a1.537 1.537 0 0 0-2.121 0L3.415 5.394L2 3.98l1.99-1.988a3.585 3.585 0 0 1 4.95 0a3.504 3.504 0 0 1 0 4.949L5.439 10.44a1.502 1.502 0 0 0 0 2.121a1.537 1.537 0 0 0 2.122 0l4.024-4.024L13 9.95l-4.025 4.024A3.475 3.475 0 0 1 6.5 15z"),u(t,"fill","currentColor"),u(e,"width","1em"),u(e,"height","1em"),u(e,"viewBox","0 0 32 32")},m(o,a){L(o,e,a),w(e,n),w(e,t)},p:I,i:I,o:I,d(o){o&&E(e)}}}let be=class extends R{constructor(e){super(),V(this,e,null,De,W,{})}};function x(l){let e;return Array.isArray(l)?e=l.reduce((n,{values:t})=>[...n,...t.map(({y:o})=>o)],[]):e=l.values,[Math.min(...e),Math.max(...e)]}function ee(l,e,n){const t=Object.entries(l[0]).reduce((o,a,s)=>(!e&&s===0||e&&a[0]===e?o.x.name=a[0]:(!n||n&&n.includes(a[0]))&&o.y.push({name:a[0],values:[]}),o),{x:{name:"",values:[]},y:[]});for(let o=0;ol[6].call(e))},m(s,_){L(s,e,_),w(e,n),w(e,t),w(e,o),a=we(e,l[6].bind(e))},p(s,[_]){_&8&&F(n,"background",s[3]),_&1&&P(o,s[0]),_&36&&F(e,"top",s[2]-s[5]/2+"px"),_&18&&F(e,"left",s[1]-s[4]-7+"px")},i:I,o:I,d(s){s&&E(e),a()}}}function Oe(l,e,n){let{text:t}=e,{x:o}=e,{y:a}=e,{color:s}=e,_,r;function p(){_=this.offsetWidth,r=this.offsetHeight,n(4,_),n(5,r)}return l.$$set=d=>{"text"in d&&n(0,t=d.text),"x"in d&&n(1,o=d.x),"y"in d&&n(2,a=d.y),"color"in d&&n(3,s=d.color)},[t,o,a,s,_,r,p]}class Xe extends R{constructor(e){super(),V(this,e,Oe,Ie,W,{text:0,x:1,y:2,color:3})}}function Ye(l,{color:e,text:n}){let t;function o(r){return t=new Xe({props:{text:n,x:r.pageX,y:r.pageY,color:e},target:document.body}),r}function a(r){t.$set({x:r.pageX,y:r.pageY})}function s(){t.$destroy()}const _=l;return _.addEventListener("mouseover",o),_.addEventListener("mouseleave",s),_.addEventListener("mousemove",a),{destroy(){_.removeEventListener("mouseover",o),_.removeEventListener("mouseleave",s),_.removeEventListener("mousemove",a)}}}function le(l,e,n){const t=l.slice();t[16]=e[n].name,t[17]=e[n].values;const o=t[8][t[16]];return t[18]=o,t}function te(l,e,n){const t=l.slice();return t[0]=e[n].x,t[1]=e[n].y,t}function ne(l,e,n){const t=l.slice();t[16]=e[n].name,t[17]=e[n].values;const o=t[8][t[16]];return t[18]=o,t}function oe(l,e,n){const t=l.slice();return t[0]=e[n].x,t[1]=e[n].y,t}function se(l,e,n){const t=l.slice();return t[27]=e[n],t}function ae(l,e,n){const t=l.slice();return t[27]=e[n],t}function re(l,e,n){const t=l.slice();return t[16]=e[n].name,t}function ie(l){let e,n,t,o=l[16]+"",a,s;return{c(){e=O("div"),n=O("span"),t=X(),a=K(o),s=X(),u(n,"class","legend-box svelte-1mjxput"),F(n,"background-color",l[8][l[16]]),u(e,"class","legend-item svelte-1mjxput")},m(_,r){L(_,e,r),w(e,n),w(e,t),w(e,a),w(e,s)},p(_,r){r[0]&260&&F(n,"background-color",_[8][_[16]]),r[0]&4&&o!==(o=_[16]+"")&&P(a,o)},d(_){_&&E(e)}}}function fe(l){let e,n,t,o,a,s,_=l[27]+"",r,p,d;return{c(){e=S("line"),s=S("text"),r=K(_),u(e,"stroke-width","0.5"),u(e,"x1",n=l[5](l[27])),u(e,"x2",t=l[5](l[27])),u(e,"y1",o=l[4](l[9][0]l[9][l[9].length-1]?l[6][1]:l[9][l[9].length-1])),u(e,"stroke","#aaa"),u(s,"class","label-text 
svelte-1mjxput"),u(s,"text-anchor","middle"),u(s,"x",p=l[5](l[27])),u(s,"y",d=l[4](l[9][0])+30)},m(i,h){L(i,e,h),L(i,s,h),w(s,r)},p(i,h){h[0]&1056&&n!==(n=i[5](i[27]))&&u(e,"x1",n),h[0]&1056&&t!==(t=i[5](i[27]))&&u(e,"x2",t),h[0]&592&&o!==(o=i[4](i[9][0]i[9][i[9].length-1]?i[6][1]:i[9][i[9].length-1]))&&u(e,"y2",a),h[0]&1024&&_!==(_=i[27]+"")&&P(r,_),h[0]&1056&&p!==(p=i[5](i[27]))&&u(s,"x",p),h[0]&528&&d!==(d=i[4](i[9][0])+30)&&u(s,"y",d)},d(i){i&&E(e),i&&E(s)}}}function _e(l){let e,n,t,o,a,s,_=l[27]+"",r,p,d;return{c(){e=S("line"),s=S("text"),r=K(_),u(e,"stroke-width","0.5"),u(e,"y1",n=l[4](l[27])),u(e,"y2",t=l[4](l[27])),u(e,"x1",o=l[5](l[10][0]l[10][l[10].length-1]?l[7][1]:l[10][l[10].length-1])),u(e,"stroke","#aaa"),u(s,"class","label-text svelte-1mjxput"),u(s,"text-anchor","end"),u(s,"y",p=l[4](l[27])+4),u(s,"x",d=l[5](l[10][0])-20)},m(i,h){L(i,e,h),L(i,s,h),w(s,r)},p(i,h){h[0]&528&&n!==(n=i[4](i[27]))&&u(e,"y1",n),h[0]&528&&t!==(t=i[4](i[27]))&&u(e,"y2",t),h[0]&1184&&o!==(o=i[5](i[10][0]i[10][i[10].length-1]?i[7][1]:i[10][i[10].length-1]))&&u(e,"x2",a),h[0]&512&&_!==(_=i[27]+"")&&P(r,_),h[0]&528&&p!==(p=i[4](i[27])+4)&&u(s,"y",p),h[0]&1056&&d!==(d=i[5](i[10][0])-20)&&u(s,"x",d)},d(i){i&&E(e),i&&E(s)}}}function ue(l){let e,n,t,o,a,s,_=l[6][1]+"",r,p,d;return{c(){e=S("line"),s=S("text"),r=K(_),u(e,"stroke-width","0.5"),u(e,"y1",n=l[4](l[6][1])),u(e,"y2",t=l[4](l[6][1])),u(e,"x1",o=l[5](l[10][0])),u(e,"x2",a=l[5](l[7][1])),u(e,"stroke","#aaa"),u(s,"class","label-text svelte-1mjxput"),u(s,"text-anchor","end"),u(s,"y",p=l[4](l[6][1])+4),u(s,"x",d=l[5](l[10][0])-20)},m(i,h){L(i,e,h),L(i,s,h),w(s,r)},p(i,h){h[0]&80&&n!==(n=i[4](i[6][1]))&&u(e,"y1",n),h[0]&80&&t!==(t=i[4](i[6][1]))&&u(e,"y2",t),h[0]&1056&&o!==(o=i[5](i[10][0]))&&u(e,"x1",o),h[0]&160&&a!==(a=i[5](i[7][1]))&&u(e,"x2",a),h[0]&64&&_!==(_=i[6][1]+"")&&P(r,_),h[0]&80&&p!==(p=i[4](i[6][1])+4)&&u(s,"y",p),h[0]&1056&&d!==(d=i[5](i[10][0])-20)&&u(s,"x",d)},d(i){i&&E(e),i&&E(s)}}}function ce(l){let e,n,t,o;return{c(){e=S("circle"),u(e,"r","3.5"),u(e,"cx",n=l[5](l[0])),u(e,"cy",t=l[4](l[1])),u(e,"stroke-width","1.5"),u(e,"stroke",o=l[18]),u(e,"fill","none")},m(a,s){L(a,e,s)},p(a,s){s[0]&36&&n!==(n=a[5](a[0]))&&u(e,"cx",n),s[0]&20&&t!==(t=a[4](a[1]))&&u(e,"cy",t),s[0]&260&&o!==(o=a[18])&&u(e,"stroke",o)},d(a){a&&E(e)}}}function me(l){let e,n,t,o=l[17],a=[];for(let s=0;sl[9][l[9].length-1]&&ue(l),C=l[2],j=[];for(let c=0;cc[9][c[9].length-1]?b?b.p(c,M):(b=ue(c),b.c(),b.m(a,null)):b&&(b.d(1),b=null),M[0]&308){C=c[2];let f;for(f=0;f{k("process",{x:t,y:o})});const y=({x:b,y:C})=>[_(b),r(C)];return l.$$set=b=>{"value"in b&&n(11,i=b.value),"x"in b&&n(0,h=b.x),"y"in b&&n(1,A=b.y),"colors"in b&&n(12,m=b.colors)},l.$$.update=()=>{l.$$.dirty[0]&2051&&n(3,{x:t,y:o}=ee(typeof i=="string"?He(i):i,h,A),t,(n(2,o),n(11,i),n(0,h),n(1,A))),l.$$.dirty[0]&8&&n(7,a=x(t)),l.$$.dirty[0]&4&&n(6,s=x(o)),l.$$.dirty[0]&128&&n(5,_=J(a,[0,600]).nice()),l.$$.dirty[0]&64&&n(4,r=J(s,[350,0]).nice()),l.$$.dirty[0]&32&&n(10,p=_.ticks(8)),l.$$.dirty[0]&16&&n(9,d=r.ticks(8)),l.$$.dirty[0]&4&&n(8,v=o.reduce((b,C,j)=>({...b,[C.name]:U(j)}),{}))},[h,A,o,t,r,_,s,a,v,d,p,i,m,y]}class ke extends R{constructor(e){super(),V(this,e,Ke,Ge,W,{value:11,x:0,y:1,colors:12},null,[-1,-1])}}function Pe(l){let e,n;return e=new ze({props:{filetype:"text/csv",include_file_metadata:!1,$$slots:{default:[We]},$$scope:{ctx:l}}}),e.$on("load",l[16]),{c(){N(e.$$.fragment)},m(t,o){H(e,t,o),n=!0},p(t,o){const 
a={};o&1048576&&(a.$$scope={dirty:o,ctx:t}),e.$set(a)},i(t){n||(z(e.$$.fragment,t),n=!0)},o(t){B(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function Re(l){let e,n,t,o,a;return n=new Be({}),n.$on("clear",l[14]),o=new ke({props:{value:l[11],y:l[4],x:l[5],colors:l[9]}}),o.$on("process",l[15]),{c(){e=O("div"),N(n.$$.fragment),t=X(),N(o.$$.fragment),u(e,"class","chart svelte-etmurc")},m(s,_){L(s,e,_),H(n,e,null),w(e,t),H(o,e,null),a=!0},p(s,_){const r={};_&2048&&(r.value=s[11]),_&16&&(r.y=s[4]),_&32&&(r.x=s[5]),_&512&&(r.colors=s[9]),o.$set(r)},i(s){a||(z(n.$$.fragment,s),z(o.$$.fragment,s),a=!0)},o(s){B(n.$$.fragment,s),B(o.$$.fragment,s),a=!1},d(s){s&&E(e),q(n),q(o)}}}function Ve(l){let e,n,t,o;const a=[Je,Ze],s=[];function _(r,p){return r[12]?0:1}return e=_(l),n=s[e]=a[e](l),{c(){n.c(),t=G()},m(r,p){s[e].m(r,p),L(r,t,p),o=!0},p(r,p){let d=e;e=_(r),e===d?s[e].p(r,p):(pe(),B(s[d],1,1,()=>{s[d]=null}),ve(),n=s[e],n?n.p(r,p):(n=s[e]=a[e](r),n.c()),z(n,1),n.m(t.parentNode,t))},i(r){o||(z(n),o=!0)},o(r){B(n),o=!1},d(r){s[e].d(r),r&&E(t)}}}function We(l){let e,n;return e=new qe({props:{type:"csv"}}),{c(){N(e.$$.fragment)},m(t,o){H(e,t,o),n=!0},p:I,i(t){n||(z(e.$$.fragment,t),n=!0)},o(t){B(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function Ze(l){let e,n;return e=new Fe({props:{size:"large",unpadded_box:!0,$$slots:{default:[Qe]},$$scope:{ctx:l}}}),{c(){N(e.$$.fragment)},m(t,o){H(e,t,o),n=!0},p(t,o){const a={};o&1048576&&(a.$$scope={dirty:o,ctx:t}),e.$set(a)},i(t){n||(z(e.$$.fragment,t),n=!0)},o(t){B(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function Je(l){let e,n;return e=new ke({props:{value:l[12],colors:l[9]}}),{c(){N(e.$$.fragment)},m(t,o){H(e,t,o),n=!0},p(t,o){const a={};o&4096&&(a.value=t[12]),o&512&&(a.colors=t[9]),e.$set(a)},i(t){n||(z(e.$$.fragment,t),n=!0)},o(t){B(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function Qe(l){let e,n;return e=new be({}),{c(){N(e.$$.fragment)},m(t,o){H(e,t,o),n=!0},i(t){n||(z(e.$$.fragment,t),n=!0)},o(t){B(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function $e(l){let e,n,t,o,a,s,_,r;e=new Ue({props:{show_label:l[8],Icon:be,label:l[7]||"TimeSeries"}});const p=[l[10]];let d={};for(let m=0;m{h[y]=null}),ve()),~a?(s=h[a],s?s.p(m,k):(s=h[a]=i[a](m),s.c()),z(s,1),s.m(_.parentNode,_)):s=null)},i(m){r||(z(e.$$.fragment,m),z(t.$$.fragment,m),z(s),r=!0)},o(m){B(e.$$.fragment,m),B(t.$$.fragment,m),B(s),r=!1},d(m){q(e,m),m&&E(n),q(t,m),m&&E(o),~a&&h[a].d(m),m&&E(_)}}}function xe(l){let e,n;return e=new Se({props:{visible:l[3],variant:l[6]==="dynamic"&&!l[11]?"dashed":"solid",padding:!1,elem_id:l[1],elem_classes:l[2],$$slots:{default:[$e]},$$scope:{ctx:l}}}),{c(){N(e.$$.fragment)},m(t,o){H(e,t,o),n=!0},p(t,[o]){const a={};o&8&&(a.visible=t[3]),o&2112&&(a.variant=t[6]==="dynamic"&&!t[11]?"dashed":"solid"),o&2&&(a.elem_id=t[1]),o&4&&(a.elem_classes=t[2]),o&1056753&&(a.$$scope={dirty:o,ctx:t}),e.$set(a)},i(t){n||(z(e.$$.fragment,t),n=!0)},o(t){B(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function el(l){return l.data.map(e=>e.reduce((n,t,o)=>({...n,[l.headers[o]]:t}),{}))}function ll(l){const e=atob(l.split(",")[1]),n=l.split(",")[0].split(":")[1].split(";")[0],t=new ArrayBuffer(e.length),o=new Uint8Array(t);for(let a=0;an.push(o));for(let o=0;oa.push(s[o].y)),t.push(a)}return{headers:n,data:t}}function nl(l,e,n){let t;const o=de();let{elem_id:a=""}=e,{elem_classes:s=[]}=e,{visible:_=!0}=e,{value:r}=e,{y:p}=e,{x:d}=e,{mode:i}=e,{label:h}=e,{show_label:A}=e,{colors:m}=e,{loading_status:k}=e,v;function U(g){const c=new FileReader;c.addEventListener("loadend",M=>{n(11,v=M.srcElement.result)}),c.readAsText(g)}function 
y(g){g.headers&&n(11,v=g.headers.join(",")),g.data.forEach(M=>{n(11,v=v+` -`),n(11,v=v+M.join(","))})}function b(g){return n(0,r={data:g}),g}function C({detail:g}){n(0,r=null),o("change"),o("clear")}const j=({detail:{x:g,y:c}})=>n(0,r=tl(g,c)),D=({detail:g})=>b(g);return l.$$set=g=>{"elem_id"in g&&n(1,a=g.elem_id),"elem_classes"in g&&n(2,s=g.elem_classes),"visible"in g&&n(3,_=g.visible),"value"in g&&n(0,r=g.value),"y"in g&&n(4,p=g.y),"x"in g&&n(5,d=g.x),"mode"in g&&n(6,i=g.mode),"label"in g&&n(7,h=g.label),"show_label"in g&&n(8,A=g.show_label),"colors"in g&&n(9,m=g.colors),"loading_status"in g&&n(10,k=g.loading_status)},l.$$.update=()=>{l.$$.dirty&1&&(r&&r.data&&typeof r.data=="string"?r?U(ll(r.data)):n(11,v=null):r&&r.data&&typeof r.data!="string"&&(r||n(11,v=null),y(r))),l.$$.dirty&2049&&n(11,v=r==null?null:v),l.$$.dirty&65&&n(12,t=i==="static"&&r&&el(r)),l.$$.dirty&1&&o("change")},[r,a,s,_,p,d,i,h,A,m,k,v,t,b,C,j,D]}class ol extends R{constructor(e){super(),V(this,e,nl,xe,W,{elem_id:1,elem_classes:2,visible:3,value:0,y:4,x:5,mode:6,label:7,show_label:8,colors:9,loading_status:10})}}const kl=ol,yl=["static","dynamic"],wl=l=>({type:{payload:"{data: Array> | string; headers?: Array;}"},description:{payload:"dataset of series"}});export{kl as Component,wl as document,yl as modes}; -//# sourceMappingURL=index-3fb2ee4c.js.map diff --git a/spaces/lambdalabs/LambdaSuperRes/KAIR/main_challenge_sr.py b/spaces/lambdalabs/LambdaSuperRes/KAIR/main_challenge_sr.py deleted file mode 100644 index 0798dd31904adf647f0834a8ce4873438fad037f..0000000000000000000000000000000000000000 --- a/spaces/lambdalabs/LambdaSuperRes/KAIR/main_challenge_sr.py +++ /dev/null @@ -1,174 +0,0 @@ -import os.path -import logging -import time -from collections import OrderedDict -import torch - -from utils import utils_logger -from utils import utils_image as util -# from utils import utils_model - - -''' -This code can help you to calculate: -`FLOPs`, `#Params`, `Runtime`, `#Activations`, `#Conv`, and `Max Memory Allocated`. - -- `#Params' denotes the total number of parameters. -- `FLOPs' is the abbreviation for floating point operations. -- `#Activations' measures the number of elements of all outputs of convolutional layers. -- `Memory' represents maximum GPU memory consumption according to the PyTorch function torch.cuda.max_memory_allocated(). -- `#Conv' represents the number of convolutional layers. -- `FLOPs', `#Activations', and `Memory' are tested on an LR image of size 256x256. - -For more information, please refer to ECCVW paper "AIM 2020 Challenge on Efficient Super-Resolution: Methods and Results". - -# If you use this code, please consider the following citations: - -@inproceedings{zhang2020aim, - title={AIM 2020 Challenge on Efficient Super-Resolution: Methods and Results}, - author={Kai Zhang and Martin Danelljan and Yawei Li and Radu Timofte and others}, - booktitle={European Conference on Computer Vision Workshops}, - year={2020} -} -@inproceedings{zhang2019aim, - title={AIM 2019 Challenge on Constrained Super-Resolution: Methods and Results}, - author={Kai Zhang and Shuhang Gu and Radu Timofte and others}, - booktitle={IEEE International Conference on Computer Vision Workshops}, - year={2019} -} - -CuDNN (https://developer.nvidia.com/rdp/cudnn-archive) should be installed. - -For `Memery` and `Runtime`, set 'print_modelsummary = False' and 'save_results = False'. 
-''' - - - - -def main(): - - utils_logger.logger_info('efficientsr_challenge', log_path='efficientsr_challenge.log') - logger = logging.getLogger('efficientsr_challenge') - -# print(torch.__version__) # pytorch version -# print(torch.version.cuda) # cuda version -# print(torch.backends.cudnn.version()) # cudnn version - - # -------------------------------- - # basic settings - # -------------------------------- - model_names = ['msrresnet', 'imdn'] - model_id = 1 # set the model name - sf = 4 - model_name = model_names[model_id] - logger.info('{:>16s} : {:s}'.format('Model Name', model_name)) - - testsets = 'testsets' # set path of testsets - testset_L = 'DIV2K_valid_LR' # set current testing dataset; 'DIV2K_test_LR' - testset_L = 'set12' - - save_results = True - print_modelsummary = True # set False when calculating `Max Memery` and `Runtime` - - torch.cuda.set_device(0) # set GPU ID - logger.info('{:>16s} : {:16s} : {:<.4f} [M]'.format('#Activations', activations/10**6)) - logger.info('{:>16s} : {:16s} : {:<.4f} [G]'.format('FLOPs', flops/10**9)) - - num_parameters = sum(map(lambda x: x.numel(), model.parameters())) - logger.info('{:>16s} : {:<.4f} [M]'.format('#Params', num_parameters/10**6)) - - # -------------------------------- - # read image - # -------------------------------- - L_path = os.path.join(testsets, testset_L) - E_path = os.path.join(testsets, testset_L+'_'+model_name) - util.mkdir(E_path) - - # record runtime - test_results = OrderedDict() - test_results['runtime'] = [] - - logger.info('{:>16s} : {:s}'.format('Input Path', L_path)) - logger.info('{:>16s} : {:s}'.format('Output Path', E_path)) - idx = 0 - - start = torch.cuda.Event(enable_timing=True) - end = torch.cuda.Event(enable_timing=True) - - for img in util.get_image_paths(L_path): - - # -------------------------------- - # (1) img_L - # -------------------------------- - idx += 1 - img_name, ext = os.path.splitext(os.path.basename(img)) - logger.info('{:->4d}--> {:>10s}'.format(idx, img_name+ext)) - - img_L = util.imread_uint(img, n_channels=3) - img_L = util.uint2tensor4(img_L) - torch.cuda.empty_cache() - img_L = img_L.to(device) - - start.record() - img_E = model(img_L) - # img_E = utils_model.test_mode(model, img_L, mode=2, min_size=480, sf=sf) # use this to avoid 'out of memory' issue. 
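-        # Note (editorial, illustrative; not part of the original file): CUDA kernel
-        # launches are asynchronous, so wrapping model(img_L) in time.time() alone
-        # would mostly measure launch overhead. Recording start/end events on the
-        # CUDA stream and calling torch.cuda.synchronize() before querying
-        # start.elapsed_time(end) yields the actual GPU-side runtime in milliseconds.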
- # logger.info('{:>16s} : {:<.3f} [M]'.format('Max Memery', torch.cuda.max_memory_allocated(torch.cuda.current_device())/1024**2)) # Memery - end.record() - torch.cuda.synchronize() - test_results['runtime'].append(start.elapsed_time(end)) # milliseconds - - -# torch.cuda.synchronize() -# start = time.time() -# img_E = model(img_L) -# torch.cuda.synchronize() -# end = time.time() -# test_results['runtime'].append(end-start) # seconds - - # -------------------------------- - # (2) img_E - # -------------------------------- - img_E = util.tensor2uint(img_E) - - if save_results: - util.imsave(img_E, os.path.join(E_path, img_name+ext)) - ave_runtime = sum(test_results['runtime']) / len(test_results['runtime']) / 1000.0 - logger.info('------> Average runtime of ({}) is : {:.6f} seconds'.format(L_path, ave_runtime)) - - -if __name__ == '__main__': - - main() diff --git a/spaces/lawliet/CS224-knowledge-discovery/app.py b/spaces/lawliet/CS224-knowledge-discovery/app.py deleted file mode 100644 index 965b9748331418d00a6438be1588b0e93a77366a..0000000000000000000000000000000000000000 --- a/spaces/lawliet/CS224-knowledge-discovery/app.py +++ /dev/null @@ -1,92 +0,0 @@ -import asyncio -import json -import os - -import streamlit as st -from aiocache import Cache, cached -from src.generate import get_openai_answer -from src.retrieve import get_pinecone_results - -st.title("CS224U knowledge discovery on background Materials") -# st.header("Welcome to the Llama Index Streamlit Demo") -st.write( - """ - This is a quick search engine for finding answers relevant to CS224U basic questions. - The demo contains a subset of notes relevant to CS224U background materials. This does not include any lecture notes of CS224U.""" -) - -index = None -api_key = st.text_input("Enter your OpenAI API key here:", type="password") -if api_key: - os.environ["OPENAI_API_KEY"] = api_key - - -if index is None: - st.warning("Please enter your api key first.") - -text = st.text_input("Query text:", value="different ways to normalize layers?") - - -@cached(ttl=None, cache=Cache.MEMORY) -async def run_query(_q: str, only_retrieve): - pinecone_response, token_usage_vec = await get_pinecone_results(_q, 10) - if not only_retrieve: - answer, token_usage_qa = await get_openai_answer(_q, pinecone_response) - else: - answer, token_usage_qa = "", 0 - return { - "pinecone_response": pinecone_response, - "token_usage_vec": token_usage_vec, - "answer": answer, - "token_usage_qa": token_usage_qa, - } - - -checkbox = False -if st.checkbox("Disable QA and only show search results"): - checkbox = True - - -async def main(): - if st.button("Run Query") and text is not None: - - answer_dict = await run_query(text, checkbox) - pinecone_response = answer_dict["pinecone_response"] - token_usage_vec = answer_dict["token_usage_vec"] - answer = answer_dict["answer"] - token_usage_qa = answer_dict["token_usage_qa"] - - if answer: - st.success("#### Generated Answer: ") - st.markdown(f"""{answer}""", unsafe_allow_html=True) - - # st.divider() - st.markdown( - """
        """, - unsafe_allow_html=True, - ) - st.success("#### Most relevant notes: ") - - reference_sources = set() - for source in pinecone_response["matches"]: - if source["metadata"]["filename"] in reference_sources: - continue - reference_sources.add(source["metadata"]["filename"]) - st.markdown( - f"[{source['metadata']['filename']}]({source['metadata']['link']})" - ) - # st.info(f"Example Snippet: {source['metadata']['content'][:100]}...") - with st.expander("Relevant Snippet"): - st.markdown( - f"""{source["metadata"]["content"]}""", unsafe_allow_html=True - ) - - llm_col, embed_col = st.columns(2) - with llm_col: - st.info(f"LLM Tokens Used: {token_usage_qa}") - - with embed_col: - st.info(f"Embedding Tokens Used: {token_usage_vec}") - - -asyncio.run(main()) diff --git a/spaces/leogabraneth/text-generation-webui-main/modules/models.py b/spaces/leogabraneth/text-generation-webui-main/modules/models.py deleted file mode 100644 index cbead69d73c7af05e1d5e1fcd357b4eba3526fe4..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/modules/models.py +++ /dev/null @@ -1,422 +0,0 @@ -import gc -import os -import re -import time -import traceback -from pathlib import Path - -import torch -import transformers -from accelerate import infer_auto_device_map, init_empty_weights -from accelerate.utils import is_ccl_available, is_xpu_available -from transformers import ( - AutoConfig, - AutoModel, - AutoModelForCausalLM, - AutoModelForSeq2SeqLM, - AutoTokenizer, - BitsAndBytesConfig, - GPTQConfig -) - -import modules.shared as shared -from modules import RoPE, llama_attn_hijack, sampler_hijack -from modules.logging_colors import logger -from modules.models_settings import get_model_metadata - -transformers.logging.set_verbosity_error() - -local_rank = None -if shared.args.deepspeed: - import deepspeed - from transformers.deepspeed import ( - HfDeepSpeedConfig, - is_deepspeed_zero3_enabled - ) - - from modules.deepspeed_parameters import generate_ds_config - - # Distributed setup - local_rank = shared.args.local_rank if shared.args.local_rank is not None else int(os.getenv("LOCAL_RANK", "0")) - world_size = int(os.getenv("WORLD_SIZE", "1")) - if is_xpu_available() and is_ccl_available(): - torch.xpu.set_device(local_rank) - deepspeed.init_distributed(backend="ccl") - else: - torch.cuda.set_device(local_rank) - deepspeed.init_distributed() - ds_config = generate_ds_config(shared.args.bf16, 1 * world_size, shared.args.nvme_offload_dir) - dschf = HfDeepSpeedConfig(ds_config) # Keep this object alive for the Transformers integration - -sampler_hijack.hijack_samplers() - - -def load_model(model_name, loader=None): - logger.info(f"Loading {model_name}...") - t0 = time.time() - - shared.is_seq2seq = False - load_func_map = { - 'Transformers': huggingface_loader, - 'AutoGPTQ': AutoGPTQ_loader, - 'GPTQ-for-LLaMa': GPTQ_loader, - 'llama.cpp': llamacpp_loader, - 'llamacpp_HF': llamacpp_HF_loader, - 'RWKV': RWKV_loader, - 'ExLlama': ExLlama_loader, - 'ExLlama_HF': ExLlama_HF_loader, - 'ExLlamav2': ExLlamav2_loader, - 'ExLlamav2_HF': ExLlamav2_HF_loader, - 'ctransformers': ctransformers_loader, - 'AutoAWQ': AutoAWQ_loader, - } - - if loader is None: - if shared.args.loader is not None: - loader = shared.args.loader - else: - loader = get_model_metadata(model_name)['loader'] - if loader is None: - logger.error('The path to the model does not exist. 
Exiting.') - return None, None - - shared.args.loader = loader - output = load_func_map[loader](model_name) - if type(output) is tuple: - model, tokenizer = output - else: - model = output - if model is None: - return None, None - else: - tokenizer = load_tokenizer(model_name, model) - - # Hijack attention with xformers - if any((shared.args.xformers, shared.args.sdp_attention)): - llama_attn_hijack.hijack_llama_attention() - - logger.info(f"Loaded the model in {(time.time()-t0):.2f} seconds.") - return model, tokenizer - - -def load_tokenizer(model_name, model): - tokenizer = None - path_to_model = Path(f"{shared.args.model_dir}/{model_name}/") - if any(s in model_name.lower() for s in ['gpt-4chan', 'gpt4chan']) and Path(f"{shared.args.model_dir}/gpt-j-6B/").exists(): - tokenizer = AutoTokenizer.from_pretrained(Path(f"{shared.args.model_dir}/gpt-j-6B/")) - elif path_to_model.exists(): - if shared.args.use_fast: - logger.info('Loading the tokenizer with use_fast=True.') - - tokenizer = AutoTokenizer.from_pretrained( - path_to_model, - trust_remote_code=shared.args.trust_remote_code, - use_fast=shared.args.use_fast - ) - - return tokenizer - - -def huggingface_loader(model_name): - - path_to_model = Path(f'{shared.args.model_dir}/{model_name}') - params = { - 'low_cpu_mem_usage': True, - 'trust_remote_code': shared.args.trust_remote_code, - 'torch_dtype': torch.bfloat16 if shared.args.bf16 else torch.float16 - } - config = AutoConfig.from_pretrained(path_to_model, trust_remote_code=params['trust_remote_code']) - - if 'chatglm' in model_name.lower(): - LoaderClass = AutoModel - else: - if config.to_dict().get('is_encoder_decoder', False): - LoaderClass = AutoModelForSeq2SeqLM - shared.is_seq2seq = True - else: - LoaderClass = AutoModelForCausalLM - - # Load the model in simple 16-bit mode by default - if not any([shared.args.cpu, shared.args.load_in_8bit, shared.args.load_in_4bit, shared.args.auto_devices, shared.args.disk, shared.args.deepspeed, shared.args.gpu_memory is not None, shared.args.cpu_memory is not None, shared.args.compress_pos_emb > 1, shared.args.alpha_value > 1, shared.args.disable_exllama]): - model = LoaderClass.from_pretrained(path_to_model, **params) - if torch.backends.mps.is_available(): - device = torch.device('mps') - model = model.to(device) - elif is_xpu_available(): - device = torch.device("xpu") - model = model.to(device) - else: - model = model.cuda() - - # DeepSpeed ZeRO-3 - elif shared.args.deepspeed: - model = LoaderClass.from_pretrained(path_to_model, torch_dtype=params['torch_dtype']) - model = deepspeed.initialize(model=model, config_params=ds_config, model_parameters=None, optimizer=None, lr_scheduler=None)[0] - model.module.eval() # Inference - logger.info(f'DeepSpeed ZeRO-3 is enabled: {is_deepspeed_zero3_enabled()}') - - # Load with quantization and/or offloading - else: - - if not any((shared.args.cpu, torch.cuda.is_available(), is_xpu_available(), torch.backends.mps.is_available())): - logger.warning('torch.cuda.is_available() and is_xpu_available() returned False. This means that no GPU has been detected. 
Falling back to CPU mode.') - - shared.args.cpu = True - - if shared.args.cpu: - params['torch_dtype'] = torch.float32 - else: - params['device_map'] = 'auto' - params['max_memory'] = get_max_memory_dict() - if shared.args.load_in_4bit: - # See https://github.com/huggingface/transformers/pull/23479/files - # and https://huggingface.co/blog/4bit-transformers-bitsandbytes - quantization_config_params = { - 'load_in_4bit': True, - 'bnb_4bit_compute_dtype': eval("torch.{}".format(shared.args.compute_dtype)) if shared.args.compute_dtype in ["bfloat16", "float16", "float32"] else None, - 'bnb_4bit_quant_type': shared.args.quant_type, - 'bnb_4bit_use_double_quant': shared.args.use_double_quant, - } - - logger.info('Using the following 4-bit params: ' + str(quantization_config_params)) - params['quantization_config'] = BitsAndBytesConfig(**quantization_config_params) - - elif shared.args.load_in_8bit: - if any((shared.args.auto_devices, shared.args.gpu_memory)): - params['quantization_config'] = BitsAndBytesConfig(load_in_8bit=True, llm_int8_enable_fp32_cpu_offload=True) - else: - params['quantization_config'] = BitsAndBytesConfig(load_in_8bit=True) - - if params['max_memory'] is not None: - with init_empty_weights(): - model = LoaderClass.from_config(config, trust_remote_code=params['trust_remote_code']) - - model.tie_weights() - params['device_map'] = infer_auto_device_map( - model, - dtype=torch.int8, - max_memory=params['max_memory'], - no_split_module_classes=model._no_split_modules - ) - - if shared.args.disk: - params['offload_folder'] = shared.args.disk_cache_dir - - if shared.args.disable_exllama: - try: - gptq_config = GPTQConfig(bits=config.quantization_config.get('bits', 4), disable_exllama=True) - params['quantization_config'] = gptq_config - logger.info('Loading with ExLlama kernel disabled.') - except: - exc = traceback.format_exc() - logger.error('Failed to disable exllama. Does the config.json for this model contain the necessary quantization info?') - print(exc) - - if shared.args.compress_pos_emb > 1: - params['rope_scaling'] = {'type': 'linear', 'factor': shared.args.compress_pos_emb} - elif shared.args.alpha_value > 1: - params['rope_scaling'] = {'type': 'dynamic', 'factor': RoPE.get_alpha_value(shared.args.alpha_value, shared.args.rope_freq_base)} - - model = LoaderClass.from_pretrained(path_to_model, **params) - - return model - - -def llamacpp_loader(model_name): - from modules.llamacpp_model import LlamaCppModel - - path = Path(f'{shared.args.model_dir}/{model_name}') - if path.is_file(): - model_file = path - else: - model_file = list(Path(f'{shared.args.model_dir}/{model_name}').glob('*.gguf'))[0] - - logger.info(f"llama.cpp weights detected: {model_file}") - model, tokenizer = LlamaCppModel.from_pretrained(model_file) - return model, tokenizer - - -def llamacpp_HF_loader(model_name): - from modules.llamacpp_hf import LlamacppHF - - for fname in [model_name, "oobabooga_llama-tokenizer", "llama-tokenizer"]: - path = Path(f'{shared.args.model_dir}/{fname}') - if all((path / file).exists() for file in ['tokenizer_config.json', 'special_tokens_map.json', 'tokenizer.model']): - logger.info(f'Using tokenizer from: {path}') - break - else: - logger.error("Could not load the model because a tokenizer in transformers format was not found. 
Please download oobabooga/llama-tokenizer.") - return None, None - - if shared.args.use_fast: - logger.info('Loading the tokenizer with use_fast=True.') - - tokenizer = AutoTokenizer.from_pretrained( - path, - trust_remote_code=shared.args.trust_remote_code, - use_fast=shared.args.use_fast - ) - - model = LlamacppHF.from_pretrained(model_name) - return model, tokenizer - - -def ctransformers_loader(model_name): - from modules.ctransformers_model import CtransformersModel - - path = Path(f'{shared.args.model_dir}/{model_name}') - ctrans = CtransformersModel() - if ctrans.model_type_is_auto(): - model_file = path - else: - if path.is_file(): - model_file = path - else: - entries = Path(f'{shared.args.model_dir}/{model_name}') - gguf = list(entries.glob('*.gguf')) - bin = list(entries.glob('*.bin')) - if len(gguf) > 0: - model_file = gguf[0] - elif len(bin) > 0: - model_file = bin[0] - else: - logger.error("Could not find a model for ctransformers.") - return None, None - - logger.info(f'ctransformers weights detected: {model_file}') - model, tokenizer = ctrans.from_pretrained(model_file) - return model, tokenizer - - -def AutoAWQ_loader(model_name): - from awq import AutoAWQForCausalLM - - model_dir = Path(f'{shared.args.model_dir}/{model_name}') - - model = AutoAWQForCausalLM.from_quantized( - quant_path=model_dir, - max_new_tokens=shared.args.max_seq_len, - trust_remote_code=shared.args.trust_remote_code, - fuse_layers=not shared.args.no_inject_fused_attention, - max_memory=get_max_memory_dict(), - batch_size=1, - safetensors=any(model_dir.glob('*.safetensors')), - ) - - return model - - -def GPTQ_loader(model_name): - - # Monkey patch - if shared.args.monkey_patch: - logger.warning("Applying the monkey patch for using LoRAs with GPTQ models. It may cause undefined behavior outside its intended scope.") - from modules.monkey_patch_gptq_lora import load_model_llama - - model, _ = load_model_llama(model_name) - - # No monkey patch - else: - import modules.GPTQ_loader - - model = modules.GPTQ_loader.load_quantized(model_name) - - return model - - -def AutoGPTQ_loader(model_name): - import modules.AutoGPTQ_loader - - return modules.AutoGPTQ_loader.load_quantized(model_name) - - -def ExLlama_loader(model_name): - from modules.exllama import ExllamaModel - - model, tokenizer = ExllamaModel.from_pretrained(model_name) - return model, tokenizer - - -def ExLlama_HF_loader(model_name): - from modules.exllama_hf import ExllamaHF - - return ExllamaHF.from_pretrained(model_name) - - -def ExLlamav2_loader(model_name): - from modules.exllamav2 import Exllamav2Model - - model, tokenizer = Exllamav2Model.from_pretrained(model_name) - return model, tokenizer - - -def ExLlamav2_HF_loader(model_name): - from modules.exllamav2_hf import Exllamav2HF - - return Exllamav2HF.from_pretrained(model_name) - - -def RWKV_loader(model_name): - ''' - This loader is not currently maintained as RWKV can now be loaded - through the transformers library. 
- ''' - from modules.RWKV import RWKVModel, RWKVTokenizer - - model = RWKVModel.from_pretrained( - Path(f'{shared.args.model_dir}/{model_name}'), - dtype="fp32" if shared.args.cpu else "bf16" if shared.args.bf16 else "fp16", - device="cpu" if shared.args.cpu else "xpu" if is_xpu_available() else "cuda" - ) - - tokenizer = RWKVTokenizer.from_pretrained(Path(shared.args.model_dir)) - return model, tokenizer - - -def get_max_memory_dict(): - max_memory = {} - if shared.args.gpu_memory: - memory_map = list(map(lambda x: x.strip(), shared.args.gpu_memory)) - for i in range(len(memory_map)): - max_memory[i] = f'{memory_map[i]}GiB' if not re.match('.*ib$', memory_map[i].lower()) else memory_map[i] - - max_cpu_memory = shared.args.cpu_memory.strip() if shared.args.cpu_memory is not None else '99GiB' - max_memory['cpu'] = f'{max_cpu_memory}GiB' if not re.match('.*ib$', max_cpu_memory.lower()) else max_cpu_memory - - # If --auto-devices is provided standalone, try to get a reasonable value - # for the maximum memory of device :0 - elif shared.args.auto_devices: - if is_xpu_available(): - total_mem = (torch.xpu.get_device_properties(0).total_memory / (1024 * 1024)) - else: - total_mem = (torch.cuda.get_device_properties(0).total_memory / (1024 * 1024)) - suggestion = round((total_mem - 1000) / 1000) * 1000 - if total_mem - suggestion < 800: - suggestion -= 1000 - - suggestion = int(round(suggestion / 1000)) - logger.warning(f"Auto-assiging --gpu-memory {suggestion} for your GPU to try to prevent out-of-memory errors. You can manually set other values.") - max_memory = {0: f'{suggestion}GiB', 'cpu': f'{shared.args.cpu_memory or 99}GiB'} - - return max_memory if len(max_memory) > 0 else None - - -def clear_torch_cache(): - gc.collect() - if not shared.args.cpu: - if is_xpu_available(): - torch.xpu.empty_cache() - else: - torch.cuda.empty_cache() - - -def unload_model(): - shared.model = shared.tokenizer = None - shared.lora_names = [] - shared.model_dirty_from_training = False - clear_torch_cache() - - -def reload_model(): - unload_model() - shared.model, shared.tokenizer = load_model(shared.model_name) diff --git a/spaces/leurez/moss/tailwind.config.js b/spaces/leurez/moss/tailwind.config.js deleted file mode 100644 index 66c6c725daee5e2295eea9574283c5ae0af3cbea..0000000000000000000000000000000000000000 --- a/spaces/leurez/moss/tailwind.config.js +++ /dev/null @@ -1,22 +0,0 @@ -/** @type {import('tailwindcss').Config} */ -module.exports = { - darkMode: 'class', - content: [ - './index.html', - './src/**/*.{vue,js,ts,jsx,tsx}', - ], - theme: { - extend: { - animation: { - blink: 'blink 1.2s infinite steps(1, start)', - }, - keyframes: { - blink: { - '0%, 100%': { 'background-color': 'currentColor' }, - '50%': { 'background-color': 'transparent' }, - }, - }, - }, - }, - plugins: [], -} diff --git a/spaces/librarian-bots/hub-analysis/index.html b/spaces/librarian-bots/hub-analysis/index.html deleted file mode 100644 index cf03d97a5f24da05f6368c7cfd178112c2969d7a..0000000000000000000000000000000000000000 --- a/spaces/librarian-bots/hub-analysis/index.html +++ /dev/null @@ -1,28 +0,0 @@ - - - - - - My static Space - - - - -
-        <div class="card">
-            <h1>🤗 Hub analysis notebooks 🤗</h1>
-            <p>This Space collects notebooks which analyze the Hugging Face Hub in various ways.</p>
-            <h2>Current notebooks</h2>
-            <!-- notebook links lost in extraction -->
-            <h2>Community analysis work</h2>
-            <!-- community links lost in extraction -->
        - - diff --git a/spaces/limingcv/AlignDet/finetune/finetune_detr_100e_voc0712/detr_mstrain_100e_voc0712.py b/spaces/limingcv/AlignDet/finetune/finetune_detr_100e_voc0712/detr_mstrain_100e_voc0712.py deleted file mode 100644 index f8872ebd70a88f56505cb562d43da1d33a0d571b..0000000000000000000000000000000000000000 --- a/spaces/limingcv/AlignDet/finetune/finetune_detr_100e_voc0712/detr_mstrain_100e_voc0712.py +++ /dev/null @@ -1,218 +0,0 @@ -model = dict( - type='DETR', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(3, ), - frozen_stages=1, - norm_cfg=dict(type='SyncBN', requires_grad=True), - norm_eval=True, - style='pytorch', - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')), - bbox_head=dict( - type='DETRHead', - num_classes=20, - in_channels=2048, - transformer=dict( - type='Transformer', - encoder=dict( - type='DetrTransformerEncoder', - num_layers=6, - transformerlayers=dict( - type='BaseTransformerLayer', - attn_cfgs=[ - dict( - type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1) - ], - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', 'ffn', 'norm'))), - decoder=dict( - type='DetrTransformerDecoder', - return_intermediate=True, - num_layers=6, - transformerlayers=dict( - type='DetrTransformerDecoderLayer', - attn_cfgs=dict( - type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1), - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', 'cross_attn', 'norm', - 'ffn', 'norm')))), - positional_encoding=dict( - type='SinePositionalEncoding', num_feats=128, normalize=True), - loss_cls=dict( - type='CrossEntropyLoss', - bg_cls_weight=0.1, - use_sigmoid=False, - loss_weight=1.0, - class_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=5.0), - loss_iou=dict(type='GIoULoss', loss_weight=2.0)), - train_cfg=dict( - assigner=dict( - type='HungarianAssigner', - cls_cost=dict(type='ClassificationCost', weight=1.0), - reg_cost=dict(type='BBoxL1Cost', weight=5.0, box_format='xywh'), - iou_cost=dict(type='IoUCost', iou_mode='giou', weight=2.0))), - test_cfg=dict(max_per_img=100)) -dataset_type = 'VOCDataset' -data_root = 'data/VOCdevkit/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 480), (1333, 512), (1333, 544), (1333, 576), - (1333, 608), (1333, 640), (1333, 672), (1333, 704), - (1333, 736), (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='VOCDataset', - ann_file=[ - 
'data/VOCdevkit/VOC2007/ImageSets/Main/trainval.txt', - 'data/VOCdevkit/VOC2012/ImageSets/Main/trainval.txt' - ], - img_prefix=['data/VOCdevkit/VOC2007/', 'data/VOCdevkit/VOC2012/'], - pipeline=[ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 480), (1333, 512), (1333, 544), (1333, 576), - (1333, 608), (1333, 640), (1333, 672), (1333, 704), - (1333, 736), (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) - ]), - val=dict( - type='VOCDataset', - ann_file='data/VOCdevkit/VOC2007/ImageSets/Main/test.txt', - img_prefix='data/VOCdevkit/VOC2007/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) - ]), - test=dict( - type='VOCDataset', - ann_file='data/VOCdevkit/VOC2007/ImageSets/Main/test.txt', - img_prefix='data/VOCdevkit/VOC2007/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) - ])) -evaluation = dict(interval=1, metric='mAP', save_best='auto') -checkpoint_config = dict(interval=1) -log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')]) -custom_hooks = [ - dict(type='NumClassCheckHook'), - dict( - type='MMDetWandbHook', - init_kwargs=dict(project='I2B', group='finetune'), - interval=50, - num_eval_images=0, - log_checkpoint=False) -] -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = 'pretrain/selfsup_detr_clusters-as-classes_add-contrastive-temp0.5-weight1.0/final_model.pth' -resume_from = None -workflow = [('train', 1)] -opencv_num_threads = 0 -mp_start_method = 'fork' -auto_scale_lr = dict(enable=False, base_batch_size=16) -custom_imports = None -norm_cfg = dict(type='SyncBN', requires_grad=True) -optimizer = dict( - type='AdamW', - lr=0.0001, - weight_decay=0.0001, - paramwise_cfg=dict( - custom_keys=dict(backbone=dict(lr_mult=0.1, decay_mult=1.0)))) -optimizer_config = dict(grad_clip=None) -lr_config = dict(policy='step', step=[70]) -runner = dict(type='EpochBasedRunner', max_epochs=100) -work_dir = 'work_dirs/finetune_detr_100e_voc0712' -auto_resume = False -gpu_ids = range(0, 8) diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/ContentVec256L12_Onnx.py b/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/ContentVec256L12_Onnx.py deleted file mode 100644 index 9ad5085e02654fd1fcfbdad7d476bfa9b763d2c6..0000000000000000000000000000000000000000 --- a/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/ContentVec256L12_Onnx.py +++ /dev/null @@ -1,28 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import 
onnxruntime -import torch - -class ContentVec256L12_Onnx(SpeechEncoder): - def __init__(self,vec_path = "pretrain/vec-256-layer-12.onnx",device=None): - print("load model(s) from {}".format(vec_path)) - self.hidden_dim = 256 - if device is None: - self.dev = torch.device("cpu") - else: - self.dev = torch.device(device) - if device == 'cpu' or device == torch.device("cpu") or device is None: - providers = ['CPUExecutionProvider'] - elif device == 'cuda' or device == torch.device("cuda"): - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - feats = feats.unsqueeze(0).cpu().detach().numpy() - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input) - return torch.tensor(logits[0]).transpose(1, 2).to(self.dev) diff --git a/spaces/luost26/DiffAb/diffab/tools/eval/__main__.py b/spaces/luost26/DiffAb/diffab/tools/eval/__main__.py deleted file mode 100644 index cbcdeb82d00dcc46488d4ff37b67e21f342de368..0000000000000000000000000000000000000000 --- a/spaces/luost26/DiffAb/diffab/tools/eval/__main__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .run import main - -if __name__ == '__main__': - main() diff --git a/spaces/lwchen/CodeFormer/CodeFormer/basicsr/utils/matlab_functions.py b/spaces/lwchen/CodeFormer/CodeFormer/basicsr/utils/matlab_functions.py deleted file mode 100644 index c6ce1004a2c9f8521505c4b5889d3c24a909c70d..0000000000000000000000000000000000000000 --- a/spaces/lwchen/CodeFormer/CodeFormer/basicsr/utils/matlab_functions.py +++ /dev/null @@ -1,347 +0,0 @@ -import math -import numpy as np -import torch - - -def cubic(x): - """cubic function used for calculate_weights_indices.""" - absx = torch.abs(x) - absx2 = absx**2 - absx3 = absx**3 - return (1.5 * absx3 - 2.5 * absx2 + 1) * ( - (absx <= 1).type_as(absx)) + (-0.5 * absx3 + 2.5 * absx2 - 4 * absx + 2) * (((absx > 1) * - (absx <= 2)).type_as(absx)) - - -def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing): - """Calculate weights and indices, used for imresize function. - - Args: - in_length (int): Input length. - out_length (int): Output length. - scale (float): Scale factor. - kernel_width (int): Kernel width. - antialisaing (bool): Whether to apply anti-aliasing when downsampling. - """ - - if (scale < 1) and antialiasing: - # Use a modified kernel (larger kernel width) to simultaneously - # interpolate and antialias - kernel_width = kernel_width / scale - - # Output-space coordinates - x = torch.linspace(1, out_length, out_length) - - # Input-space coordinates. Calculate the inverse mapping such that 0.5 - # in output space maps to 0.5 in input space, and 0.5 + scale in output - # space maps to 1.5 in input space. - u = x / scale + 0.5 * (1 - 1 / scale) - - # What is the left-most pixel that can be involved in the computation? - left = torch.floor(u - kernel_width / 2) - - # What is the maximum number of pixels that can be involved in the - # computation? Note: it's OK to use an extra pixel here; if the - # corresponding weights are all zero, it will be eliminated at the end - # of this function. - p = math.ceil(kernel_width) + 2 - - # The indices of the input pixels involved in computing the k-th output - # pixel are in row k of the indices matrix. 
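-    # Worked example (editorial, illustrative; not part of the original file):
-    # downscaling with the bicubic kernel (kernel_width = 4) at scale = 0.5 with
-    # antialiasing widens the kernel to 4 / 0.5 = 8, so p = ceil(8) + 2 = 10 and
-    # each output pixel is a weighted sum over a window of 10 input pixels.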
- indices = left.view(out_length, 1).expand(out_length, p) + torch.linspace(0, p - 1, p).view(1, p).expand( - out_length, p) - - # The weights used to compute the k-th output pixel are in row k of the - # weights matrix. - distance_to_center = u.view(out_length, 1).expand(out_length, p) - indices - - # apply cubic kernel - if (scale < 1) and antialiasing: - weights = scale * cubic(distance_to_center * scale) - else: - weights = cubic(distance_to_center) - - # Normalize the weights matrix so that each row sums to 1. - weights_sum = torch.sum(weights, 1).view(out_length, 1) - weights = weights / weights_sum.expand(out_length, p) - - # If a column in weights is all zero, get rid of it. only consider the - # first and last column. - weights_zero_tmp = torch.sum((weights == 0), 0) - if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6): - indices = indices.narrow(1, 1, p - 2) - weights = weights.narrow(1, 1, p - 2) - if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6): - indices = indices.narrow(1, 0, p - 2) - weights = weights.narrow(1, 0, p - 2) - weights = weights.contiguous() - indices = indices.contiguous() - sym_len_s = -indices.min() + 1 - sym_len_e = indices.max() - in_length - indices = indices + sym_len_s - 1 - return weights, indices, int(sym_len_s), int(sym_len_e) - - -@torch.no_grad() -def imresize(img, scale, antialiasing=True): - """imresize function same as MATLAB. - - It now only supports bicubic. - The same scale applies for both height and width. - - Args: - img (Tensor | Numpy array): - Tensor: Input image with shape (c, h, w), [0, 1] range. - Numpy: Input image with shape (h, w, c), [0, 1] range. - scale (float): Scale factor. The same scale applies for both height - and width. - antialisaing (bool): Whether to apply anti-aliasing when downsampling. - Default: True. - - Returns: - Tensor: Output image with shape (c, h, w), [0, 1] range, w/o round. 
- """ - if type(img).__module__ == np.__name__: # numpy type - numpy_type = True - img = torch.from_numpy(img.transpose(2, 0, 1)).float() - else: - numpy_type = False - - in_c, in_h, in_w = img.size() - out_h, out_w = math.ceil(in_h * scale), math.ceil(in_w * scale) - kernel_width = 4 - kernel = 'cubic' - - # get weights and indices - weights_h, indices_h, sym_len_hs, sym_len_he = calculate_weights_indices(in_h, out_h, scale, kernel, kernel_width, - antialiasing) - weights_w, indices_w, sym_len_ws, sym_len_we = calculate_weights_indices(in_w, out_w, scale, kernel, kernel_width, - antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_c, in_h + sym_len_hs + sym_len_he, in_w) - img_aug.narrow(1, sym_len_hs, in_h).copy_(img) - - sym_patch = img[:, :sym_len_hs, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, 0, sym_len_hs).copy_(sym_patch_inv) - - sym_patch = img[:, -sym_len_he:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, sym_len_hs + in_h, sym_len_he).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(in_c, out_h, in_w) - kernel_width = weights_h.size(1) - for i in range(out_h): - idx = int(indices_h[i][0]) - for j in range(in_c): - out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_h[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(in_c, out_h, in_w + sym_len_ws + sym_len_we) - out_1_aug.narrow(2, sym_len_ws, in_w).copy_(out_1) - - sym_patch = out_1[:, :, :sym_len_ws] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, 0, sym_len_ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, :, -sym_len_we:] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, sym_len_ws + in_w, sym_len_we).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(in_c, out_h, out_w) - kernel_width = weights_w.size(1) - for i in range(out_w): - idx = int(indices_w[i][0]) - for j in range(in_c): - out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_w[i]) - - if numpy_type: - out_2 = out_2.numpy().transpose(1, 2, 0) - return out_2 - - -def rgb2ycbcr(img, y_only=False): - """Convert a RGB image to YCbCr image. - - This function produces the same results as Matlab's `rgb2ycbcr` function. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `RGB <-> YCrCb`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - y_only (bool): Whether to only return Y channel. Default: False. - - Returns: - ndarray: The converted YCbCr image. The output image has the same type - and range as input image. 
- """ - img_type = img.dtype - img = _convert_input_type_range(img) - if y_only: - out_img = np.dot(img, [65.481, 128.553, 24.966]) + 16.0 - else: - out_img = np.matmul( - img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], [24.966, 112.0, -18.214]]) + [16, 128, 128] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def bgr2ycbcr(img, y_only=False): - """Convert a BGR image to YCbCr image. - - The bgr version of rgb2ycbcr. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - y_only (bool): Whether to only return Y channel. Default: False. - - Returns: - ndarray: The converted YCbCr image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) - if y_only: - out_img = np.dot(img, [24.966, 128.553, 65.481]) + 16.0 - else: - out_img = np.matmul( - img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], [65.481, -37.797, 112.0]]) + [16, 128, 128] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def ycbcr2rgb(img): - """Convert a YCbCr image to RGB image. - - This function produces the same results as Matlab's ycbcr2rgb function. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `YCrCb <-> RGB`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - ndarray: The converted RGB image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) * 255 - out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836] # noqa: E126 - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def ycbcr2bgr(img): - """Convert a YCbCr image to BGR image. - - The bgr version of ycbcr2rgb. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `YCrCb <-> BGR`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - ndarray: The converted BGR image. The output image has the same type - and range as input image. 
- """ - img_type = img.dtype - img = _convert_input_type_range(img) * 255 - out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0.00791071, -0.00153632, 0], - [0, -0.00318811, 0.00625893]]) * 255.0 + [-276.836, 135.576, -222.921] # noqa: E126 - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def _convert_input_type_range(img): - """Convert the type and range of the input image. - - It converts the input image to np.float32 type and range of [0, 1]. - It is mainly used for pre-processing the input image in colorspace - convertion functions such as rgb2ycbcr and ycbcr2rgb. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - (ndarray): The converted image with type of np.float32 and range of - [0, 1]. - """ - img_type = img.dtype - img = img.astype(np.float32) - if img_type == np.float32: - pass - elif img_type == np.uint8: - img /= 255. - else: - raise TypeError('The img type should be np.float32 or np.uint8, ' f'but got {img_type}') - return img - - -def _convert_output_type_range(img, dst_type): - """Convert the type and range of the image according to dst_type. - - It converts the image to desired type and range. If `dst_type` is np.uint8, - images will be converted to np.uint8 type with range [0, 255]. If - `dst_type` is np.float32, it converts the image to np.float32 type with - range [0, 1]. - It is mainly used for post-processing images in colorspace convertion - functions such as rgb2ycbcr and ycbcr2rgb. - - Args: - img (ndarray): The image to be converted with np.float32 type and - range [0, 255]. - dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it - converts the image to np.uint8 type with range [0, 255]. If - dst_type is np.float32, it converts the image to np.float32 type - with range [0, 1]. - - Returns: - (ndarray): The converted image with desired type and range. - """ - if dst_type not in (np.uint8, np.float32): - raise TypeError('The dst_type should be np.float32 or np.uint8, ' f'but got {dst_type}') - if dst_type == np.uint8: - img = img.round() - else: - img /= 255. - return img.astype(dst_type) diff --git a/spaces/ma-xu/LIVE/thrust/thrust/detail/complex/clog.h b/spaces/ma-xu/LIVE/thrust/thrust/detail/complex/clog.h deleted file mode 100644 index 8d288df0240e7fc2562ad415ff4f4fa2de1048c2..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/detail/complex/clog.h +++ /dev/null @@ -1,212 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * Copyright 2013 Filipe RNC Maia - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*- - * Copyright (c) 2012 Stephen Montgomery-Smith - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. 
 - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. - */ - -/* adapted from FreeBSD's msun: */ - - -#pragma once - -#include <thrust/complex.h> -#include <thrust/detail/complex/math_private.h> - -namespace thrust{ -namespace detail{ -namespace complex{ - -using thrust::complex; - -/* round down to 18 = 54/3 bits */ -__host__ __device__ inline -double trim(double x){ - uint32_t hi; - get_high_word(hi, x); - insert_words(x, hi &0xfffffff8, 0); - return x; -} - - -__host__ __device__ inline -complex<double> clog(const complex<double>& z){ - - // Adapted from FreeBSD's msun - double x, y; - double ax, ay; - double x0, y0, x1, y1, x2, y2, t, hm1; - double val[12]; - int i, sorted; - const double e = 2.7182818284590452354; - - x = z.real(); - y = z.imag(); - - /* Handle NaNs using the general formula to mix them right. */ - if (x != x || y != y){ - return (complex<double>(std::log(norm(z)), std::atan2(y, x))); - } - - ax = std::abs(x); - ay = std::abs(y); - if (ax < ay) { - t = ax; - ax = ay; - ay = t; - } - - /* - * To avoid unnecessary overflow, if x and y are very large, divide x - * and y by M_E, and then add 1 to the logarithm. This depends on - * M_E being larger than sqrt(2). - * There is a potential loss of accuracy caused by dividing by M_E, - * but this case should happen extremely rarely. - */ - // if (ay > 5e307){ - // For high values of ay -> hypot(DBL_MAX, ay) = inf - // We expect that for values at or below ay = 5e307 this should not happen - if (ay > 5e307){ - return (complex<double>(std::log(hypot(x / e, y / e)) + 1.0, std::atan2(y, x))); - } - if (ax == 1.) { - if (ay < 1e-150){ - return (complex<double>((ay * 0.5) * ay, std::atan2(y, x))); - } - return (complex<double>(log1p(ay * ay) * 0.5, std::atan2(y, x))); - } - - /* - * Because atan2 and hypot conform to C99, this also covers all the - * edge cases when x or y are 0 or infinite. - */ - if (ax < 1e-50 || ay < 1e-50 || ax > 1e50 || ay > 1e50){ - return (complex<double>(std::log(hypot(x, y)), std::atan2(y, x))); - } - - /* - * From this point on, we don't need to worry about underflow or - * overflow in calculating ax*ax or ay*ay. - */ - - /* Some easy cases. */ - - if (ax >= 1.0){ - return (complex<double>(log1p((ax-1)*(ax+1) + ay*ay) * 0.5, atan2(y, x))); - } - - if (ax*ax + ay*ay <= 0.7){ - return (complex<double>(std::log(ax*ax + ay*ay) * 0.5, std::atan2(y, x))); - } - - /* - * Take extra care so that ULP of real part is small if hypot(x,y) is - * moderately close to 1. 
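 - * - * (Sketch of the idea used below: trim() rounds a double down to 18 - * significant bits, so the three-way splits ax = x0 + x1 + x2 and - * ay = y0 + y1 + y2 make every partial product exactly representable - * in a double (18 + 18 = 36 < 53 bits). Summing the sorted partial - * products into hm1 = ax*ax + ay*ay - 1 then loses very little - * precision, which is what keeps the ULP of the real part small.) 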
 - */ - - - x0 = trim(ax); - ax = ax-x0; - x1 = trim(ax); - x2 = ax-x1; - y0 = trim(ay); - ay = ay-y0; - y1 = trim(ay); - y2 = ay-y1; - - val[0] = x0*x0; - val[1] = y0*y0; - val[2] = 2*x0*x1; - val[3] = 2*y0*y1; - val[4] = x1*x1; - val[5] = y1*y1; - val[6] = 2*x0*x2; - val[7] = 2*y0*y2; - val[8] = 2*x1*x2; - val[9] = 2*y1*y2; - val[10] = x2*x2; - val[11] = y2*y2; - - /* Bubble sort. */ - - do { - sorted = 1; - for (i=0;i<11;i++) { - if (val[i] < val[i+1]) { - sorted = 0; - t = val[i]; - val[i] = val[i+1]; - val[i+1] = t; - } - } - } while (!sorted); - - hm1 = -1; - for (i=0;i<12;i++){ - hm1 += val[i]; - } - return (complex<double>(0.5 * log1p(hm1), atan2(y, x))); -} - -} // namespace complex - -} // namespace detail - -template <typename ValueType> -__host__ __device__ -inline complex<ValueType> log(const complex<ValueType>& z){ - return complex<ValueType>(std::log(thrust::abs(z)),thrust::arg(z)); } - -template <> -__host__ __device__ -inline complex<double> log(const complex<double>& z){ - return detail::complex::clog(z); -} - -template <typename ValueType> -__host__ __device__ -inline complex<ValueType> log10(const complex<ValueType>& z){ - // Using the explicit literal prevents compile time warnings in - // devices that don't support doubles - return thrust::log(z)/ValueType(2.30258509299404568402); -} - -} // namespace thrust - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/device_allocator.h b/spaces/ma-xu/LIVE/thrust/thrust/device_allocator.h deleted file mode 100644 index f5ff0d9654c997a8fcccb24db9707cd43cf18f17..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/device_allocator.h +++ /dev/null @@ -1,146 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file device_allocator.h - * \brief An allocator which creates new elements in device memory - */ - -#pragma once - -#include <thrust/detail/config.h> -#include <thrust/device_ptr.h> -#include <thrust/mr/allocator.h> -#include <thrust/memory/detail/device_system_resource.h> - -#include <limits> -#include <utility> - -namespace thrust -{ - -/** \addtogroup memory_resources Memory Resources - * \ingroup memory_management_classes - * \{ - */ - -/*! Memory resource adaptor that turns any memory resource that returns a fancy - * pointer with the same tag as \p device_ptr, and adapts it to a resource that - * returns a \p device_ptr. - */ -template <typename Upstream> -class device_ptr_memory_resource THRUST_FINAL - : public thrust::mr::memory_resource< - device_ptr<void> - > -{ - typedef typename Upstream::pointer upstream_ptr; - -public: - /*! Initialize the adaptor with the global instance of the upstream resource. Obtains - * the global instance by calling \p get_global_resource. - */ - __host__ - device_ptr_memory_resource() : m_upstream(mr::get_global_resource<Upstream>()) - { - } - - /*! Initialize the adaptor with an upstream resource. - * - * \param upstream the upstream memory resource to adapt. 
 - */ - __host__ - device_ptr_memory_resource(Upstream * upstream) : m_upstream(upstream) - { - } - - THRUST_NODISCARD __host__ - virtual pointer do_allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE - { - return pointer(m_upstream->do_allocate(bytes, alignment).get()); - } - - __host__ - virtual void do_deallocate(pointer p, std::size_t bytes, std::size_t alignment) THRUST_OVERRIDE - { - m_upstream->do_deallocate(upstream_ptr(p.get()), bytes, alignment); - } - -private: - Upstream * m_upstream; }; - -/*! \} - */ - -/*! \addtogroup memory_management Memory Management - * \addtogroup memory_management_classes Memory Management Classes - * \ingroup memory_management - * \{ - */ -template<typename T> -class device_allocator - : public thrust::mr::stateless_resource_allocator< - T, - device_ptr_memory_resource<device_memory_resource> - > -{ - typedef thrust::mr::stateless_resource_allocator< - T, - device_ptr_memory_resource<device_memory_resource> - > base; - -public: - /*! The \p rebind metafunction provides the type of a \p device_allocator - * instantiated with another type. - * - * \tparam U the other type to use for instantiation. - */ - template<typename U> - struct rebind - { - /*! The typedef \p other gives the type of the rebound \p device_allocator. - */ - typedef device_allocator<U> other; - }; - - /*! Default constructor has no effect. */ - __host__ - device_allocator() {} - - /*! Copy constructor has no effect. */ - __host__ - device_allocator(const device_allocator& other) : base(other) {} - - /*! Constructor from other \p device_allocator has no effect. */ - template<typename U> - __host__ - device_allocator(const device_allocator<U>& other) : base(other) {} - -#if THRUST_CPP_DIALECT >= 2011 - device_allocator & operator=(const device_allocator &) = default; -#endif - - /*! Destructor has no effect. */ - __host__ - ~device_allocator() {} -}; - -/*! \} - */ - -} // end thrust - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/mr/disjoint_pool.h b/spaces/ma-xu/LIVE/thrust/thrust/mr/disjoint_pool.h deleted file mode 100644 index 898e499c807dc48a35c7dafe3da00d2885b62396..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/mr/disjoint_pool.h +++ /dev/null @@ -1,489 +0,0 @@ -/* - * Copyright 2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file disjoint_pool.h - * \brief A caching and pooling memory resource adaptor which uses separate upstream resources for memory allocation - * and bookkeeping. - */ - -#pragma once - -#include <thrust/detail/config.h> - -#include <thrust/mr/memory_resource.h> -#include <thrust/mr/allocator.h> -#include <thrust/mr/pool_options.h> - -#include <thrust/host_vector.h> -#include <thrust/binary_search.h> -#include <thrust/find.h> - -#include <cassert> - -namespace thrust -{ -namespace mr -{ - -/** \addtogroup memory_resources Memory Resources - * \ingroup memory_management_classes - * \{ - */ - -/*! A memory resource adaptor allowing for pooling and caching allocations from \p Upstream, using \p Bookkeeper for - * management of that cached and pooled memory, allowing it to cache portions of memory inaccessible from the host. 
 - * - * On a typical memory resource, calls to \p allocate and \p deallocate actually allocate and deallocate memory. Pooling - * memory resources only allocate and deallocate memory from an external resource (the upstream memory resource) when - * there's no suitable memory currently cached; otherwise, they use memory they have acquired beforehand, to make - * memory allocation faster and more efficient. - * - * The disjoint version of the pool resources uses a separate upstream memory resource, \p Bookkeeper, to allocate memory - * necessary to manage the cached memory. There may be many reasons to do that; the canonical one is that \p Upstream - * allocates memory that is inaccessible to the code of the pool resource, which means that it cannot embed the necessary - * information in memory obtained from \p Upstream; for instance, \p Upstream can be a CUDA non-managed memory - * resource, or a CUDA managed memory resource whose memory we would prefer not to migrate back and forth between - * host and device when executing bookkeeping code. - * - * This is not the only case where it makes sense to use a disjoint pool resource, though. In a multi-core environment - * it may be beneficial to avoid stealing cache lines from other cores by writing over bookkeeping information - * embedded in an allocated block of memory. In such a case, one can imagine wanting to use a disjoint pool where - * both the upstream and the bookkeeper are of the same type, to allocate memory consistently, but separately for - * those two purposes. - * - * \tparam Upstream the type of memory resources that will be used for allocating memory blocks to be handed off to the user - * \tparam Bookkeeper the type of memory resources that will be used for allocating bookkeeping memory - */ -template<typename Upstream, typename Bookkeeper> -class disjoint_unsynchronized_pool_resource THRUST_FINAL - : public memory_resource<typename Upstream::pointer>, - private validator2<Upstream, Bookkeeper> -{ -public: - /*! Get the default options for a disjoint pool. These are meant to be a sensible set of values for many use cases, - * and as such, may be tuned in the future. This function is exposed so that creating a set of options that are - * just a slight departure from the defaults is easy. - */ - static pool_options get_default_options() - { - pool_options ret; - - ret.min_blocks_per_chunk = 16; - ret.min_bytes_per_chunk = 1024; - ret.max_blocks_per_chunk = static_cast<std::size_t>(1) << 20; - ret.max_bytes_per_chunk = static_cast<std::size_t>(1) << 30; - - ret.smallest_block_size = THRUST_MR_DEFAULT_ALIGNMENT; - ret.largest_block_size = static_cast<std::size_t>(1) << 20; - - ret.alignment = THRUST_MR_DEFAULT_ALIGNMENT; - - ret.cache_oversized = true; - - ret.cached_size_cutoff_factor = 16; - ret.cached_alignment_cutoff_factor = 16; - - return ret; - } - - /*! Constructor. 
- * - * \param upstream the upstream memory resource for allocations - * \param bookkeeper the upstream memory resource for bookkeeping - * \param options pool options to use - */ - disjoint_unsynchronized_pool_resource(Upstream * upstream, Bookkeeper * bookkeeper, - pool_options options = get_default_options()) - : m_upstream(upstream), - m_bookkeeper(bookkeeper), - m_options(options), - m_smallest_block_log2(detail::log2_ri(m_options.smallest_block_size)), - m_pools(m_bookkeeper), - m_allocated(m_bookkeeper), - m_cached_oversized(m_bookkeeper), - m_oversized(m_bookkeeper) - { - assert(m_options.validate()); - - pointer_vector free(m_bookkeeper); - pool p(free); - m_pools.resize(detail::log2_ri(m_options.largest_block_size) - m_smallest_block_log2 + 1, p); - } - - // TODO: C++11: use delegating constructors - - /*! Constructor. Upstream and bookkeeping resources are obtained by calling \p get_global_resource for their types. - * - * \param options pool options to use - */ - disjoint_unsynchronized_pool_resource(pool_options options = get_default_options()) - : m_upstream(get_global_resource()), - m_bookkeeper(get_global_resource()), - m_options(options), - m_smallest_block_log2(detail::log2_ri(m_options.smallest_block_size)), - m_pools(m_bookkeeper), - m_allocated(m_bookkeeper), - m_cached_oversized(m_bookkeeper), - m_oversized(m_bookkeeper) - { - assert(m_options.validate()); - - pointer_vector free(m_bookkeeper); - pool p(free); - m_pools.resize(detail::log2_ri(m_options.largest_block_size) - m_smallest_block_log2 + 1, p); - } - - /*! Destructor. Releases all held memory to upstream. - */ - ~disjoint_unsynchronized_pool_resource() - { - release(); - } - -private: - typedef typename Upstream::pointer void_ptr; - typedef typename thrust::detail::pointer_traits::template rebind::other char_ptr; - - struct chunk_descriptor - { - std::size_t size; - void_ptr pointer; - }; - - typedef thrust::host_vector< - chunk_descriptor, - allocator - > chunk_vector; - - struct oversized_block_descriptor - { - std::size_t size; - std::size_t alignment; - void_ptr pointer; - - __host__ __device__ - bool operator==(const oversized_block_descriptor & other) const - { - return size == other.size && alignment == other.alignment && pointer == other.pointer; - } - - __host__ __device__ - bool operator<(const oversized_block_descriptor & other) const - { - return size < other.size || (size == other.size && alignment < other.alignment); - } - }; - - struct equal_pointers - { - public: - __host__ __device__ - equal_pointers(void_ptr p) : p(p) - { - } - - __host__ __device__ - bool operator()(const oversized_block_descriptor & desc) const - { - return desc.pointer == p; - } - - private: - void_ptr p; - }; - - struct matching_alignment - { - public: - __host__ __device__ - matching_alignment(std::size_t requested) : requested(requested) - { - } - - __host__ __device__ - bool operator()(const oversized_block_descriptor & desc) const - { - return desc.alignment >= requested; - } - - private: - std::size_t requested; - }; - - typedef thrust::host_vector< - oversized_block_descriptor, - allocator - > oversized_block_vector; - - typedef thrust::host_vector< - void_ptr, - allocator - > pointer_vector; - - struct pool - { - __host__ - pool(const pointer_vector & free) - : free_blocks(free), - previous_allocated_count(0) - { - } - - __host__ - pool(const pool & other) - : free_blocks(other.free_blocks), - previous_allocated_count(other.previous_allocated_count) - { - } - -#if THRUST_CPP_DIALECT >= 2011 - pool & 
operator=(const pool &) = default; -#endif - - __host__ - ~pool() {} - - pointer_vector free_blocks; - std::size_t previous_allocated_count; - }; - - typedef thrust::host_vector< - pool, - allocator - > pool_vector; - - Upstream * m_upstream; - Bookkeeper * m_bookkeeper; - - pool_options m_options; - std::size_t m_smallest_block_log2; - - // buckets containing free lists for each pooled size - pool_vector m_pools; - // list of all allocations from upstream for the above - chunk_vector m_allocated; - // list of all cached oversized/overaligned blocks that have been returned to the pool to cache - oversized_block_vector m_cached_oversized; - // list of all oversized/overaligned allocations from upstream - oversized_block_vector m_oversized; - -public: - /*! Releases all held memory to upstream. - */ - void release() - { - // reset the buckets - for (std::size_t i = 0; i < m_pools.size(); ++i) - { - m_pools[i].free_blocks.clear(); - m_pools[i].previous_allocated_count = 0; - } - - // deallocate memory allocated for the buckets - for (std::size_t i = 0; i < m_allocated.size(); ++i) - { - m_upstream->do_deallocate( - m_allocated[i].pointer, - m_allocated[i].size, - m_options.alignment); - } - - // deallocate cached oversized/overaligned memory - for (std::size_t i = 0; i < m_oversized.size(); ++i) - { - m_upstream->do_deallocate( - m_oversized[i].pointer, - m_oversized[i].size, - m_oversized[i].alignment); - } - - m_allocated.clear(); - m_oversized.clear(); - m_cached_oversized.clear(); - } - - THRUST_NODISCARD virtual void_ptr do_allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE - { - bytes = (std::max)(bytes, m_options.smallest_block_size); - assert(detail::is_power_of_2(alignment)); - - // an oversized and/or overaligned allocation requested; needs to be allocated separately - if (bytes > m_options.largest_block_size || alignment > m_options.alignment) - { - oversized_block_descriptor oversized; - oversized.size = bytes; - oversized.alignment = alignment; - - if (m_options.cache_oversized && !m_cached_oversized.empty()) - { - typename oversized_block_vector::iterator it = thrust::lower_bound( - thrust::seq, - m_cached_oversized.begin(), - m_cached_oversized.end(), - oversized); - - // if the size is bigger than the requested size by a factor - // bigger than or equal to the specified cutoff for size, - // allocate a new block - if (it != m_cached_oversized.end()) - { - std::size_t size_factor = (*it).size / bytes; - if (size_factor >= m_options.cached_size_cutoff_factor) - { - it = m_cached_oversized.end(); - } - } - - if (it != m_cached_oversized.end() && (*it).alignment < alignment) - { - it = find_if(it + 1, m_cached_oversized.end(), matching_alignment(alignment)); - } - - // if the alignment is bigger than the requested one by a factor - // bigger than or equal to the specified cutoff for alignment, - // allocate a new block - if (it != m_cached_oversized.end()) - { - std::size_t alignment_factor = (*it).alignment / alignment; - if (alignment_factor >= m_options.cached_alignment_cutoff_factor) - { - it = m_cached_oversized.end(); - } - } - - if (it != m_cached_oversized.end()) - { - oversized.pointer = (*it).pointer; - m_cached_oversized.erase(it); - return oversized.pointer; - } - } - - // no fitting cached block found; allocate a new one that's just up to the specs - oversized.pointer = m_upstream->do_allocate(bytes, alignment); - m_oversized.push_back(oversized); - - return oversized.pointer; - } - - // the request is NOT for oversized 
 and/or overaligned memory - // allocate a block from an appropriate bucket - std::size_t bytes_log2 = thrust::detail::log2_ri(bytes); - std::size_t bucket_idx = bytes_log2 - m_smallest_block_log2; - pool & bucket = m_pools[bucket_idx]; - - // if the free list of the bucket has no elements, allocate a new chunk - // and split it into blocks pushed to the free list - if (bucket.free_blocks.empty()) - { - std::size_t bucket_size = static_cast<std::size_t>(1) << bytes_log2; - - std::size_t n = bucket.previous_allocated_count; - if (n == 0) - { - n = m_options.min_blocks_per_chunk; - if (n < (m_options.min_bytes_per_chunk >> bytes_log2)) - { - n = m_options.min_bytes_per_chunk >> bytes_log2; - } - } - else - { - n = n * 3 / 2; - if (n > (m_options.max_bytes_per_chunk >> bytes_log2)) - { - n = m_options.max_bytes_per_chunk >> bytes_log2; - } - if (n > m_options.max_blocks_per_chunk) - { - n = m_options.max_blocks_per_chunk; - } - } - - bytes = n << bytes_log2; - - assert(n >= m_options.min_blocks_per_chunk); - assert(n <= m_options.max_blocks_per_chunk); - assert(bytes >= m_options.min_bytes_per_chunk); - assert(bytes <= m_options.max_bytes_per_chunk); - - chunk_descriptor allocated; - allocated.size = bytes; - allocated.pointer = m_upstream->do_allocate(bytes, m_options.alignment); - m_allocated.push_back(allocated); - bucket.previous_allocated_count = n; - - for (std::size_t i = 0; i < n; ++i) - { - bucket.free_blocks.push_back( - static_cast<void_ptr>( - static_cast<char_ptr>(allocated.pointer) + i * bucket_size - ) - ); - } - } - - // allocate a block from the back of the bucket's free list - void_ptr ret = bucket.free_blocks.back(); - bucket.free_blocks.pop_back(); - return ret; - } - - virtual void do_deallocate(void_ptr p, std::size_t n, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE - { - n = (std::max)(n, m_options.smallest_block_size); - assert(detail::is_power_of_2(alignment)); - - // verify that the pointer is at least as aligned as claimed - assert(reinterpret_cast<std::size_t>(detail::pointer_traits<void_ptr>::get(p)) % alignment == 0); - - // the deallocated block is oversized and/or overaligned - if (n > m_options.largest_block_size || alignment > m_options.alignment) - { - typename oversized_block_vector::iterator it = find_if(m_oversized.begin(), m_oversized.end(), equal_pointers(p)); - assert(it != m_oversized.end()); - - oversized_block_descriptor oversized = *it; - - if (m_options.cache_oversized) - { - typename oversized_block_vector::iterator position = lower_bound(m_cached_oversized.begin(), m_cached_oversized.end(), oversized); - m_cached_oversized.insert(position, oversized); - return; - } - - m_oversized.erase(it); - - m_upstream->do_deallocate(p, oversized.size, oversized.alignment); - - return; - } - - // push the block to the back of the appropriate bucket's free list - std::size_t n_log2 = thrust::detail::log2_ri(n); - std::size_t bucket_idx = n_log2 - m_smallest_block_log2; - pool & bucket = m_pools[bucket_idx]; - - bucket.free_blocks.push_back(p); - } -}; - -/*! \} 
 - */ - -} // end mr -} // end thrust - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/random/uniform_int_distribution.h b/spaces/ma-xu/LIVE/thrust/thrust/random/uniform_int_distribution.h deleted file mode 100644 index 42d745781964e3b4b85add7530fbaa2029635511..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/random/uniform_int_distribution.h +++ /dev/null @@ -1,276 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file uniform_int_distribution.h - * \brief A uniform distribution of integer-valued numbers - */ - -#pragma once - -#include <thrust/detail/config.h> -#include <thrust/pair.h> -#include <thrust/random/detail/random_core_access.h> -#include <thrust/detail/integer_traits.h> -#include <iostream> - -namespace thrust -{ - -namespace random -{ - -/*! \addtogroup random_number_distributions Random Number Distributions Class Templates - * \ingroup random - * \{ - */ - -/*! \class uniform_int_distribution - * \brief A \p uniform_int_distribution random number distribution produces signed or unsigned integer - * uniform random numbers from a given range. - * - * \tparam IntType The type of integer to produce. - * - * The following code snippet demonstrates examples of using a \p uniform_int_distribution with a - * random number engine to produce random integers drawn from a given range: - * - * \code - * #include <thrust/random/linear_congruential_engine.h> - * #include <thrust/random/uniform_int_distribution.h> - * - * int main(void) - * { - * // create a minstd_rand object to act as our source of randomness - * thrust::minstd_rand rng; - * - * // create a uniform_int_distribution to produce ints from [-7,13] - * thrust::uniform_int_distribution<int> dist(-7,13); - * - * // write a random number from the range [-7,13] to standard output - * std::cout << dist(rng) << std::endl; - * - * // write the range of the distribution, just in case we forgot - * std::cout << dist.min() << std::endl; - * - * // -7 is printed - * - * std::cout << dist.max() << std::endl; - * - * // 13 is printed - * - * // write the parameters of the distribution (which happen to be the bounds) to standard output - * std::cout << dist.a() << std::endl; - * - * // -7 is printed - * - * std::cout << dist.b() << std::endl; - * - * // 13 is printed - * - * return 0; - * } - * \endcode - */ -template<typename IntType = int> - class uniform_int_distribution -{ - public: - // types - - /*! \typedef result_type - * \brief The type of the integer produced by this \p uniform_int_distribution. - */ - typedef IntType result_type; - - /*! \typedef param_type - * \brief The type of the object encapsulating this \p uniform_int_distribution's parameters. - */ - typedef thrust::pair<IntType, IntType> param_type; - - // constructors and reset functions - - /*! This constructor creates a new \p uniform_int_distribution from two values defining the - * range of the distribution. - * - * \param a The smallest integer to potentially produce. Defaults to \c 0. - * \param b The largest integer to potentially produce. Defaults to the largest representable integer in - * the platform. 
 - */ - __host__ __device__ - explicit uniform_int_distribution(IntType a = 0, IntType b = thrust::detail::integer_traits<IntType>::const_max); - - /*! This constructor creates a new \p uniform_int_distribution from a \p param_type object - * encapsulating the range of the distribution. - * - * \param parm A \p param_type object encapsulating the parameters (i.e., the range) of the distribution. - */ - __host__ __device__ - explicit uniform_int_distribution(const param_type &parm); - - /*! This does nothing. It is included to conform to the requirements of the RandomDistribution concept. - */ - __host__ __device__ - void reset(void); - - // generating functions - - /*! This method produces a new uniform random integer drawn from this \p uniform_int_distribution's - * range using a \p UniformRandomNumberGenerator as a source of randomness. - * - * \param urng The \p UniformRandomNumberGenerator to use as a source of randomness. - */ - template<typename UniformRandomNumberGenerator> - __host__ __device__ - result_type operator()(UniformRandomNumberGenerator &urng); - - /*! This method produces a new uniform random integer as if by creating a new \p uniform_int_distribution - * from the given \p param_type object, and calling its operator() method with the given - * \p UniformRandomNumberGenerator as a source of randomness. - * - * \param urng The \p UniformRandomNumberGenerator to use as a source of randomness. - * \param parm A \p param_type object encapsulating the parameters of the \p uniform_int_distribution - * to draw from. - */ - template<typename UniformRandomNumberGenerator> - __host__ __device__ - result_type operator()(UniformRandomNumberGenerator &urng, const param_type &parm); - - // property functions - - /*! This method returns the value of the parameter with which this \p uniform_int_distribution - * was constructed. - * - * \return The lower bound of this \p uniform_int_distribution's range. - */ - __host__ __device__ - result_type a(void) const; - - /*! This method returns the value of the parameter with which this \p uniform_int_distribution - * was constructed. - * - * \return The upper bound of this \p uniform_int_distribution's range. - */ - __host__ __device__ - result_type b(void) const; - - /*! This method returns a \p param_type object encapsulating the parameters with which this - * \p uniform_int_distribution was constructed. - * - * \return A \p param_type object encapsulating the range of this \p uniform_int_distribution. - */ - __host__ __device__ - param_type param(void) const; - - /*! This method changes the parameters of this \p uniform_int_distribution using the values encapsulated - * in a given \p param_type object. - * - * \param parm A \p param_type object encapsulating the new range of this \p uniform_int_distribution. - */ - __host__ __device__ - void param(const param_type &parm); - - /*! This method returns the smallest integer this \p uniform_int_distribution can potentially produce. - * - * \return The lower bound of this \p uniform_int_distribution's range. - */ - __host__ __device__ - result_type min THRUST_PREVENT_MACRO_SUBSTITUTION (void) const; - - /*! This method returns the largest integer this \p uniform_int_distribution can potentially produce. - * - * \return The upper bound of this \p uniform_int_distribution's range. - */ - __host__ __device__ - result_type max THRUST_PREVENT_MACRO_SUBSTITUTION (void) const; - - /*! \cond 
 - */ - private: - param_type m_param; - - friend struct thrust::random::detail::random_core_access; - - __host__ __device__ - bool equal(const uniform_int_distribution &rhs) const; - - template<typename CharT, typename Traits> - std::basic_ostream<CharT,Traits>& stream_out(std::basic_ostream<CharT,Traits> &os) const; - - template<typename CharT, typename Traits> - std::basic_istream<CharT,Traits>& stream_in(std::basic_istream<CharT,Traits> &is); - /*! \endcond - */ -}; // end uniform_int_distribution - - -/*! This function checks two \p uniform_int_distributions for equality. - * \param lhs The first \p uniform_int_distribution to test. - * \param rhs The second \p uniform_int_distribution to test. - * \return \c true if \p lhs is equal to \p rhs; \c false, otherwise. - */ -template<typename IntType> -__host__ __device__ -bool operator==(const uniform_int_distribution<IntType> &lhs, - const uniform_int_distribution<IntType> &rhs); - - -/*! This function checks two \p uniform_int_distributions for inequality. - * \param lhs The first \p uniform_int_distribution to test. - * \param rhs The second \p uniform_int_distribution to test. - * \return \c true if \p lhs is not equal to \p rhs; \c false, otherwise. - */ -template<typename IntType> -__host__ __device__ -bool operator!=(const uniform_int_distribution<IntType> &lhs, - const uniform_int_distribution<IntType> &rhs); - - -/*! This function streams a uniform_int_distribution to a \p std::basic_ostream. - * \param os The \p basic_ostream to stream out to. - * \param d The \p uniform_int_distribution to stream out. - * \return \p os - */ -template<typename IntType, typename CharT, typename Traits> -std::basic_ostream<CharT,Traits>& -operator<<(std::basic_ostream<CharT,Traits> &os, - const uniform_int_distribution<IntType> &d); - - -/*! This function streams a uniform_int_distribution in from a std::basic_istream. - * \param is The \p basic_istream to stream from. - * \param d The \p uniform_int_distribution to stream in. - * \return \p is - */ -template<typename IntType, typename CharT, typename Traits> -std::basic_istream<CharT,Traits>& -operator>>(std::basic_istream<CharT,Traits> &is, - uniform_int_distribution<IntType> &d); - - -/*! \} // end random_number_distributions - */ - - -} // end random - -using random::uniform_int_distribution; - -} // end thrust - -#include <thrust/random/detail/uniform_int_distribution.inl> - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/default_decomposition.h b/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/default_decomposition.h deleted file mode 100644 index cb4b03c719b7c89e2b4561066394fc3874971638..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/default_decomposition.h +++ /dev/null @@ -1,45 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file default_decomposition.h - * \brief Return a decomposition that is appropriate for the OpenMP backend. 
 - */ - -#pragma once - -#include <thrust/detail/config.h> -#include <thrust/system/detail/internal/decompose.h> - -namespace thrust -{ -namespace system -{ -namespace omp -{ -namespace detail -{ - -template <typename IndexType> -thrust::system::detail::internal::uniform_decomposition<IndexType> default_decomposition(IndexType n); - -} // end namespace detail -} // end namespace omp -} // end namespace system -} // end namespace thrust - -#include <thrust/system/omp/detail/default_decomposition.inl> - diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/data/data_util.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/data/data_util.py deleted file mode 100644 index 328c3cb4b56160da12c12acdd7f0c5f31d11b24f..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/data/data_util.py +++ /dev/null @@ -1,313 +0,0 @@ -import cv2 -import numpy as np -import torch -from os import path as osp -from torch.nn import functional as F - -from basicsr.data.transforms import mod_crop -from basicsr.utils import img2tensor, scandir - - -def read_img_seq(path, require_mod_crop=False, scale=1, return_imgname=False): - """Read a sequence of images from a given folder path. - - Args: - path (list[str] | str): List of image paths or image folder path. - require_mod_crop (bool): Require mod crop for each image. - Default: False. - scale (int): Scale factor for mod_crop. Default: 1. - return_imgname (bool): Whether to return image names. Default: False. - - Returns: - Tensor: size (t, c, h, w), RGB, [0, 1]. - list[str]: Returned image name list. - """ - if isinstance(path, list): - img_paths = path - else: - img_paths = sorted(list(scandir(path, full_path=True))) - imgs = [cv2.imread(v).astype(np.float32) / 255. for v in img_paths] - - if require_mod_crop: - imgs = [mod_crop(img, scale) for img in imgs] - imgs = img2tensor(imgs, bgr2rgb=True, float32=True) - imgs = torch.stack(imgs, dim=0) - - if return_imgname: - imgnames = [osp.splitext(osp.basename(path))[0] for path in img_paths] - return imgs, imgnames - else: - return imgs - - -def generate_frame_indices(crt_idx, max_frame_num, num_frames, padding='reflection'): - """Generate an index list for reading `num_frames` frames from a sequence - of images. - - Args: - crt_idx (int): Current center index. - max_frame_num (int): Max number of frames in the sequence (counting from 1). - num_frames (int): Number of frames to read. - padding (str): Padding mode, one of - 'replicate' | 'reflection' | 'reflection_circle' | 'circle' - Examples: current_idx = 0, num_frames = 5 - The generated frame indices under different padding mode: - replicate: [0, 0, 0, 1, 2] - reflection: [2, 1, 0, 1, 2] - reflection_circle: [4, 3, 0, 1, 2] - circle: [3, 4, 0, 1, 2] - - Returns: - list[int]: A list of indices. - """ - assert num_frames % 2 == 1, 'num_frames should be an odd number.' - assert padding in ('replicate', 'reflection', 'reflection_circle', 'circle'), f'Wrong padding mode: {padding}.' 
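 - # Note: with num_frames = 2 * num_pad + 1, the window below spans - # [crt_idx - num_pad, crt_idx + num_pad]; indices falling outside - # [0, max_frame_num] are remapped according to the chosen padding mode. 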
 - - max_frame_num = max_frame_num - 1 # start from 0 - num_pad = num_frames // 2 - - indices = [] - for i in range(crt_idx - num_pad, crt_idx + num_pad + 1): - if i < 0: - if padding == 'replicate': - pad_idx = 0 - elif padding == 'reflection': - pad_idx = -i - elif padding == 'reflection_circle': - pad_idx = crt_idx + num_pad - i - else: - pad_idx = num_frames + i - elif i > max_frame_num: - if padding == 'replicate': - pad_idx = max_frame_num - elif padding == 'reflection': - pad_idx = max_frame_num * 2 - i - elif padding == 'reflection_circle': - pad_idx = (crt_idx - num_pad) - (i - max_frame_num) - else: - pad_idx = i - num_frames - else: - pad_idx = i - indices.append(pad_idx) - return indices - - -def paired_paths_from_lmdb(folders, keys): - """Generate paired paths from lmdb files. - - Contents of lmdb. Taking the `lq.lmdb` for example, the file structure is: - - lq.lmdb - ├── data.mdb - ├── lock.mdb - ├── meta_info.txt - - The data.mdb and lock.mdb are standard lmdb files and you can refer to - https://lmdb.readthedocs.io/en/release/ for more details. - - The meta_info.txt is a specified txt file to record the meta information - of our datasets. It will be automatically created when preparing - datasets by our provided dataset tools. - Each line in the txt file records - 1) image name (with extension), - 2) image shape, - 3) compression level, separated by a white space. - Example: `baboon.png (120,125,3) 1` - - We use the image name without extension as the lmdb key. - Note that we use the same key for the corresponding lq and gt images. - - Args: - folders (list[str]): A list of folder path. The order of the list should - be [input_folder, gt_folder]. - keys (list[str]): A list of keys identifying folders. The order should - be consistent with folders, e.g., ['lq', 'gt']. - Note that this key is different from lmdb keys. - - Returns: - list[dict]: Returned path list. - """ - assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. ' - f'But got {len(folders)}') - assert len(keys) == 2, f'The len of keys should be 2 with [input_key, gt_key]. But got {len(keys)}' - input_folder, gt_folder = folders - input_key, gt_key = keys - - if not (input_folder.endswith('.lmdb') and gt_folder.endswith('.lmdb')): - raise ValueError(f'{input_key} folder and {gt_key} folder should both be in lmdb ' - f'format. But received {input_key}: {input_folder}; ' - f'{gt_key}: {gt_folder}') - # ensure that the two meta_info files are the same - with open(osp.join(input_folder, 'meta_info.txt')) as fin: - input_lmdb_keys = [line.split('.')[0] for line in fin] - with open(osp.join(gt_folder, 'meta_info.txt')) as fin: - gt_lmdb_keys = [line.split('.')[0] for line in fin] - if set(input_lmdb_keys) != set(gt_lmdb_keys): - raise ValueError(f'Keys in {input_key}_folder and {gt_key}_folder are different.') - else: - paths = [] - for lmdb_key in sorted(input_lmdb_keys): - paths.append(dict([(f'{input_key}_path', lmdb_key), (f'{gt_key}_path', lmdb_key)])) - return paths - - -def paired_paths_from_meta_info_file(folders, keys, meta_info_file, filename_tmpl): - """Generate paired paths from a meta information file. - - Each line in the meta information file contains the image names and - image shape (usually for gt), separated by a white space. - - Example of a meta information file: - ``` - 0001_s001.png (480,480,3) - 0001_s002.png (480,480,3) - ``` - - Args: - folders (list[str]): A list of folder path. The order of the list should - be [input_folder, gt_folder]. - keys (list[str]): A list of keys identifying folders. The order should - be consistent with folders, e.g., ['lq', 'gt']. - meta_info_file (str): Path to the meta information file. - filename_tmpl (str): Template for each filename. Note that the - template excludes the file extension. Usually the filename_tmpl is - for files in the input folder. - - Returns: - list[dict]: Returned path list. - """ - assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. ' - f'But got {len(folders)}') - assert len(keys) == 2, f'The len of keys should be 2 with [input_key, gt_key]. But got {len(keys)}' - input_folder, gt_folder = folders - input_key, gt_key = keys - - with open(meta_info_file, 'r') as fin: - gt_names = [line.strip().split(' ')[0] for line in fin] - - paths = [] - for gt_name in gt_names: - basename, ext = osp.splitext(osp.basename(gt_name)) - input_name = f'{filename_tmpl.format(basename)}{ext}' - input_path = osp.join(input_folder, input_name) - gt_path = osp.join(gt_folder, gt_name) - paths.append(dict([(f'{input_key}_path', input_path), (f'{gt_key}_path', gt_path)])) - return paths - - -def paired_paths_from_folder(folders, keys, filename_tmpl): - """Generate paired paths from folders. - - Args: - folders (list[str]): A list of folder path. The order of the list should - be [input_folder, gt_folder]. - keys (list[str]): A list of keys identifying folders. The order should - be consistent with folders, e.g., ['lq', 'gt']. - filename_tmpl (str): Template for each filename. Note that the - template excludes the file extension. Usually the filename_tmpl is - for files in the input folder. - - Returns: - list[dict]: Returned path list. - """ - assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. ' - f'But got {len(folders)}') - assert len(keys) == 2, f'The len of keys should be 2 with [input_key, gt_key]. But got {len(keys)}' - input_folder, gt_folder = folders - input_key, gt_key = keys - - input_paths = list(scandir(input_folder)) - gt_paths = list(scandir(gt_folder)) - assert len(input_paths) == len(gt_paths), (f'{input_key} and {gt_key} datasets have different number of images: ' - f'{len(input_paths)}, {len(gt_paths)}.') - paths = [] - for gt_path in gt_paths: - basename, ext = osp.splitext(osp.basename(gt_path)) - input_name = f'{filename_tmpl.format(basename)}{ext}' - input_path = osp.join(input_folder, input_name) - assert input_name in input_paths, f'{input_name} is not in {input_key}_paths.' - gt_path = osp.join(gt_folder, gt_path) - paths.append(dict([(f'{input_key}_path', input_path), (f'{gt_key}_path', gt_path)])) - return paths - - -def paths_from_folder(folder): - """Generate paths from folder. - - Args: - folder (str): Folder path. - - Returns: - list[str]: Returned path list. - """ - - paths = list(scandir(folder)) - paths = [osp.join(folder, path) for path in paths] - return paths - - -def paths_from_lmdb(folder): - """Generate paths from lmdb. - - Args: - folder (str): Folder path. - - Returns: - list[str]: Returned path list. - """ - if not folder.endswith('.lmdb'): - raise ValueError(f'Folder {folder} should be in lmdb format.') - with open(osp.join(folder, 'meta_info.txt')) as fin: - paths = [line.split('.')[0] for line in fin] - return paths - - -def generate_gaussian_kernel(kernel_size=13, sigma=1.6): - """Generate Gaussian kernel used in `duf_downsample`. - - Args: - kernel_size (int): Kernel size. Default: 13. - sigma (float): Sigma of the Gaussian kernel. Default: 1.6. 
 - - Returns: - np.array: The Gaussian kernel. - """ - from scipy.ndimage import filters as filters - kernel = np.zeros((kernel_size, kernel_size)) - # set element at the middle to one, a dirac delta - kernel[kernel_size // 2, kernel_size // 2] = 1 - # gaussian-smooth the dirac, resulting in a gaussian filter - return filters.gaussian_filter(kernel, sigma) - - -def duf_downsample(x, kernel_size=13, scale=4): - """Downsampling with a Gaussian kernel, as used in the official DUF code. - - Args: - x (Tensor): Frames to be downsampled, with shape (b, t, c, h, w). - kernel_size (int): Kernel size. Default: 13. - scale (int): Downsampling factor. Supported scale: (2, 3, 4). - Default: 4. - - Returns: - Tensor: DUF downsampled frames. - """ - assert scale in (2, 3, 4), f'Only support scale (2, 3, 4), but got {scale}.' - - squeeze_flag = False - if x.ndim == 4: - squeeze_flag = True - x = x.unsqueeze(0) - b, t, c, h, w = x.size() - x = x.view(-1, 1, h, w) - pad_w, pad_h = kernel_size // 2 + scale * 2, kernel_size // 2 + scale * 2 - x = F.pad(x, (pad_w, pad_w, pad_h, pad_h), 'reflect') - - gaussian_filter = generate_gaussian_kernel(kernel_size, 0.4 * scale) - gaussian_filter = torch.from_numpy(gaussian_filter).type_as(x).unsqueeze(0).unsqueeze(0) - x = F.conv2d(x, gaussian_filter, stride=scale) - x = x[:, :, 2:-2, 2:-2] - x = x.view(b, t, c, x.size(2), x.size(3)) - if squeeze_flag: - x = x.squeeze(0) - return x diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/autovc/retrain_version/__init__.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/autovc/retrain_version/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/matthoffner/AudioCraft_Plus/docs/AUDIOGEN.md b/spaces/matthoffner/AudioCraft_Plus/docs/AUDIOGEN.md deleted file mode 100644 index a0ff481190fb52fe865aa66aaaa10176f7cf995c..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/docs/AUDIOGEN.md +++ /dev/null @@ -1,158 +0,0 @@ -# AudioGen: Textually-guided audio generation - -AudioCraft provides the code and a model re-implementing AudioGen, a [textually-guided audio generation][audiogen_arxiv] -model that performs text-to-sound generation. - -The provided AudioGen reimplementation follows the LM model architecture introduced in [MusicGen][musicgen_arxiv] -and is a single stage auto-regressive Transformer model trained over a 16kHz -EnCodec tokenizer with 4 codebooks sampled at 50 Hz. -This model variant reaches audio quality similar to the original implementation introduced in the AudioGen publication -while providing faster generation speed given the smaller frame rate. - -**Important note:** The provided models are NOT the original models used to report numbers in the -[AudioGen publication][audiogen_arxiv]. Refer to the model card to learn more about architectural changes. - -Listen to samples from the **original AudioGen implementation** in our [sample page][audiogen_samples]. - - -## Model Card - -See [the model card](../model_cards/AUDIOGEN_MODEL_CARD.md). - - -## Installation - -Please follow the AudioCraft installation instructions from the [README](../README.md). - -AudioCraft requires a GPU with at least 16 GB of memory for running inference with the medium-sized models (~1.5B parameters). 
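 - -A quick way to check the available GPU memory before loading the model is sketched below (illustrative only; it assumes PyTorch is installed and a CUDA device is visible): - -```python -import torch - -if torch.cuda.is_available(): - # audiogen-medium wants roughly 16 GB of GPU memory for inference - total = torch.cuda.get_device_properties(0).total_memory - print(f"GPU memory: {total / 1024**3:.1f} GiB") -else: - print("No CUDA device found; inference on CPU will be very slow.") -``` 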
 - -## API and usage - -We provide a simple API and one pre-trained model for AudioGen: - -`facebook/audiogen-medium`: 1.5B model, text to sound - [🤗 Hub](https://huggingface.co/facebook/audiogen-medium) - -You can play with AudioGen by running the jupyter notebook at [`demos/audiogen_demo.ipynb`](../demos/audiogen_demo.ipynb) locally (if you have a GPU). - -A quick example of using the API follows. - -```python -import torchaudio -from audiocraft.models import AudioGen -from audiocraft.data.audio import audio_write - -model = AudioGen.get_pretrained('facebook/audiogen-medium') -model.set_generation_params(duration=5) # generate 5 seconds. -descriptions = ['dog barking', 'siren of an emergency vehicle', 'footsteps in a corridor'] -wav = model.generate(descriptions) # generates 3 samples. - -for idx, one_wav in enumerate(wav): - # Will save under {idx}.wav, with loudness normalization at -14 db LUFS. - audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True) -``` - -## Training - -The [AudioGenSolver](../audiocraft/solvers/audiogen.py) implements the AudioGen's training pipeline -used to develop the released model. Note that this may not fully reproduce the results presented in the paper. -Similarly to MusicGen, it defines an autoregressive language modeling task over multiple streams of -discrete tokens extracted from a pre-trained EnCodec model (see [EnCodec documentation](./ENCODEC.md) -for more details on how to train such model) with dataset-specific changes for environmental sound -processing. - -Note that **we do NOT provide any of the datasets** used for training AudioGen. - -### Example configurations and grids - -We provide configurations to reproduce the released models and our research. -AudioGen solvers configuration are available in [config/solver/audiogen](../config/solver/audiogen). -The base training configuration used for the released models is the following: -[`solver=audiogen/audiogen_base_16khz`](../config/solver/audiogen/audiogen_base_16khz.yaml) - -Please find some example grids to train AudioGen at -[audiocraft/grids/audiogen](../audiocraft/grids/audiogen/). - -```shell -# text-to-sound -dora grid audiogen.audiogen_base_16khz -``` - -### Sound dataset and metadata - -AudioGen's underlying dataset is an AudioDataset augmented with description metadata. -The AudioGen dataset implementation expects the metadata to be available as `.json` files -at the same location as the audio files or through specified external folder. -Learn more in the [datasets section](./DATASETS.md). - -### Evaluation stage - -By default, evaluation stage is also computing the cross-entropy and the perplexity over the -evaluation dataset. Note that the objective metrics used for evaluation can be costly to run -and may require extra dependencies. Please refer to the [metrics documentation](./METRICS.md) -for more details on the requirements for each metric. - -We provide an off-the-shelf configuration to enable running the objective metrics -for audio generation in -[config/solver/audiogen/evaluation/objective_eval](../config/solver/audiogen/evaluation/objective_eval.yaml). - -One can then activate evaluation in the following way: -```shell -# using the configuration -dora run solver=audiogen/debug solver/audiogen/evaluation=objective_eval -# specifying each of the fields, e.g. to activate KL computation -dora run solver=audiogen/debug evaluate.metrics.kld=true -``` - -See [an example evaluation grid](../audiocraft/grids/audiogen/audiogen_pretrained_16khz_eval.py). - -### Generation stage - -The generation stage allows generating samples conditionally and/or unconditionally and performing -audio continuation (from a prompt). We currently support greedy sampling (argmax), sampling -from softmax with a given temperature, top-K and top-P (nucleus) sampling. The number of samples -generated and the batch size used are controlled by the `dataset.generate` configuration -while the other generation parameters are defined in `generate.lm`. - -```shell -# control sampling parameters -dora run solver=audiogen/debug generate.lm.gen_duration=5 generate.lm.use_sampling=true generate.lm.top_k=15 -``` - -## More information - -Refer to [MusicGen's instructions](./MUSICGEN.md). - -### Learn more - -Learn more about AudioCraft training pipelines in the [dedicated section](./TRAINING.md). - - -## Citation - -AudioGen -``` -@article{kreuk2022audiogen, - title={Audiogen: Textually guided audio generation}, - author={Kreuk, Felix and Synnaeve, Gabriel and Polyak, Adam and Singer, Uriel and D{\'e}fossez, Alexandre and Copet, Jade and Parikh, Devi and Taigman, Yaniv and Adi, Yossi}, - journal={arXiv preprint arXiv:2209.15352}, - year={2022} -} -``` - -MusicGen -``` -@article{copet2023simple, - title={Simple and Controllable Music Generation}, - author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez}, - year={2023}, - journal={arXiv preprint arXiv:2306.05284}, -} -``` - -## License - -See license information in the [model card](../model_cards/AUDIOGEN_MODEL_CARD.md). 
 - -[audiogen_arxiv]: https://arxiv.org/abs/2209.15352 -[musicgen_arxiv]: https://arxiv.org/abs/2306.05284 -[audiogen_samples]: https://felixkreuk.github.io/audiogen/ diff --git a/spaces/matthoffner/chatbot/components/Chat/ChatInput.tsx b/spaces/matthoffner/chatbot/components/Chat/ChatInput.tsx deleted file mode 100644 index 07370ffa06ab4f108d4e23798e8674d80176711c..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/components/Chat/ChatInput.tsx +++ /dev/null @@ -1,386 +0,0 @@ -import { - IconArrowDown, - IconBolt, - IconBrandGoogle, - IconPlayerStop, - IconRepeat, - IconSend, -} from '@tabler/icons-react'; -import { - KeyboardEvent, - MutableRefObject, - useCallback, - useContext, - useEffect, - useRef, - useState, -} from 'react'; - -import { useTranslation } from 'next-i18next'; - -import { Message } from '@/types/chat'; -import { Plugin } from '@/types/plugin'; -import { Prompt } from '@/types/prompt'; - -import HomeContext from '@/pages/api/home/home.context'; - -import { PluginSelect } from './PluginSelect'; -import { PromptList } from './PromptList'; -import { VariableModal } from './VariableModal'; - -interface Props { - onSend: (message: Message, plugin: Plugin | null) => void; - onRegenerate: () => void; - onScrollDownClick: () => void; - stopConversationRef: MutableRefObject<boolean>; - textareaRef: MutableRefObject<HTMLTextAreaElement | null>; - showScrollDownButton: boolean; -} - -export const ChatInput = ({ - onSend, - onRegenerate, - onScrollDownClick, - stopConversationRef, - textareaRef, - showScrollDownButton, -}: Props) => { - const { t } = useTranslation('chat'); - - const { - state: { selectedConversation, messageIsStreaming, prompts }, - - dispatch: homeDispatch, - } = useContext(HomeContext); - - const [content, setContent] = useState<string>(); - const [isTyping, setIsTyping] = useState<boolean>(false); - const [showPromptList, setShowPromptList] = useState(false); - const [activePromptIndex, setActivePromptIndex] = useState(0); - const [promptInputValue, setPromptInputValue] = useState(''); - const [variables, setVariables] = useState<string[]>([]); - const [isModalVisible, setIsModalVisible] = useState(false); - const [showPluginSelect, setShowPluginSelect] = useState(false); - const [plugin, setPlugin] = useState<Plugin | null>(null); - - const promptListRef = useRef<HTMLUListElement | null>(null); - - const filteredPrompts = prompts.filter((prompt) => - prompt.name.toLowerCase().includes(promptInputValue.toLowerCase()), - ); - - const handleChange = (e: React.ChangeEvent<HTMLTextAreaElement>) => { - const value = e.target.value; - const maxLength = selectedConversation?.model.maxLength; - - if (maxLength && value.length > maxLength) { - alert( - t( - `Message limit is {{maxLength}} characters. You have entered {{valueLength}} characters.`, - { maxLength, valueLength: value.length }, - ), - ); - return; - } - - setContent(value); - updatePromptListVisibility(value); - }; - - const handleSend = () => { - if (messageIsStreaming) { - return; - } - - if (!content) { - alert(t('Please enter a message')); - return; - } - - onSend({ role: 'user', content }, plugin); - setContent(''); - setPlugin(null); - - if (window.innerWidth < 640 && textareaRef && textareaRef.current) { - textareaRef.current.blur(); - } - }; - - const handleStopConversation = () => { - stopConversationRef.current = true; - setTimeout(() => { - stopConversationRef.current = false; - }, 1000); - }; - - const isMobile = () => { - const userAgent = - typeof window.navigator === 'undefined' ? 
'' : navigator.userAgent; - const mobileRegex = - /Android|webOS|iPhone|iPad|iPod|BlackBerry|IEMobile|Opera Mini|Mobile|mobile|CriOS/i; - return mobileRegex.test(userAgent); - }; - - const handleInitModal = () => { - const selectedPrompt = filteredPrompts[activePromptIndex]; - if (selectedPrompt) { - setContent((prevContent) => { - const newContent = prevContent?.replace( - /\/\w*$/, - selectedPrompt.content, - ); - return newContent; - }); - handlePromptSelect(selectedPrompt); - } - setShowPromptList(false); - }; - - const handleKeyDown = (e: KeyboardEvent) => { - if (showPromptList) { - if (e.key === 'ArrowDown') { - e.preventDefault(); - setActivePromptIndex((prevIndex) => - prevIndex < prompts.length - 1 ? prevIndex + 1 : prevIndex, - ); - } else if (e.key === 'ArrowUp') { - e.preventDefault(); - setActivePromptIndex((prevIndex) => - prevIndex > 0 ? prevIndex - 1 : prevIndex, - ); - } else if (e.key === 'Tab') { - e.preventDefault(); - setActivePromptIndex((prevIndex) => - prevIndex < prompts.length - 1 ? prevIndex + 1 : 0, - ); - } else if (e.key === 'Enter') { - e.preventDefault(); - handleInitModal(); - } else if (e.key === 'Escape') { - e.preventDefault(); - setShowPromptList(false); - } else { - setActivePromptIndex(0); - } - } else if (e.key === 'Enter' && !isTyping && !isMobile() && !e.shiftKey) { - e.preventDefault(); - handleSend(); - } else if (e.key === '/' && e.metaKey) { - e.preventDefault(); - setShowPluginSelect(!showPluginSelect); - } - }; - - const parseVariables = (content: string) => { - const regex = /{{(.*?)}}/g; - const foundVariables = []; - let match; - - while ((match = regex.exec(content)) !== null) { - foundVariables.push(match[1]); - } - - return foundVariables; - }; - - const updatePromptListVisibility = useCallback((text: string) => { - const match = text.match(/\/\w*$/); - - if (match) { - setShowPromptList(true); - setPromptInputValue(match[0].slice(1)); - } else { - setShowPromptList(false); - setPromptInputValue(''); - } - }, []); - - const handlePromptSelect = (prompt: Prompt) => { - const parsedVariables = parseVariables(prompt.content); - setVariables(parsedVariables); - - if (parsedVariables.length > 0) { - setIsModalVisible(true); - } else { - setContent((prevContent) => { - const updatedContent = prevContent?.replace(/\/\w*$/, prompt.content); - return updatedContent; - }); - updatePromptListVisibility(prompt.content); - } - }; - - const handleSubmit = (updatedVariables: string[]) => { - const newContent = content?.replace(/{{(.*?)}}/g, (match, variable) => { - const index = variables.indexOf(variable); - return updatedVariables[index]; - }); - - setContent(newContent); - - if (textareaRef && textareaRef.current) { - textareaRef.current.focus(); - } - }; - - useEffect(() => { - if (promptListRef.current) { - promptListRef.current.scrollTop = activePromptIndex * 30; - } - }, [activePromptIndex]); - - useEffect(() => { - if (textareaRef && textareaRef.current) { - textareaRef.current.style.height = 'inherit'; - textareaRef.current.style.height = `${textareaRef.current?.scrollHeight}px`; - textareaRef.current.style.overflow = `${ - textareaRef?.current?.scrollHeight > 400 ? 
'auto' : 'hidden' - }`; - } - }, [content]); - - useEffect(() => { - const handleOutsideClick = (e: MouseEvent) => { - if ( - promptListRef.current && - !promptListRef.current.contains(e.target as Node) - ) { - setShowPromptList(false); - } - }; - - window.addEventListener('click', handleOutsideClick); - - return () => { - window.removeEventListener('click', handleOutsideClick); - }; - }, []); - - return ( -
        -
        - {messageIsStreaming && ( - - )} - - {!messageIsStreaming && - selectedConversation && - selectedConversation.messages.length > 0 && ( - - )} - -
        - - - {showPluginSelect && ( -
        - { - if (e.key === 'Escape') { - e.preventDefault(); - setShowPluginSelect(false); - textareaRef.current?.focus(); - } - }} - onPluginChange={(plugin: Plugin) => { - setPlugin(plugin); - setShowPluginSelect(false); - - if (textareaRef && textareaRef.current) { - textareaRef.current.focus(); - } - }} - /> -
        - )} - -